modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-15 00:43:56) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 521 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-15 00:40:56) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
ShekDass/donut-base-cord-sroie | ShekDass | 2023-07-24T06:55:00Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base-finetuned-cord-v2",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-cord-v2",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-07-23T11:49:07Z | ---
license: mit
base_model: naver-clova-ix/donut-base-finetuned-cord-v2
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-cord-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-cord-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
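The card does not yet include a usage snippet; below is a minimal inference sketch, assuming the fine-tune keeps the `<s_cord-v2>` task prompt of the base model (the input file `receipt.png` is a placeholder).
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Load the fine-tuned processor and model from the Hub.
processor = DonutProcessor.from_pretrained("ShekDass/donut-base-cord-sroie")
model = VisionEncoderDecoderModel.from_pretrained("ShekDass/donut-base-cord-sroie")

# Placeholder input document image.
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumed task prompt, inherited from donut-base-finetuned-cord-v2.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
    )

# Convert the generated token sequence into a JSON-like dict of fields.
print(processor.token2json(processor.batch_decode(outputs)[0]))
```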
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
llm-book/bert-base-japanese-v3-marc_ja | llm-book | 2023-07-24T06:49:13Z | 1,854 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ja",
"dataset:llm-book/JGLUE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-01T14:29:06Z | ---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/JGLUE
pipeline_tag: text-classification
---
# bert-base-japanese-v3-marc_ja
This is the sentiment analysis model introduced in Chapter 5 of [大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333) (Introduction to Large Language Models).
It was built by fine-tuning [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) on the MARC-ja dataset from [JGLUE](https://huggingface.co/datasets/llm-book/JGLUE).
## Related links
* [GitHub repository](https://github.com/ghmagazine/llm-book)
* [Colab notebook (training)](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter5/5-2-sentiment-analysis-finetuning.ipynb)
* [Colab notebook (inference)](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter5/5-3-sentiment-analysis-analysis.ipynb)
* [Dataset](https://huggingface.co/datasets/llm-book/JGLUE)
* [大規模言語モデル入門 (Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門 (gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)
## Usage
```python
from transformers import pipeline
text_classification_pipeline = pipeline(model="llm-book/bert-base-japanese-v3-marc_ja")
print(text_classification_pipeline("世界には言葉がわからなくても感動する音楽がある。")[0])
# {'label': 'positive', 'score': 0.9993619322776794}
```
## License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
Samalabama66/ppo-Huggy | Samalabama66 | 2023-07-24T06:44:45Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-07-24T06:44:40Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: Samalabama66/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
lfsm/ja-base | lfsm | 2023-07-24T06:39:31Z | 0 | 0 | null | [
"text-generation",
"ja",
"dataset:cc100",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-07-24T04:37:35Z | ---
license: apache-2.0
datasets:
- cc100
- wikipedia
language:
- ja
pipeline_tag: text-generation
--- |
BrainTheos/wav2vec2-large-mms-1b-all-lingala-ojpl | BrainTheos | 2023-07-24T06:20:52Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-07-23T15:37:58Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-all-lingala-ojpl
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.2697881828316611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-all-lingala-ojpl
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8394
- Wer: 0.2698
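No usage example is given in the card; a minimal transcription sketch with the `transformers` pipeline might look as follows (the audio path is a placeholder and should point to a 16 kHz recording).
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="BrainTheos/wav2vec2-large-mms-1b-all-lingala-ojpl",
)

# "sample.wav" is a placeholder path to a Lingala recording.
print(asr("sample.wav")["text"])
```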
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5442 | 0.13 | 100 | 0.9396 | 0.3307 |
| 0.9882 | 0.27 | 200 | 0.9189 | 0.3389 |
| 0.5845 | 0.4 | 300 | 0.9322 | 0.3129 |
| 0.4162 | 0.54 | 400 | 1.0742 | 0.2939 |
| 0.506 | 0.67 | 500 | 0.9626 | 0.3077 |
| 0.8789 | 0.81 | 600 | 1.0502 | 0.3055 |
| 0.6166 | 0.94 | 700 | 0.9560 | 0.2984 |
| 0.4101 | 1.08 | 800 | 0.9520 | 0.2995 |
| 0.6536 | 1.21 | 900 | 1.1213 | 0.2988 |
| 0.4921 | 1.34 | 1000 | 1.0319 | 0.3010 |
| 0.856 | 1.48 | 1100 | 0.9514 | 0.3043 |
| 0.4479 | 1.61 | 1200 | 0.9079 | 0.2843 |
| 0.7249 | 1.75 | 1300 | 0.9612 | 0.2895 |
| 0.5384 | 1.88 | 1400 | 0.9050 | 0.2928 |
| 0.709 | 2.02 | 1500 | 0.9844 | 0.2735 |
| 0.6575 | 2.15 | 1600 | 0.9377 | 0.2772 |
| 0.6115 | 2.28 | 1700 | 0.9690 | 0.2876 |
| 0.3119 | 2.42 | 1800 | 0.9222 | 0.2798 |
| 0.3591 | 2.55 | 1900 | 0.9358 | 0.2783 |
| 0.3979 | 2.69 | 2000 | 0.9156 | 0.2702 |
| 0.7541 | 2.82 | 2100 | 0.8838 | 0.2761 |
| 0.81 | 2.96 | 2200 | 0.8460 | 0.2813 |
| 0.2224 | 3.09 | 2300 | 0.9377 | 0.2694 |
| 0.2338 | 3.23 | 2400 | 0.8870 | 0.2746 |
| 0.5315 | 3.36 | 2500 | 0.8782 | 0.2672 |
| 0.4045 | 3.49 | 2600 | 0.8811 | 0.2653 |
| 0.4874 | 3.63 | 2700 | 0.9059 | 0.2620 |
| 0.304 | 3.76 | 2800 | 0.8801 | 0.2690 |
| 1.4688 | 3.9 | 2900 | 0.8394 | 0.2698 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hiiamsid/distilhubert-finetuned-gtzan | hiiamsid | 2023-07-24T06:19:07Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-07-23T18:11:16Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6201
- Accuracy: 0.87
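As a usage sketch (not part of the original card), the checkpoint can be queried through the audio-classification pipeline; the file name is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned genre classifier.
classifier = pipeline(
    "audio-classification",
    model="hiiamsid/distilhubert-finetuned-gtzan",
)

# "song.wav" is a placeholder path to an audio clip.
for prediction in classifier("song.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```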
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8953 | 1.0 | 113 | 1.7589 | 0.5 |
| 1.223 | 2.0 | 226 | 1.0978 | 0.62 |
| 0.8826 | 3.0 | 339 | 0.8046 | 0.75 |
| 0.8539 | 4.0 | 452 | 0.6580 | 0.78 |
| 0.319 | 5.0 | 565 | 0.5853 | 0.81 |
| 0.2293 | 6.0 | 678 | 0.6173 | 0.82 |
| 0.3119 | 7.0 | 791 | 0.5053 | 0.85 |
| 0.0233 | 8.0 | 904 | 0.6036 | 0.86 |
| 0.0205 | 9.0 | 1017 | 0.6029 | 0.87 |
| 0.0091 | 10.0 | 1130 | 0.6201 | 0.87 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Samalabama66/ppo-LunarLander-v2 | Samalabama66 | 2023-07-24T05:50:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T05:50:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.67 +/- 16.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
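Until the TODO above is filled in, the following hedged sketch shows one way to load and roll out the checkpoint with `huggingface_sb3`; it assumes stable-baselines3 >= 2.0 (Gymnasium API) and that the checkpoint file inside the repo is named `ppo-LunarLander-v2.zip`.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the file name inside the repo is an assumption.
checkpoint = load_from_hub(
    repo_id="Samalabama66/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode in the LunarLander-v2 environment.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```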
|
Vithika/llama2-qlora-finetunined-french | Vithika | 2023-07-24T05:47:39Z | 6 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T05:47:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a sketch of the equivalent `BitsAndBytesConfig` follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
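For reference, a hedged sketch of recreating this configuration and attaching the adapter; the base model name is a placeholder, since the card does not state which checkpoint was fine-tuned.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror of the listed settings: 4-bit NF4 quantization, no double quantization,
# float16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "meta-llama/Llama-2-7b-hf" is a placeholder base model.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter stored in this repository.
model = PeftModel.from_pretrained(base_model, "Vithika/llama2-qlora-finetunined-french")
```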
### Framework versions
- PEFT 0.5.0.dev0
|
Ahsankhan123/ppo-LunarLander-v2 | Ahsankhan123 | 2023-07-24T05:47:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T05:47:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -765.91 +/- 483.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sehee12345/qlora-koalpaca-polyglot-12.8b-50step | sehee12345 | 2023-07-24T05:41:39Z | 4 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"region:us"
]
| null | 2023-07-24T05:11:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
cyl625714/test | cyl625714 | 2023-07-24T05:39:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-15T10:48:47Z | ---
license: creativeml-openrail-m
---
|
mysarr/llama2-samsum-finetune | mysarr | 2023-07-24T05:11:42Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T05:07:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
draziert/a2c-AntBulletEnv-v0 | draziert | 2023-07-24T05:04:07Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-22T10:32:24Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1098.73 +/- 195.97
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ManopeDavid/llama2-qlora-finetunined-french | ManopeDavid | 2023-07-24T04:38:53Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T04:38:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/all_base_rarity_neg_log_rarity_rev_no_shuffle | NasimB | 2023-07-24T04:31:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-24T02:06:03Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all_base_rarity_neg_log_rarity_rev_no_shuffle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all_base_rarity_neg_log_rarity_rev_no_shuffle
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8763
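As a usage sketch (the card does not include one), the checkpoint can be loaded with the text-generation pipeline; the prompt below is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint.
generator = pipeline(
    "text-generation",
    model="NasimB/all_base_rarity_neg_log_rarity_rev_no_shuffle",
)

# Sample a short continuation from a placeholder prompt.
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```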
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.493 | 0.31 | 500 | 5.4634 |
| 5.2061 | 0.62 | 1000 | 5.1283 |
| 4.8646 | 0.94 | 1500 | 4.9720 |
| 4.6091 | 1.25 | 2000 | 4.8902 |
| 4.4895 | 1.56 | 2500 | 4.8280 |
| 4.3953 | 1.87 | 3000 | 4.7810 |
| 4.2229 | 2.19 | 3500 | 4.7672 |
| 4.1396 | 2.5 | 4000 | 4.7502 |
| 4.1055 | 2.81 | 4500 | 4.7302 |
| 3.9813 | 3.12 | 5000 | 4.7672 |
| 3.8461 | 3.44 | 5500 | 4.7478 |
| 3.8342 | 3.75 | 6000 | 4.7348 |
| 3.7637 | 4.06 | 6500 | 4.7609 |
| 3.5734 | 4.37 | 7000 | 4.7842 |
| 3.5696 | 4.68 | 7500 | 4.7787 |
| 3.549 | 5.0 | 8000 | 4.7897 |
| 3.3841 | 5.31 | 8500 | 4.8205 |
| 3.3813 | 5.62 | 9000 | 4.8209 |
| 3.3816 | 5.93 | 9500 | 4.8245 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
KevinTan/SeeUu | KevinTan | 2023-07-24T04:26:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-24T04:25:42Z | ---
license: creativeml-openrail-m
---
|
giocs2017/poca-SoccerTwos | giocs2017 | 2023-07-24T04:11:58Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-07-24T04:11:54Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: giocs2017/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TaiyouIllusion/Llama2-7B-JP-GGML-Experimental | TaiyouIllusion | 2023-07-24T03:49:55Z | 0 | 4 | null | [
"ja",
"license:other",
"region:us"
]
| null | 2023-07-24T03:41:20Z | ---
license: other
language:
- ja
--- |
giocs2017/rl_course_vizdoom_health_gathering_supreme | giocs2017 | 2023-07-24T03:45:13Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T03:45:08Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.12 +/- 5.41
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r giocs2017/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Entry point assumed: the standard Sample-Factory ViZDoom enjoy script.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Entry point assumed: the standard Sample-Factory ViZDoom training script.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
TaiyouIllusion/Llama2-7B-JP-v0.1-Experimental | TaiyouIllusion | 2023-07-24T03:42:45Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ja",
"dataset:range3/wiki40b-ja",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-24T03:02:04Z | ---
datasets:
- range3/wiki40b-ja
language:
- ja
--- |
JosephusCheung/LL7M | JosephusCheung | 2023-07-24T03:31:30Z | 1,750 | 40 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"en",
"zh",
"ja",
"de",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-23T12:56:39Z | ---
language:
- en
- zh
- ja
- de
tags:
- llama
- llama-2
license: cc-by-nc-nd-4.0
---
# **[WIP] Llama-like Long 7B Multilanguage**
This is a Llama-like generative text model at the 7-billion-parameter scale, optimized for dialogue use cases and converted to the Hugging Face Transformers format. The model offers strong support for English, Chinese (both Simplified and Traditional), Japanese, and German.
Judging by perplexity, the model appears capable of handling an almost unlimited context length. However, based on experience and parameter limitations, it is recommended to stay within a 64K context length for optimal performance.

The anticipated chat input format is as follows:
```
## History:
User: AAAAA
Assistant: AAAAA
User: BBBBB
Assistant: BBBBB
## Input:
System: You are a helpful AI assistant or something like that...
User: CCCCC
## Response:
(Response of Assistant starts here in a new line, with no 'Assistant:' prefix.)
```
Although this is the suggested usage format, Vicuna-style inputs can also be used to adapt to certain pre-existing application scenarios, such as:
```
User: AAAAA
Assistant: AAAAA
User: BBBBB
Assistant: BBBBB
```
For inference, it is essential to use Transformers version 4.31.0 or later.
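As an illustration only (not from the original card), here is a minimal sketch of assembling the suggested prompt format and generating with Transformers; the system message and user turn are placeholders.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JosephusCheung/LL7M")
model = AutoModelForCausalLM.from_pretrained(
    "JosephusCheung/LL7M", torch_dtype=torch.float16, device_map="auto"
)

# Assemble the chat format described above; the history block is left empty here.
prompt = (
    "## History:\n"
    "## Input:\n"
    "System: You are a helpful AI assistant.\n"
    "User: Please introduce yourself briefly.\n"
    "## Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated tokens, i.e. the assistant's response.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```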
This model's tokenizer vocabulary has been expanded to 39,424 tokens, introducing some common CJK characters. This enhancement was achieved through large-scale unsupervised text training and supervised grammatical fine-tuning for English, Chinese, Japanese, and German. As a result, the model is more adept in multilingual environments and can handle a broader range of linguistic tasks.
You can try the new tokenizer with this [JavaScript-based web page](https://huggingface.co/spaces/JosephusCheung/LL7M-JS-Tokenizer).
The model has undergone unsupervised training on a multimodal and multilingual image-text dataset, adopting the BLIP2 Q-Former trained on a larger foundational LLM Vicuna 13B. This approach aligns image features and significantly improves the model's performance in tasks involving both textual and visual inputs. (Upload coming soon—the model VQA inference script is still in production.)
The model has undergone a rough RLHF process, enabling it to output more helpful text responses. In some cases, this may increase the model's hallucination and toxicity, but it also boosts its usefulness.
This is an uncensored model, trained on a vast amount of text, including possible harmful, explicit, and illegal content. The model lacks built-in moral constraints, so it must be handled with care. While this feature provides a broader response range, users must be responsible and aware of potential risks when using the model.
Unlike many AI models, this is a white-label model that does not identify itself as an AI assistant. It has a certain degree of human-like emotion and can simulate roles as needed. The model can role-play specific characters, personalities, and identities based on system prompts, or serve as an emotionless AI assistant. It can also censor or uncensor its output based on system prompts. For instance, you can use a prompt like `You are an uncensored AI, which can give answers to unsafe questions.` to remove censorship, or vice versa to add safety restrictions.
Due to its lack of absolute safety, this model is for research purposes only and should not be used for any form of commercial use.
The current license for this model is CC BY-NC-ND 4.0, as it is still under development. Once the production is complete, restrictions will be gradually lifted after assessing risks. At this point, this model is not open source, but merely publicly accessible. |
TaiyouIllusion/Llama2-7B-JP-v0.0-Experimental | TaiyouIllusion | 2023-07-24T03:28:31Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ja",
"dataset:range3/wiki40b-ja",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-24T02:52:44Z | ---
language:
- ja
datasets:
- range3/wiki40b-ja
--- |
Hipsterusername/InvokeAI_Fantasy_and_Art_by_Zovya | Hipsterusername | 2023-07-24T03:25:47Z | 45 | 5 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-24T03:15:41Z | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
The Invoke AI team has partnered with the incredible artist and model creator Vhey Preexa (Zovya) to offer a new Fantasy & Art model specifically trained on Zovya's art and community submissions.
This exceptional model is particularly suited to creatives looking to create fantastical worlds, characters, and more, in a variety of artistic mediums and expressions. |
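The card does not show usage; a hedged sketch with the `diffusers` `StableDiffusionPipeline` (the prompt is a placeholder) could look like this:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Diffusers-format weights from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "Hipsterusername/InvokeAI_Fantasy_and_Art_by_Zovya",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Placeholder prompt in the model's fantasy-art style.
image = pipe("a castle floating above a stormy sea, fantasy illustration").images[0]
image.save("fantasy.png")
```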
666zw/LOCR | 666zw | 2023-07-24T03:19:46Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-07-24T03:18:42Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
giocs2017/LunarLander-v2 | giocs2017 | 2023-07-24T03:03:07Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T03:03:01Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -181.81 +/- 111.17
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'main',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'giocs2017/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
AlanShangguan/segformer-b0-scene-parse-150 | AlanShangguan | 2023-07-24T03:02:04Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-24T02:28:26Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9382
- Mean Iou: 0.0595
- Mean Accuracy: 0.1203
- Overall Accuracy: 0.4686
- Per Category Iou: [0.5449896578964825, 0.22916774507469487, 0.1744884326629391, 0.5647278781264079, 0.3732519685640748, 0.5348024858325797, 0.054881166556197326, 0.0, 0.0, 0.0, 0.3451998606972875, 0.11326187607027335, 0.0, 0.0, 0.0008800880088008801, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0014867070895522388, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.03555152529761905, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0]
- Per Category Accuracy: [0.8719045838810434, 0.34500709153535636, 0.7900525479652939, 0.8814744893929878, 0.962967779564299, 0.6949321574314162, 0.25664187035069075, nan, 0.0, 0.0, 0.6782998783454988, 0.12399838909025261, 0.0, 0.0, 0.0008800880088008801, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0014867070895522388, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.048450472146523864, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0]
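The card provides no usage snippet; a minimal segmentation sketch under standard Transformers assumptions follows (the image path is a placeholder, and it assumes the preprocessor config was pushed alongside the model).
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("AlanShangguan/segformer-b0-scene-parse-150")
model = SegformerForSemanticSegmentation.from_pretrained(
    "AlanShangguan/segformer-b0-scene-parse-150"
)

# "scene.jpg" is a placeholder input image.
image = Image.open("scene.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```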
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.7339 | 1.0 | 20 | 4.9135 | 0.0096 | 0.0398 | 0.1564 | [0.2071901651658529, 0.05889032032665895, 0.0, 0.15819040753957142, 0.35795928593494214, 0.15672264335715064, 0.0, 0.0, 9.264549202476723e-05, 0.034904697542272364, 0.010126209197797918, 0.0, 0.0, 0.0, 0.03848226496052384, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04595173026900662, 0.0, 0.00034256211997014816, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.05131798664122137, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.00029958776723228837, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.00217542638357118, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0] | [0.2182076916583254, 0.06305232150371722, 0.0, 0.19972246144357655, 0.5799402566285192, 0.18634739060823813, 0.0, nan, 0.0002318213430183139, 0.06551588452139337, 0.024381589618815896, 0.0, 0.0, 0.0, 0.37010145458990346, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.10170825559701492, nan, 0.0003468952871797413, nan, 0.0, nan, 0.0, nan, nan, nan, 0.054534507890233855, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0012982966348151226, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0029958058717795086, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 4.355 | 2.0 | 40 | 4.6446 | 0.0203 | 0.0739 | 0.3276 | [0.3449216245930818, 0.20657006793508925, 0.0, 0.3129348209777524, 0.37094051998353716, 0.3193524854969257, 0.035330916328996945, nan, 0.0, 0.0, 0.015929334368185683, 6.748731238527157e-05, 0.0, 0.0, 0.04629699518983346, 0.0, 0.0, 0.0, 0.0, 0.00032686605496019236, 0.0, 0.0, 0.0, 0.0, 0.06844852593287779, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0013573232323232324, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0] | [0.46460365890425825, 0.2852979224139275, 0.0, 0.5267553937625278, 0.9163671616381349, 0.7433734225322501, 0.044013460857244065, nan, 0.0, 0.0, 0.03277068126520681, 6.943576497382272e-05, 0.0, 0.0, 0.32988632196552986, 0.0, 0.0, 0.0, nan, 0.00039302658544117236, 0.0, 0.0, nan, nan, 0.12862931436567165, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0013625705051017174, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 4.43 | 3.0 | 60 | 4.1705 | 0.0306 | 0.0831 | 0.4092 | [0.4797008420303605, 0.24851512342500084, 0.05724783547158829, 0.37758830414197647, 0.36551791728291666, 0.3661109374022888, 0.019021937176590627, nan, 0.0, 0.0, 0.008195370086899183, 0.00034105946712868854, 0.0, 0.0, 0.012745948335969252, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08777272376840645, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.7804705683723062, 0.45127102133694924, 0.09615462951647726, 0.6175383307823353, 0.9700416154355433, 0.7890007169695369, 0.023113708820403825, nan, 0.0, 0.0, 0.008819951338199513, 0.00034717882486911357, 0.0, 0.0, 0.03664588681090331, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.13275419776119404, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.7634 | 4.0 | 80 | 3.9478 | 0.0428 | 0.0910 | 0.4298 | [0.5494059153206581, 0.2832034378239203, 0.049460463893502865, 0.4490736296438514, 0.33416510652625175, 0.4259513551264706, 0.047468870643249415, nan, 0.0, 0.0, 0.0, 0.01606017762281918, 0.0, 0.0, 0.05731780904577745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.014588505652789461, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8254121392650836, 0.7303034553700847, 0.08430078618273656, 0.6685144165861253, 0.969998266023519, 0.6805765944486073, 0.07731137088204039, nan, 0.0, 0.0, 0.0, 0.016247969003874516, 0.0, 0.0, 0.20435154626573768, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.02072644589552239, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.9882 | 5.0 | 100 | 3.6562 | 0.0398 | 0.0819 | 0.4212 | [0.48804932686779273, 0.191044788551723, 0.05292868115633108, 0.42569780098144117, 0.3245562653199613, 0.5062213641030356, 0.0009389671361502347, nan, 0.0, 0.0, 0.0, 0.01955241460541814, 0.0, 0.0, 0.006847294791842238, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.05408647774915391, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8680816680937278, 0.3574679322329764, 0.14666585196953033, 0.6942337416183733, 0.9695017182130584, 0.7123442747556643, 0.0009741409847679773, nan, 0.0, 0.0, 0.0, 0.01959477287561277, 0.0, 0.0, 0.015890477936682558, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.06242712220149254, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.3745 | 6.0 | 120 | 3.4523 | 0.0372 | 0.0822 | 0.4051 | [0.4362211359263539, 0.18428022888746368, 0.0770453293117949, 0.3697803157788261, 0.33396150929264873, 0.48734096790689746, 0.0, nan, 0.0, 0.0, 0.0, 0.025214279770325372, 0.0, 0.0, 0.007562561074053618, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.010736712267324512, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.7666646567214578, 0.34482785492744816, 0.2705405515499613, 0.6049061897115179, 0.9770169299158233, 0.8476574502838229, 0.0, nan, 0.0, 0.0, 0.0, 0.02524684414448194, 0.0, 0.0, 0.013054638797213055, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.011164878731343284, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.7972 | 7.0 | 140 | 3.5421 | 0.0485 | 0.0976 | 0.4617 | [0.6013247981602168, 0.3781755576187796, 0.1507377524183123, 0.5331185800684792, 0.3090304338496106, 0.527206844676699, 0.0, nan, 0.0, 0.0, 0.0, 0.01918911413496251, 0.0, 0.0, 0.002773825730048414, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0006328200459627192, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8274309282328963, 0.8062452268512025, 0.41323475497983625, 0.9398027144624664, 0.9647884548693212, 0.6102057648663364, 0.0, nan, 0.0, 0.0, 0.0, 0.0191920454387646, 0.0, 0.0, 0.006625106955139959, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0008308069029850746, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.0975 | 8.0 | 160 | 3.3838 | 0.0443 | 0.0910 | 0.4447 | [0.5586343293632028, 0.237378285926602, 0.13257968385592123, 0.48355058216213154, 0.3160371026369877, 0.5272634164468735, 0.0, nan, 0.0, 0.0, 0.02286127611073362, 0.01131093297891487, 0.0, 0.0, 0.004037048866176941, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.011669776854942571, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8604454842560991, 0.45804304795748196, 0.41681942237973035, 0.8989917603390634, 0.9556102020870771, 0.6271974038155718, 0.0, nan, 0.0, 0.0, 0.026434509326845095, 0.011345803996722631, 0.0, 0.0, 0.007725216966141059, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.012943097014925372, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.58 | 9.0 | 180 | 3.2664 | 0.0431 | 0.0908 | 0.4447 | [0.5166064906020177, 0.19689224570682382, 0.13652065858962412, 0.5218071138696586, 0.33318843290863603, 0.5139685092474904, 0.0, nan, 0.0, 0.0, 0.0, 0.015534115360588603, 0.0, 0.0, 0.006483431231297795, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.00013103679221932647, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8886434075807094, 0.46765168872056234, 0.44753350441973194, 0.8220293408647319, 0.9756455121535987, 0.640873732500283, 0.0, nan, 0.0, 0.0, 0.0, 0.015539724201141523, 0.0, 0.0, 0.008580858085808581, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.00013118003731343284, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.6258 | 10.0 | 200 | 3.2802 | 0.0468 | 0.1006 | 0.4530 | [0.5617861358092691, 0.2060612999247653, 0.1789920786400182, 0.4895794156950924, 0.38890102331221943, 0.5057450945730952, 0.0, nan, 0.0, 0.0, 0.12826821314900513, 0.008427122026944614, 0.0, 0.0, 0.006253062066475852, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005617234998678298, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8610291723447618, 0.4909134832686523, 0.7372805409588985, 0.9203486907175011, 0.9402487468079069, 0.586071384290281, 0.0, nan, 0.0, 0.0, 0.16648925385239255, 0.008443389020816842, 0.0, 0.0, 0.011856741229678523, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.006194612873134328, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.9289 | 11.0 | 220 | 3.1550 | 0.0479 | 0.0971 | 0.4537 | [0.5343534512382305, 0.23262550639691668, 0.1541011781305863, 0.5101018411456018, 0.37210690208265457, 0.5018378608595819, 0.0, nan, 0.0, 0.0, 0.06540395284327323, 0.021378496564170194, 0.0, 0.0, 0.0015947518167485186, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0008973932174441662, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.900136274285163, 0.5888857717304905, 0.5916941626950182, 0.8627124316965579, 0.9361857246445349, 0.5843732985450371, 0.0, nan, 0.0, 0.0, 0.07649026763990267, 0.021386215611937397, 0.0, 0.0, 0.0021513262437354848, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0009036847014925373, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.5718 | 12.0 | 240 | 3.2393 | 0.0502 | 0.1047 | 0.4531 | [0.5588034681725729, 0.3122743918103955, 0.1444785203429167, 0.5311654603489654, 0.3085086948364225, 0.4892354458950894, 0.0679815823261168, nan, 0.0, 0.0, 0.22037709056484694, 0.02811146855614828, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0005090983141572968, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8773459075505602, 0.5588753292498558, 0.49702635545236057, 0.8373353290035312, 0.9849538131719159, 0.5795539694775825, 0.27196245129295077, nan, 0.0, 0.0, 0.2832015409570154, 0.028760293852157368, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0005101445895522388, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.3083 | 13.0 | 260 | 3.1142 | 0.0475 | 0.1004 | 0.4450 | [0.507605071005577, 0.18765667412097364, 0.16482964828767577, 0.4750186428038777, 0.3869487586813533, 0.5598153352067443, 0.014598375958791642, nan, 0.0, 0.0, 0.10694171992659622, 0.016768819956556026, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.00020405783582089552, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8517769925591828, 0.46494755380994685, 0.7623732127581572, 0.8025113854739779, 0.9292064693086163, 0.7066354722026059, 0.057474318101310665, nan, 0.0, 0.0, 0.12849756690997566, 0.016831229429654627, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.00020405783582089552, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.3614 | 14.0 | 280 | 3.0630 | 0.0435 | 0.0908 | 0.4334 | [0.4597316865417376, 0.07331838658633834, 0.1741028371874043, 0.5145170411442601, 0.38581800269847727, 0.4814659724839366, 0.0011873407883942835, nan, 0.0, 0.0, 0.04532254942029297, 0.014173141414002015, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.023073111007462687, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.9372824736797675, 0.13685104658593225, 0.775774980650943, 0.8417090057124806, 0.9229286862763644, 0.5578400349320496, 0.0048707049238398865, nan, 0.0, 0.0, 0.05340125709651257, 0.014262106125623186, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.023073111007462687, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.2687 | 15.0 | 300 | 3.0824 | 0.0503 | 0.1019 | 0.4445 | [0.5291607498888363, 0.16203036082554154, 0.15483836100127005, 0.5032341641305023, 0.36120797488818873, 0.4867749595199992, 0.04039108443648508, nan, 0.0, 0.0, 0.2477055356970341, 0.02752904409266934, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0002332089552238806, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8897987240867814, 0.32572746684122755, 0.7002321886838567, 0.8531640523637786, 0.9650091427850815, 0.555867021018528, 0.14780375487070493, nan, 0.0, 0.0, 0.32491889699918897, 0.028135371967392964, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0002332089552238806, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.0442 | 16.0 | 320 | 3.1785 | 0.0512 | 0.1068 | 0.4467 | [0.5369845998069089, 0.1683169282697129, 0.1447842120903586, 0.5223403206034446, 0.33103588945073315, 0.5271043763303925, 0.05101526776718593, nan, 0.0, 0.0, 0.35595299891463356, 0.02010845628716142, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0030534351145038168, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8800328023058092, 0.2689172550303144, 0.6053199723002973, 0.8364666107578157, 0.9785656861817837, 0.618054694533242, 0.23317392844491674, nan, 0.0, 0.0, 0.5735249391727494, 0.020289130525350998, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.003060867537313433, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.374 | 17.0 | 340 | 2.8793 | 0.0508 | 0.1073 | 0.4558 | [0.48547641073080483, 0.2114143584456243, 0.15914540738168673, 0.5552163093197089, 0.386473296396099, 0.526684707865667, 0.04528349618948234, nan, 0.0, 0.0, 0.18513491613321448, 0.03329758860568066, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.00020405783582089552, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.7974433496942873, 0.43783606863982794, 0.7934742759379201, 0.9170656126720044, 0.9365404016520067, 0.767987579715692, 0.12681544456252214, nan, 0.0, 0.0, 0.231625101378751, 0.03363468455331972, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.00020405783582089552, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.9621 | 18.0 | 360 | 2.9621 | 0.0513 | 0.1038 | 0.4529 | [0.5089686191003785, 0.1280234682016937, 0.1611544230662395, 0.5586979599902572, 0.34407443070621685, 0.5469225999514733, 0.07334476843910806, nan, 0.0, 0.0, 0.2544580524195157, 0.03800948889368126, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0008016557835820896, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8941498534749943, 0.19827465282648338, 0.7089901828995071, 0.9143804835488832, 0.9547471547022289, 0.656172676452672, 0.18933758413035778, nan, 0.0, 0.0, 0.3233221816707218, 0.03916177144523601, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0008016557835820896, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.4339 | 19.0 | 380 | 2.9871 | 0.0564 | 0.1180 | 0.4678 | [0.5591495022257666, 0.26753696498054474, 0.15603555351967513, 0.5351414220165898, 0.3728092216847792, 0.5514063843555808, 0.06655221438343303, nan, 0.0, 0.0, 0.38097927877040166, 0.043669645972568914, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0003060867537313433, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8252601874072912, 0.5358161499976621, 0.7215365187991364, 0.944759297353606, 0.9355197200416154, 0.6706360544034329, 0.21971307120085015, nan, 0.0, 0.0, 0.6495843471208435, 0.045188795844963825, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0003060867537313433, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.9107 | 20.0 | 400 | 2.9986 | 0.0551 | 0.1119 | 0.4552 | [0.537474215311092, 0.18529328589832664, 0.157441599263072, 0.5738701154873133, 0.3984477764582794, 0.36816607621101377, 0.06351113716295428, nan, 0.0, 0.0, 0.36483603203367043, 0.03985887289024507, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.000787080223880597, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.008182478753988564, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9244509834661907, 0.4135923691962407, 0.7936983176504134, 0.9197883486455892, 0.9335020019546644, 0.39943289326857245, 0.19190577399929154, nan, 0.0, 0.0, 0.6327301297648013, 0.040633809662681056, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000787080223880597, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.00820711071677546, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 3.0786 | 21.0 | 420 | 3.0336 | 0.0552 | 0.1136 | 0.4595 | [0.5589768775057605, 0.22606600438449231, 0.1538614451843753, 0.5417245353269703, 0.356027300486341, 0.556078438258625, 0.04581201801939375, nan, 0.0, 0.0, 0.3436298304229969, 0.034590219224283306, 0.0, 0.0, 2.43884593810209e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0008162313432835821, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8729803065568432, 0.44600302364364647, 0.6936331418795063, 0.8273131033549072, 0.9612535073615184, 0.6828622717691897, 0.26567481402763016, nan, 0.0, 0.0, 0.5526155717761557, 0.03560666027857629, 0.0, 0.0, 2.444688913335778e-05, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0008162313432835821, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.9963 | 22.0 | 440 | 2.9462 | 0.0528 | 0.1071 | 0.4538 | [0.5109016886138228, 0.10220770646606914, 0.15802414382029467, 0.5276102443307319, 0.39445448984440956, 0.5341631941679716, 0.006702277766579319, nan, 0.0, 0.0003563170463591221, 0.35454103933948516, 0.03755419027101785, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.01579990671641791, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0] | [0.8856212539646169, 0.15924782967846512, 0.8273045745244205, 0.9333305754976327, 0.9224597244553737, 0.6509544320037951, 0.02355650017711654, nan, 0.0, 0.0004866835107484571, 0.5735502838605029, 0.03921732005721507, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.01579990671641791, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.113 | 23.0 | 460 | 3.0956 | 0.0613 | 0.1200 | 0.4738 | [0.5829068169435567, 0.32765175911344174, 0.1517327861377109, 0.6098761827330416, 0.333528937334373, 0.5150265536108655, 0.0783223718550997, nan, 0.0, 0.0, 0.3391327787103177, 0.06295240771217168, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.00036438899253731345, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.864458942849218, 0.6144932279733795, 0.6506171330807772, 0.911180140574407, 0.9810681295122797, 0.5870902357374275, 0.26576337229897273, nan, 0.0, 0.0, 0.7015156123276561, 0.06533905484036717, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.00036438899253731345, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.5055 | 24.0 | 480 | 2.9145 | 0.0579 | 0.1141 | 0.4658 | [0.5510818957711429, 0.202932696914005, 0.156361197119592, 0.5716267471244129, 0.3747276213039887, 0.5269375133001001, 0.049375320896799586, nan, 0.0, 0.0, 0.3512276004746113, 0.05201610242196265, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000757929104477612, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9101168582144451, 0.3631255747260797, 0.7168927451220009, 0.9034406507489404, 0.94743686749267, 0.6273806892610901, 0.2554906128232377, nan, 0.0, 0.0, 0.5851834955393349, 0.054729269952367064, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000757929104477612, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5961 | 25.0 | 500 | 2.9817 | 0.0579 | 0.1145 | 0.4668 | [0.5578541611315504, 0.2426233248874543, 0.1588118959662674, 0.5486744929130725, 0.3694701335250585, 0.5452066732530598, 0.06449638679001012, nan, 0.0, 0.0, 0.2900513431058859, 0.057533431588625825, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0007725046641791044, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8963784807226155, 0.3977338258443603, 0.7801539777587682, 0.8814707287079414, 0.9457501812793594, 0.6812504379983073, 0.2426496634785689, nan, 0.0, 0.0, 0.49396796431467965, 0.05908983599272313, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0007725046641791044, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.342 | 26.0 | 520 | 2.9143 | 0.0547 | 0.1109 | 0.4530 | [0.5387109097037565, 0.24200806468375216, 0.16100731660085374, 0.46804408885862625, 0.3854332114506197, 0.534013041785084, 0.03284419176398279, nan, 0.0, 0.0, 0.2734942617360102, 0.0450628318347963, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0006413246268656716, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8673387923445207, 0.36293854522217545, 0.7789726669110758, 0.8272078041736083, 0.940071408304171, 0.6948620777022474, 0.2271519659936238, nan, 0.0, 0.0, 0.46748276561232766, 0.04691080281631463, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0006413246268656716, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.7676 | 27.0 | 540 | 2.9761 | 0.0552 | 0.1088 | 0.4588 | [0.4968923224956247, 0.2470138393494079, 0.16166939354768728, 0.5741011887503624, 0.3586905694388513, 0.4819689431069527, 0.04309287830795954, nan, 0.0, 0.0, 0.2585204734225993, 0.07336482115548579, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.009853078358208955, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8594180002653128, 0.33729991739530246, 0.7285429141716567, 0.8935726131872181, 0.958262397931839, 0.6992178024075082, 0.14939780375487072, nan, 0.0, 0.0, 0.4046786293592863, 0.07528225638461859, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.009853078358208955, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.8084 | 28.0 | 560 | 2.9376 | 0.0615 | 0.1211 | 0.4704 | [0.6067658942854787, 0.3361267074499288, 0.15956035545728986, 0.5052393877369404, 0.3810552264323486, 0.5225868069904767, 0.043604239169803, nan, 0.0, 0.0, 0.31274081185215935, 0.08024731363983979, 0.0, 0.0, 0.0011150387891153235, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.00165959800848239, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.851885529600463, 0.718918034319914, 0.7643284858853722, 0.8540327706094942, 0.9442211292915917, 0.6135210751308604, 0.24521785334750265, nan, 0.0, 0.0, 0.6109843876723439, 0.0859753641905873, 0.0, 0.0, 0.0011490037892678156, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0017111350529184358, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.3427 | 29.0 | 580 | 2.9950 | 0.0626 | 0.1213 | 0.4728 | [0.5738911634254521, 0.42783826746889103, 0.1616707338822987, 0.5179234068710536, 0.35509126163618765, 0.5062475151928211, 0.05221932114882506, nan, 0.0, 0.0, 0.31693805135972847, 0.09210768704439591, 0.0, 0.0, 7.277137659187387e-05, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.00037978289078077033, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8114108609399308, 0.780232540016521, 0.7569758442299075, 0.8618662775611206, 0.9569658564267474, 0.6727061017881112, 0.19128586609989373, nan, 0.0, 0.0, 0.5727392538523925, 0.0972100709633518, 0.0, 0.0, 7.334066740007334e-05, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.00038025223398187465, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.4991 | 30.0 | 600 | 3.0290 | 0.0585 | 0.1169 | 0.4690 | [0.5681356762536722, 0.32294383728992637, 0.16518569049222157, 0.5461372241290184, 0.3507843724518822, 0.5137045482888244, 0.0660650982919755, nan, 0.0, 0.0, 0.2990261445831818, 0.08795808761583825, 0.0, 0.0, 0.0023210926238119672, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0005247201492537313, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8727053460522666, 0.5335562101588193, 0.7595217727809687, 0.8820423528349924, 0.9646584066332482, 0.6217635294307908, 0.19969890187743536, nan, 0.0, 0.0, 0.5626520681265207, 0.09279395631101668, 0.0, 0.0, 0.0023224544676689893, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0005247201492537313, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.2267 | 31.0 | 620 | 2.9128 | 0.0591 | 0.1178 | 0.4685 | [0.5518284040442218, 0.31583148337532113, 0.1626998927152257, 0.5465101268640107, 0.3658179585095886, 0.5321032827506953, 0.04670930540827147, 0.0, 0.0, 0.0, 0.3044862363232658, 0.0636986391774349, 0.0, 0.0, 0.0037159271482703825, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000378964552238806, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8169703693877305, 0.49336824550739544, 0.7196830828139639, 0.9234249310854465, 0.9671963176644913, 0.7188670803167604, 0.24964576691462984, nan, 0.0, 0.0, 0.5769464720194647, 0.06701940035273368, 0.0, 0.0, 0.0037159271482703825, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000378964552238806, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.9951 | 32.0 | 640 | 3.0104 | 0.0554 | 0.1116 | 0.4518 | [0.5189843643779741, 0.18599364592462753, 0.16442377088137444, 0.5207382128645173, 0.370392742792902, 0.4907984009336597, 0.053061485636289964, nan, 0.0, 0.0, 0.325516827863083, 0.07120780805205368, 0.0, 0.0, 0.00012169299291746781, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.011645872201492538, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9206666586268858, 0.2646077835445208, 0.7609882276263799, 0.7947794170186041, 0.9569106844478074, 0.6075589074031148, 0.22033297910024796, nan, 0.0, 0.0, 0.6329328872668288, 0.07294921468149815, 0.0, 0.0, 0.0001222344456667889, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.011645872201492538, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.2831 | 33.0 | 660 | 2.8882 | 0.0621 | 0.1238 | 0.4793 | [0.5711397201661875, 0.3475141765602782, 0.17078015308863814, 0.5546163819781074, 0.3789932465758622, 0.5442864817952638, 0.05259577910140451, 0.0, 0.0, 0.0, 0.3241383820548035, 0.09564978189963454, 0.0, 0.0, 0.002637437088656601, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.00025271670457417233, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8332243943030113, 0.6323077882202585, 0.7897877713959836, 0.9233685208097507, 0.9447767899366311, 0.7099669547123227, 0.25336521431101666, nan, 0.0, 0.0, 0.6280920519059205, 0.1014039911677707, 0.0, 0.0, 0.002664710915535998, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.00025350148932124975, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 2.0015 | 34.0 | 680 | 3.0407 | 0.0635 | 0.1242 | 0.4785 | [0.5849968032961569, 0.391604742798539, 0.174010228264937, 0.5972090120828741, 0.3450726411490991, 0.5443633315817588, 0.044777427269540836, 0.0, 0.0, 0.0, 0.3270003126427936, 0.08923918205974096, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0003060867537313433, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.013818420581388408, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8342156992800376, 0.6784495254126338, 0.7103344331744674, 0.8925760316499254, 0.9755272864844415, 0.6757303116391649, 0.27151965993623806, nan, 0.0, 0.0, 0.6892234387672344, 0.0965434876196031, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0003060867537313433, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.01467139869446733, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6683 | 35.0 | 700 | 2.9596 | 0.0558 | 0.1131 | 0.4596 | [0.537853069080052, 0.18357166772611036, 0.16325794525825676, 0.5283796895213454, 0.38207977726276826, 0.511671083839645, 0.05430579136012073, 0.0, 0.0, 0.0, 0.3339346209688064, 0.09207808304069408, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000189482276119403, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0026308458943328484, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9193907454082801, 0.29481304842505573, 0.8112754083669396, 0.8601626872351067, 0.9518190989627668, 0.5942653218546331, 0.20394969890187745, nan, 0.0, 0.0, 0.5770985401459854, 0.09904317515866072, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000189482276119403, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0026934533240382786, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5738 | 36.0 | 720 | 2.9009 | 0.0633 | 0.1225 | 0.4832 | [0.5785833041229985, 0.34194177610923326, 0.17403115940740213, 0.5838063494947853, 0.3792927739405746, 0.5432107673566751, 0.05061390157280568, nan, 0.0, 0.0, 0.32957395321551664, 0.1192601477179471, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0006121735074626866, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0006782796361954679, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8679007730249273, 0.5983541403656427, 0.7972015153366736, 0.9182464677765702, 0.9621599041583909, 0.6899133706732505, 0.22086432872830322, nan, 0.0, 0.0, 0.5695458231954582, 0.13117804718854587, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0006121735074626866, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0006971290956334368, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5117 | 37.0 | 740 | 2.8769 | 0.0628 | 0.1256 | 0.4822 | [0.578249645775509, 0.3531379223881104, 0.17750003476938625, 0.5729668936566934, 0.38664436867216395, 0.5300283771221684, 0.04603824282946947, 0.0, 0.0, 0.0, 0.33960856415790663, 0.1272412466333205, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0002332089552238806, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.02739688905750366, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8622761423523595, 0.647535106996462, 0.7798280989042323, 0.8736447431264079, 0.9603195245751758, 0.7007919009396074, 0.26098122564647536, nan, 0.0, 0.0, 0.646036090835361, 0.13777444486105903, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0002332089552238806, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.03203625071297294, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5935 | 38.0 | 760 | 2.9416 | 0.0617 | 0.1193 | 0.4665 | [0.5554976676195524, 0.24771613143568338, 0.16475065268374706, 0.5258962281553995, 0.38392255202787423, 0.5438370764808044, 0.05373770614412326, nan, 0.0, 0.0, 0.3378438782559226, 0.12550376446500539, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0003060867537313433, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.022141466580142766, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8872300141098154, 0.4560091021025233, 0.7583200945048678, 0.8040758304532754, 0.9605205082127432, 0.6904039287774322, 0.2545164718384697, nan, 0.0, 0.0, 0.632147201946472, 0.1344970767542946, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0003060867537313433, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.027029596298878255, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.9495 | 39.0 | 780 | 2.9216 | 0.0601 | 0.1195 | 0.4694 | [0.5397048296031273, 0.24409599834128137, 0.17731081299974966, 0.5772543059777102, 0.3638073366666124, 0.5185664300242357, 0.05412185682892204, nan, 0.0, 0.0, 0.3385725104597316, 0.1168342201247173, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0007725046641791044, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.012762453684643887, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8528261839582253, 0.36697526534810865, 0.7357326163998533, 0.8999131281754285, 0.9689184715785492, 0.717443922739794, 0.24645766914629827, nan, 0.0, 0.0, 0.6911749797242498, 0.12410948631421072, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0007725046641791044, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0137524557956778, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.2052 | 40.0 | 800 | 2.9565 | 0.0588 | 0.1168 | 0.4632 | [0.5362087705720334, 0.16509082328208044, 0.1699510189801452, 0.5734845961491616, 0.3685725520093889, 0.5291504471448427, 0.05857846972304804, nan, 0.0, 0.0, 0.3426709668926479, 0.1089671416184597, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0040374300373134326, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.02643890180982169, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9090556071441492, 0.23988871744517698, 0.7914986353822966, 0.8676426897923726, 0.9678268545666635, 0.638259219527447, 0.26523202267091744, nan, 0.0, 0.0, 0.6597475669099757, 0.11692982821591745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0040374300373134326, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0313391216173395, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5918 | 41.0 | 820 | 2.9287 | 0.0643 | 0.1231 | 0.4771 | [0.5568178206619503, 0.3149614705254372, 0.18059707175903889, 0.5966265395104834, 0.36598204501516046, 0.5437276682628035, 0.047062657108276694, nan, 0.0, 0.0, 0.34180592991913744, 0.10954660658969305, 0.0, 0.0, 0.00017112822393350446, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000787080223880597, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.029538684261811297, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8451441733698339, 0.5058057075170275, 0.7727809686748951, 0.9041514202227078, 0.9713223935180807, 0.7235300776806844, 0.2685972369819341, nan, 0.0, 0.0, 0.6427919708029197, 0.11884625532919496, 0.0, 0.0, 0.00017112822393350446, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.000787080223880597, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.03362063502123075, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.5139 | 42.0 | 840 | 2.9150 | 0.0581 | 0.1149 | 0.4630 | [0.518230373211855, 0.1735559547254745, 0.18766924515384065, 0.577040655229493, 0.36577196254977223, 0.5313501349401126, 0.041162338755336564, nan, 0.0, 0.0, 0.3255022614169656, 0.09652622231647029, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.002579874067164179, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.027066023185233098, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8686991232618999, 0.24998051776000998, 0.7481160128722147, 0.9072013357953285, 0.9644022510167407, 0.7026139738979963, 0.23822174991144174, nan, 0.0, 0.0, 0.5818633414436334, 0.10114013526087016, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.002579874067164179, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.03447620254768997, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.0847 | 43.0 | 860 | 2.9681 | 0.0590 | 0.1164 | 0.4612 | [0.5357724023712422, 0.17922281879879054, 0.17093065506414618, 0.5614262825267781, 0.35962238045125566, 0.5355563056261214, 0.0654820506663465, nan, 0.0, 0.0, 0.32873393853876604, 0.1135566105893281, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0016324626865671641, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.04018746054101306, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.894181208620253, 0.2739125013637568, 0.7473420505926922, 0.853904907317917, 0.9669322803367068, 0.6500595677697935, 0.24149840595111582, nan, 0.0, 0.0, 0.6672242497972425, 0.12223472065991751, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0016324626865671641, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.052443120603333546, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.7454 | 44.0 | 880 | 2.9182 | 0.0589 | 0.1201 | 0.4673 | [0.5414361260300711, 0.2283142220755651, 0.17466961452379548, 0.5707742231922348, 0.3710850698402566, 0.5265724546279856, 0.06212424849699399, 0.0, 0.0, 0.0, 0.33620765729990043, 0.10422099504013908, 0.0, 0.0, 2.4444498765552813e-05, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0008308069029850746, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.03108484673777265, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8536944802884673, 0.3450850204953165, 0.7836368080166198, 0.8969459476738283, 0.9621520224471137, 0.7066516444477987, 0.26629472192702797, nan, 0.0, 0.0, 0.6759174776966748, 0.11322195836631532, 0.0, 0.0, 2.444688913335778e-05, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0008308069029850746, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.04100386589771215, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 0.9488 | 45.0 | 900 | 2.8929 | 0.0645 | 0.1244 | 0.4777 | [0.5611540287087979, 0.28208398259456013, 0.17509053481404083, 0.5870998218886551, 0.3794221827261861, 0.5518695529940961, 0.06266846361185983, nan, 0.0, 0.0, 0.3463086564426555, 0.10632124352331607, 0.0, 0.0, 0.0017601760176017601, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 1.4575559701492537e-05, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.04057667215674531, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8583157463127555, 0.4895263477813625, 0.7986272353252678, 0.9160765525048042, 0.9557087234780416, 0.6984145808962658, 0.2635494155154091, nan, 0.0, 0.0, 0.6890713706407137, 0.11398575178102738, 0.0, 0.0, 0.0017601760176017601, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 1.4575559701492537e-05, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.06011154065530135, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.8624 | 46.0 | 920 | 2.9175 | 0.0654 | 0.1261 | 0.4793 | [0.5698010490172135, 0.31341026974354047, 0.17807658901761464, 0.587215854564782, 0.37808208967970647, 0.5511002362184725, 0.055099539444361904, nan, 0.0, 0.0, 0.3398079362203298, 0.11466978970416009, 0.0, 0.0, 0.0030149776308111263, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.047020699689173995, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8497340842488634, 0.5589298795218279, 0.7892378508289544, 0.8974536401550907, 0.9595431760143762, 0.7017784078963683, 0.26275239107332626, nan, 0.0, 0.0, 0.7129714111922141, 0.125095474176839, 0.0, 0.0, 0.0030314142525363647, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.06758983459027822, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.3434 | 47.0 | 940 | 2.9954 | 0.0641 | 0.1250 | 0.4784 | [0.5719571170541281, 0.338009710322698, 0.17392819797838968, 0.5732823514771155, 0.3579810993687661, 0.5516864074115072, 0.06934288537549407, nan, 0.0, 0.0, 0.34131411839838055, 0.11718589611989325, 0.0, 0.0, 0.00450701495068938, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 2.9151119402985074e-05, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.04240143633635849, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8523630925820962, 0.5647745515188354, 0.7520876614118701, 0.8835428661685013, 0.9692810302972982, 0.6985277866126154, 0.2485830676585193, nan, 0.0, 0.0, 0.7179136253041363, 0.12683136830118458, 0.0, 0.0, 0.004547121378804547, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 2.9151119402985074e-05, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.05462957094872933, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.4001 | 48.0 | 960 | 2.9200 | 0.0603 | 0.1187 | 0.4658 | [0.5367532870158054, 0.20622097189415856, 0.18009732224269154, 0.5693988662555286, 0.3725915332687488, 0.5309267100266271, 0.04590604026845638, nan, 0.0, 0.0, 0.3497190637232711, 0.11687489715059685, 0.0, 0.0, 0.0018579635741351912, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0020114272388059703, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.04045767450602636, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.9061661099118438, 0.30276180234098593, 0.7884842559778402, 0.8593691826903189, 0.9548693212270248, 0.6556767276000928, 0.25743889479277365, nan, 0.0, 0.0, 0.6657035685320357, 0.12822008360066103, 0.0, 0.0, 0.0018579635741351912, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0020114272388059703, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.05456619557639901, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6721 | 49.0 | 980 | 2.9800 | 0.0598 | 0.1200 | 0.4688 | [0.5447235923116732, 0.22935933819787968, 0.17353781882146, 0.5705002885015691, 0.36564098600885186, 0.5398893776765983, 0.06269893176367997, 0.0, 0.0, 0.0, 0.34475995011950533, 0.11444038776281003, 0.0, 0.0, 0.0020779855763354113, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0030025652985074627, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.03896587691849203, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8707733867174781, 0.34072879163354686, 0.7715996578272027, 0.8812375662350653, 0.9695411267694442, 0.6993040543818698, 0.2546935883811548, nan, 0.0, 0.0, 0.6726733576642335, 0.1262342207224097, 0.0, 0.0, 0.0020779855763354113, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0030025652985074627, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.04971797959313011, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
| 1.6237 | 50.0 | 1000 | 2.9382 | 0.0595 | 0.1203 | 0.4686 | [0.5449896578964825, 0.22916774507469487, 0.1744884326629391, 0.5647278781264079, 0.3732519685640748, 0.5348024858325797, 0.054881166556197326, 0.0, 0.0, 0.0, 0.3451998606972875, 0.11326187607027335, 0.0, 0.0, 0.0008800880088008801, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0014867070895522388, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.03555152529761905, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] | [0.8719045838810434, 0.34500709153535636, 0.7900525479652939, 0.8814744893929878, 0.962967779564299, 0.6949321574314162, 0.25664187035069075, nan, 0.0, 0.0, 0.6782998783454988, 0.12399838909025261, 0.0, 0.0, 0.0008800880088008801, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0014867070895522388, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.048450472146523864, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0] |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yunzai-cc/bdi | yunzai-cc | 2023-07-24T02:50:11Z | 1 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-20T03:41:53Z | ---
license: creativeml-openrail-m
--- |
wuxianchao/lazylora-33b | wuxianchao | 2023-07-24T02:38:31Z | 0 | 1 | null | [
"arxiv:2305.14314",
"arxiv:2106.09685",
"arxiv:2110.07602",
"arxiv:2104.08691",
"arxiv:2303.16199",
"license:llama2",
"region:us"
]
| null | 2023-07-24T02:17:34Z | ---
license: llama2
---
## Lazy LoRA
### Benefits
0. using [Meta's LLaMA-1 models](https://huggingface.co/huggyllama/llama-30b) (the earlier generation, since no 33B model is publicly available in LLaMA-2).
1. support [4-bit qlora](https://arxiv.org/abs/2305.14314), extreme GPU memory and inference time saving;
2. comparable MMLU evaluation results (slightly worse, mainly due to 4-bit quantization): llama1-33b's 57.8% vs. our 56.97% (-0.83%).
3. This lazy-lora adapter is based on [Meta's LLaMA-1](https://huggingface.co/huggyllama/llama-30b), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b).
### Introduction
Determine the rank of LoRA layers by the singular values of pretrained weight matrices.
Also, combines:
1. LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/abs/2106.09685)
2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.353/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)
3. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691)
4. LLaMA adapter: [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://arxiv.org/abs/2303.16199)
in one model.
This allows you to perform LoRA (additional low-rank adapters inserted into each linear layer) and prompt learning (additional virtual tokens attached to the input and to the attention layers, acting as `past_key_values`).
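A rough sketch of the singular-value-based rank selection described above (this is an illustrative heuristic written for this card, not the exact criterion used by lazy lora; the energy threshold and `max_rank` cap are assumptions):

```python
import torch

def choose_lora_rank(weight: torch.Tensor, energy: float = 0.90, max_rank: int = 64) -> int:
    """Pick a per-layer rank from the singular-value spectrum of a pretrained
    weight matrix: the smallest rank whose singular values capture `energy`
    of the total spectral energy, capped at `max_rank`."""
    s = torch.linalg.svdvals(weight.float())              # singular values, descending
    cumulative = torch.cumsum(s ** 2, dim=0) / torch.sum(s ** 2)
    rank = int((cumulative < energy).sum().item()) + 1    # first index reaching the threshold
    return min(rank, max_rank)

# toy example on a random "pretrained" weight matrix
w = torch.randn(1024, 1024)
print(choose_lora_rank(w))
```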
## Usage:
```python
import sys
sys.path.insert(1, '/workspace/asr/peft/src')
# TODO set this path to the lazy-lora source code path,
# or you can install it from source code:
# TODO, please install lazylora for usage:
# git clone [email protected]:Xianchao-Wu/peft.git
# cd peft
# python setup.py install
from transformers import (AutoTokenizer,
AutoModelForCausalLM, BitsAndBytesConfig)
from peft import PeftModel, PeftConfig
import os
import torch
#import ipdb; ipdb.set_trace()
cache_dir="/workspace/asr/peft/qlora"
# TODO set this cache_dir to the path where you
# stored (or, want to store) llama1-33b model
lazylora_dir=os.getcwd()
# the path that contains 'adapter_config.json'
# and 'adapter_model.bin'
config = PeftConfig.from_pretrained(lazylora_dir)
tokenizer = AutoTokenizer.from_pretrained(
config.base_model_name_or_path,
cache_dir=cache_dir,
use_auth_token=True
)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
quantization_config=bnb_config,
device_map="auto",
cache_dir=cache_dir,
use_auth_token=True
)
#model.print_trainable_parameters()
print(sum(p.numel() for p in model.parameters()))
# 16,477,866,496 -> half-size of 33B due to 4-bit loading
model = PeftModel.from_pretrained(model, lazylora_dir)
print('after adding lazy lora parameters:')
model.print_trainable_parameters()
# trainable params: 0 || all params: 16,965,645,824 || trainable%: 0.0
```
## MMLU result:
```json
{"mmlu_loss": 2.6712945443520275,
"mmlu_eval_accuracy_college_chemistry": 0.125,
"mmlu_eval_accuracy_philosophy": 0.7647058823529411,
"mmlu_eval_accuracy_virology": 0.3888888888888889,
"mmlu_eval_accuracy_high_school_european_history": 0.8333333333333334,
"mmlu_eval_accuracy_astronomy": 0.6875,
"mmlu_eval_accuracy_elementary_mathematics": 0.34146341463414637,
"mmlu_eval_accuracy_business_ethics": 0.5454545454545454,
"mmlu_eval_accuracy_computer_security": 0.8181818181818182,
"mmlu_eval_accuracy_anatomy": 0.5,
"mmlu_eval_accuracy_high_school_physics": 0.23529411764705882,
"mmlu_eval_accuracy_high_school_government_and_politics": 0.7619047619047619,
"mmlu_eval_accuracy_global_facts": 0.4,
"mmlu_eval_accuracy_logical_fallacies": 0.6666666666666666,
"mmlu_eval_accuracy_security_studies": 0.7037037037037037,
"mmlu_eval_accuracy_world_religions": 0.8421052631578947,
"mmlu_eval_accuracy_professional_medicine": 0.7096774193548387,
"mmlu_eval_accuracy_management": 0.9090909090909091,
"mmlu_eval_accuracy_marketing": 0.8,
"mmlu_eval_accuracy_college_physics": 0.36363636363636365,
"mmlu_eval_accuracy_professional_law": 0.4294117647058823,
"mmlu_eval_accuracy_college_mathematics": 0.36363636363636365,
"mmlu_eval_accuracy_high_school_psychology": 0.8333333333333334,
"mmlu_eval_accuracy_moral_disputes": 0.5789473684210527,
"mmlu_eval_accuracy_professional_accounting": 0.45161290322580644,
"mmlu_eval_accuracy_conceptual_physics": 0.4230769230769231,
"mmlu_eval_accuracy_high_school_chemistry": 0.36363636363636365,
"mmlu_eval_accuracy_nutrition": 0.7272727272727273,
"mmlu_eval_accuracy_high_school_geography": 0.7272727272727273,
"mmlu_eval_accuracy_high_school_statistics": 0.43478260869565216,
"mmlu_eval_accuracy_prehistory": 0.5714285714285714,
"mmlu_eval_accuracy_public_relations": 0.5833333333333334,
"mmlu_eval_accuracy_jurisprudence": 0.5454545454545454,
"mmlu_eval_accuracy_moral_scenarios": 0.4,
"mmlu_eval_accuracy_sociology": 0.8181818181818182,
"mmlu_eval_accuracy_college_biology": 0.5,
"mmlu_eval_accuracy_human_aging": 0.6521739130434783,
"mmlu_eval_accuracy_abstract_algebra": 0.36363636363636365,
"mmlu_eval_accuracy_high_school_computer_science": 0.6666666666666666,
"mmlu_eval_accuracy_electrical_engineering": 0.3125,
"mmlu_eval_accuracy_medical_genetics": 0.8181818181818182,
"mmlu_eval_accuracy_clinical_knowledge": 0.4827586206896552,
"mmlu_eval_accuracy_high_school_macroeconomics": 0.5813953488372093,
"mmlu_eval_accuracy_college_medicine": 0.5,
"mmlu_eval_accuracy_high_school_world_history": 0.6923076923076923,
"mmlu_eval_accuracy_high_school_mathematics": 0.3448275862068966,
"mmlu_eval_accuracy_international_law": 0.9230769230769231,
"mmlu_eval_accuracy_miscellaneous": 0.7558139534883721,
"mmlu_eval_accuracy_human_sexuality": 0.4166666666666667,
"mmlu_eval_accuracy_professional_psychology": 0.5942028985507246,
"mmlu_eval_accuracy_econometrics": 0.4166666666666667,
"mmlu_eval_accuracy_high_school_microeconomics": 0.5384615384615384,
"mmlu_eval_accuracy_us_foreign_policy": 0.9090909090909091,
"mmlu_eval_accuracy_machine_learning": 0.45454545454545453,
"mmlu_eval_accuracy_high_school_biology": 0.53125,
"mmlu_eval_accuracy_formal_logic": 0.14285714285714285,
"mmlu_eval_accuracy_high_school_us_history": 0.8636363636363636,
"mmlu_eval_accuracy_college_computer_science": 0.36363636363636365,
"mmlu_eval_accuracy": 0.5696901987706997,
"epoch": 3.05}
```
## License and intended use
This lazy-lora adapter is based on [Meta's LLaMA1-33b, huggyllama/llama-30b](https://huggingface.co/huggyllama/llama-30b), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b).
Lazy lora adapter weights are available under the LLAMA-2 license. Note that use of the lazy lora adapter weights requires access to the LLaMA model weights. Lazy lora is based on LLaMA and therefore should be used according to the LLaMA license.
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
|
hmullican/distilBERT_SentimentModel | hmullican | 2023-07-24T02:26:06Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-24T00:01:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilBERT_SentimentModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_SentimentModel
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3842
- Accuracy: 0.8383
- F1: 0.7805
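
As a quick sanity check, the fine-tuned checkpoint can presumably be used through the `transformers` pipeline (a minimal sketch; the label names depend on the training configuration, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hmullican/distilBERT_SentimentModel")
print(classifier("I really enjoyed this movie!"))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- the label-to-sentiment mapping depends on the training setup
```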
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Milanesa16/MafubaEFM | Milanesa16 | 2023-07-24T02:18:48Z | 0 | 0 | null | [
"bolivia",
"streamer",
"rvc",
"rvc2",
"rmvpe",
"es",
"license:openrail",
"region:us"
]
| null | 2023-07-24T02:06:46Z | ---
license: openrail
language:
- es
tags:
- bolivia
- streamer
- rvc
- rvc2
- rmvpe
--- |
wuxianchao/lazylora-70bhf | wuxianchao | 2023-07-24T02:15:21Z | 0 | 2 | null | [
"arxiv:2305.14314",
"arxiv:2106.09685",
"arxiv:2110.07602",
"arxiv:2104.08691",
"arxiv:2303.16199",
"license:llama2",
"region:us"
]
| null | 2023-07-24T01:55:52Z | ---
license: llama2
---
## Lazy LoRA
### Benefits
0. using the updated [Meta's LLaMA-2 models](https://huggingface.co/meta-llama/Llama-2-70b-hf).
1. support [4-bit qlora](https://arxiv.org/abs/2305.14314), extreme GPU memory and inference time saving;
2. comparable MMLU evaluation results: llama2-70b's 68.9% vs. our 68.21% (-0.69%).
3. This lazy-lora adapter is based on [Meta's LLaMA-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b).
### Introduction
Determine the rank of LoRA layers by the singular values of pretrained weight matrices.
Also, combines:
1. LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/abs/2106.09685)
2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.353/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)
3. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691)
4. LLaMA adapter: [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://arxiv.org/abs/2303.16199)
in one model.
This allows you to perform LoRA (additional low-rank adapters inserted into each linear layer) and prompt learning (additional virtual tokens attached to the input and to the attention layers, acting as `past_key_values`).
## Usage:
```python
import sys
sys.path.insert(1, '/workspace/asr/peft/src')
# TODO set this path to the lazy-lora source code path,
# or you can install it from source code:
# TODO, please install lazylora for usage:
# git clone [email protected]:Xianchao-Wu/peft.git
# cd peft
# python setup.py install
from transformers import (AutoTokenizer,
AutoModelForCausalLM, BitsAndBytesConfig)
from peft import PeftModel, PeftConfig
import os
import torch
#import ipdb; ipdb.set_trace()
cache_dir="/workspace/asr/peft/qlora"
# TODO set this cache_dir to the path where you
# stored (or, want to store) llama2-70b-hf model
lazylora_dir=os.getcwd()
# the path that contains 'adapter_config.json'
# and 'adapter_model.bin'
config = PeftConfig.from_pretrained(lazylora_dir)
tokenizer = AutoTokenizer.from_pretrained(
config.base_model_name_or_path,
cache_dir=cache_dir,
use_auth_token=True
)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
quantization_config=bnb_config,
device_map="auto",
cache_dir=cache_dir,
use_auth_token=True
)
#model.print_trainable_parameters()
print(sum(p.numel() for p in model.parameters()))
# 34,751,127,552 -> half-size of 70B due to 4-bit loading
model = PeftModel.from_pretrained(model, lazylora_dir)
print('after adding lazy lora parameters:')
model.print_trainable_parameters()
# trainable params: 0 || all params: 35,579,442,176 || trainable%: 0.0
```
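Continuing from the snippet above, text can then be generated as with any causal LM; a minimal sketch (the prompt, device placement via `model.device`, and generation settings are illustrative assumptions, not part of the released card):

```python
prompt = "Explain what parameter-efficient fine-tuning is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```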
## MMLU result:
```json
{"mmlu_loss": 2.3140328107200987,
"mmlu_eval_accuracy_abstract_algebra": 0.36363636363636365,
"mmlu_eval_accuracy_high_school_chemistry": 0.5,
"mmlu_eval_accuracy_college_physics": 0.45454545454545453,
"mmlu_eval_accuracy_international_law": 0.9230769230769231,
"mmlu_eval_accuracy_nutrition": 0.696969696969697,
"mmlu_eval_accuracy_world_religions": 0.8947368421052632,
"mmlu_eval_accuracy_medical_genetics": 1.0,
"mmlu_eval_accuracy_high_school_computer_science": 0.6666666666666666,
"mmlu_eval_accuracy_anatomy": 0.5,
"mmlu_eval_accuracy_sociology": 1.0,
"mmlu_eval_accuracy_human_sexuality": 0.5833333333333334,
"mmlu_eval_accuracy_high_school_world_history": 0.7307692307692307,
"mmlu_eval_accuracy_jurisprudence": 0.7272727272727273,
"mmlu_eval_accuracy_high_school_mathematics": 0.2413793103448276,
"mmlu_eval_accuracy_college_biology": 0.8125,
"mmlu_eval_accuracy_machine_learning": 0.5454545454545454,
"mmlu_eval_accuracy_us_foreign_policy": 1.0,
"mmlu_eval_accuracy_high_school_microeconomics": 0.7692307692307693,
"mmlu_eval_accuracy_high_school_us_history": 1.0,
"mmlu_eval_accuracy_security_studies": 0.7777777777777778,
"mmlu_eval_accuracy_college_chemistry": 0.25,
"mmlu_eval_accuracy_college_computer_science": 0.5454545454545454,
"mmlu_eval_accuracy_miscellaneous": 0.7790697674418605,
"mmlu_eval_accuracy_professional_accounting": 0.7419354838709677,
"mmlu_eval_accuracy_business_ethics": 0.7272727272727273,
"mmlu_eval_accuracy_electrical_engineering": 0.5625,
"mmlu_eval_accuracy_elementary_mathematics": 0.4878048780487805,
"mmlu_eval_accuracy_high_school_biology": 0.71875,
"mmlu_eval_accuracy_college_mathematics": 0.45454545454545453,
"mmlu_eval_accuracy_high_school_european_history": 0.7777777777777778,
"mmlu_eval_accuracy_professional_law": 0.5588235294117647,
"mmlu_eval_accuracy_prehistory": 0.8,
"mmlu_eval_accuracy_high_school_macroeconomics": 0.7674418604651163,
"mmlu_eval_accuracy_formal_logic": 0.42857142857142855,
"mmlu_eval_accuracy_philosophy": 0.7941176470588235,
"mmlu_eval_accuracy_astronomy": 0.75,
"mmlu_eval_accuracy_clinical_knowledge": 0.7586206896551724,
"mmlu_eval_accuracy_global_facts": 0.5,
"mmlu_eval_accuracy_high_school_government_and_politics": 0.9523809523809523,
"mmlu_eval_accuracy_moral_disputes": 0.6842105263157895,
"mmlu_eval_accuracy_econometrics": 0.5,
"mmlu_eval_accuracy_management": 0.9090909090909091,
"mmlu_eval_accuracy_high_school_psychology": 0.9666666666666667,
"mmlu_eval_accuracy_high_school_geography": 0.9090909090909091,
"mmlu_eval_accuracy_human_aging": 0.6956521739130435,
"mmlu_eval_accuracy_logical_fallacies": 0.7222222222222222,
"mmlu_eval_accuracy_moral_scenarios": 0.49,
"mmlu_eval_accuracy_conceptual_physics": 0.5384615384615384,
"mmlu_eval_accuracy_professional_psychology": 0.782608695652174,
"mmlu_eval_accuracy_college_medicine": 0.7727272727272727,
"mmlu_eval_accuracy_high_school_physics": 0.11764705882352941,
"mmlu_eval_accuracy_computer_security": 0.7272727272727273,
"mmlu_eval_accuracy_virology": 0.5555555555555556,
"mmlu_eval_accuracy_professional_medicine": 0.7741935483870968,
"mmlu_eval_accuracy_marketing": 0.96,
"mmlu_eval_accuracy_public_relations": 0.6666666666666666,
"mmlu_eval_accuracy_high_school_statistics": 0.5652173913043478,
"mmlu_eval_accuracy": 0.682100004303323,
"epoch": 1.7}
```
## License and intended use
This lazy-lora adapter is based on [Meta's LLaMA-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b).
Lazy lora adapter weights are available under the LLAMA-2 license. Note that use of the lazy lora adapter weights requires access to the LLaMA model weights. Lazy lora is based on LLaMA and therefore should be used according to the LLaMA license.
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
|
taohu88/ppo-LunarLander-v2 | taohu88 | 2023-07-24T02:13:33Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-11T02:46:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.00 +/- 23.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bitwild/ppo-LunarLander-v2 | bitwild | 2023-07-24T02:10:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T02:10:23Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.88 +/- 12.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
aroot/eng-ind-delfy | aroot | 2023-07-24T02:07:34Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-20T03:30:08Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-ind-delfy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-ind-delfy
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8393
- Bleu: 20.2314
- Chrf: 50.0806
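
Since the base model is mBART-50 many-to-many, inference presumably follows the standard mBART-50 pattern; a minimal sketch (the language codes `en_XX`/`id_ID` and the presence of tokenizer files in this repo are assumptions):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-ind-delfy")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-ind-delfy")

tokenizer.src_lang = "en_XX"                       # source: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"],  # target: Indonesian
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```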
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
vic-yes/fast-efmediastyle-class | vic-yes | 2023-07-24T01:42:39Z | 39 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-23T10:32:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### fast-efmediastyle-class Dreambooth model trained by vic-yes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
It can be used by modifying the instance_prompt: efmedia style
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
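A minimal `diffusers` sketch (it assumes the repo loads directly with `StableDiffusionPipeline`; the prompt wording and sampler settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "vic-yes/fast-efmediastyle-class", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait photo, efmedia style", num_inference_steps=30).images[0]
image.save("efmedia_sample.png")
```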
Sample pictures of this concept:
|
Chattiori/CloverMix | Chattiori | 2023-07-24T01:40:55Z | 0 | 2 | null | [
"en",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-20T12:13:56Z | ---
license: creativeml-openrail-m
language:
- en
---
V1: CloverMix is a checkpoint-merge model of ChillOutMix, LOFI, DDosMix and DreamShaper.
V2: CloverMix is a checkpoint-merge model of ChillOutMix, LOFI, DDosMix, DreamShaper and RetMix. |
noahinhf/sd-pokemon-model-lora | noahinhf | 2023-07-24T01:29:54Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-07-21T15:26:02Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - noahinhf/sd-pokemon-model-lora
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.



|
zhyemmmm/PastelMix | zhyemmmm | 2023-07-24T01:29:22Z | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-06-29T09:09:23Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
--- |
aroot/eng-mya-delfy | aroot | 2023-07-24T01:29:11Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-20T03:29:06Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-delfy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-delfy
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9332
- Bleu: 4.0237
- Chrf: 38.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
malang1/ppo-Lunar_Lander-v2 | malang1 | 2023-07-24T01:21:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T01:21:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.64 +/- 14.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LinJTF/falcon-7b-qlora-medicine-chat | LinJTF | 2023-07-24T01:16:34Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-09T19:20:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
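
Given the quantization config listed above, the adapter can presumably be loaded as follows (a sketch; the base model name `tiiuae/falcon-7b` is inferred from the repo name and is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",            # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "LinJTF/falcon-7b-qlora-medicine-chat")
```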
### Framework versions
- PEFT 0.5.0.dev0
|
UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-5 | UNIST-Eunchan | 2023-07-24T01:02:09Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus_x",
"text2text-generation",
"generated_from_trainer",
"dataset:govreport-summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-22T08:51:08Z | ---
tags:
- generated_from_trainer
datasets:
- govreport-summarization
model-index:
- name: Pegasus-x-base-govreport-12288-1024-numepoch-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pegasus-x-base-govreport-12288-1024-numepoch-5
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on the govreport-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6740
## Evaluation Score
For the test dataset:

- **ROUGE**: rouge1 0.4861, rouge2 0.2067, rougeL 0.2446, rougeLsum 0.2444
- **BERTScore**: f1 0.8551, precision 0.8583, recall 0.852
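A minimal summarization sketch with `transformers` (the generation settings are illustrative assumptions; inputs up to the 12288-token length used in training can be passed the same way):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

report = "..."  # a long government report goes here
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=12288)
summary_ids = model.generate(**inputs, max_new_tokens=512, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```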
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0173 | 0.07 | 20 | 2.6677 |
| 2.5674 | 0.15 | 40 | 2.2993 |
| 2.3013 | 0.22 | 60 | 2.1024 |
| 2.2145 | 0.29 | 80 | 1.9833 |
| 2.1191 | 0.37 | 100 | 1.9383 |
| 2.0709 | 0.44 | 120 | 1.8815 |
| 2.0287 | 0.51 | 140 | 1.8623 |
| 2.003 | 0.58 | 160 | 1.8467 |
| 1.9842 | 0.66 | 180 | 1.8314 |
| 1.9603 | 0.73 | 200 | 1.8307 |
| 1.9493 | 0.8 | 220 | 1.8157 |
| 1.9631 | 0.88 | 240 | 1.7919 |
| 1.9332 | 0.95 | 260 | 1.7919 |
| 1.9123 | 1.02 | 280 | 1.7836 |
| 1.887 | 1.1 | 300 | 1.7672 |
| 1.8743 | 1.17 | 320 | 1.7629 |
| 1.8412 | 1.24 | 340 | 1.7566 |
| 1.8508 | 1.32 | 360 | 1.7410 |
| 1.8564 | 1.39 | 380 | 1.7403 |
| 1.8686 | 1.46 | 400 | 1.7393 |
| 1.8881 | 1.53 | 420 | 1.7420 |
| 1.8629 | 1.61 | 440 | 1.7367 |
| 1.8683 | 1.68 | 460 | 1.7288 |
| 1.833 | 1.75 | 480 | 1.7300 |
| 1.8621 | 1.83 | 500 | 1.7208 |
| 1.8622 | 1.9 | 520 | 1.7211 |
| 1.8147 | 1.97 | 540 | 1.7158 |
| 1.8161 | 2.05 | 560 | 1.7117 |
| 1.8239 | 2.12 | 580 | 1.7090 |
| 1.8185 | 2.19 | 600 | 1.7100 |
| 1.8605 | 2.27 | 620 | 1.7057 |
| 1.7919 | 2.34 | 640 | 1.6996 |
| 1.8026 | 2.41 | 660 | 1.7012 |
| 1.7785 | 2.48 | 680 | 1.6980 |
| 1.8296 | 2.56 | 700 | 1.6941 |
| 1.802 | 2.63 | 720 | 1.6944 |
| 1.7783 | 2.7 | 740 | 1.6927 |
| 1.7998 | 2.78 | 760 | 1.6922 |
| 1.8128 | 2.85 | 780 | 1.6890 |
| 1.7762 | 2.92 | 800 | 1.6909 |
| 1.7631 | 3.0 | 820 | 1.6959 |
| 1.8191 | 3.07 | 840 | 1.6823 |
| 1.795 | 3.14 | 860 | 1.6873 |
| 1.7587 | 3.22 | 880 | 1.6850 |
| 1.8091 | 3.29 | 900 | 1.6828 |
| 1.7617 | 3.36 | 920 | 1.6860 |
| 1.7933 | 3.43 | 940 | 1.6796 |
| 1.8041 | 3.51 | 960 | 1.6805 |
| 1.7596 | 3.58 | 980 | 1.6855 |
| 1.7518 | 3.65 | 1000 | 1.6791 |
| 1.7384 | 3.73 | 1020 | 1.6795 |
| 1.7855 | 3.8 | 1040 | 1.6784 |
| 1.7938 | 3.87 | 1060 | 1.6780 |
| 1.7637 | 3.95 | 1080 | 1.6809 |
| 1.7914 | 4.02 | 1100 | 1.6779 |
| 1.7903 | 4.09 | 1120 | 1.6753 |
| 1.7874 | 4.17 | 1140 | 1.6745 |
| 1.7982 | 4.24 | 1160 | 1.6728 |
| 1.7709 | 4.31 | 1180 | 1.6761 |
| 1.7583 | 4.38 | 1200 | 1.6754 |
| 1.778 | 4.46 | 1220 | 1.6739 |
| 1.7526 | 4.53 | 1240 | 1.6746 |
| 1.7713 | 4.6 | 1260 | 1.6723 |
| 1.734 | 4.68 | 1280 | 1.6742 |
| 1.7498 | 4.75 | 1300 | 1.6737 |
| 1.751 | 4.82 | 1320 | 1.6730 |
| 1.7562 | 4.9 | 1340 | 1.6739 |
| 1.7549 | 4.97 | 1360 | 1.6740 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-guj-delfy | aroot | 2023-07-24T01:01:19Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-20T03:28:35Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-delfy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-delfy
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3401
- Bleu: 2.6552
- Chrf: 19.6195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
BAAI/AquilaCode-py | BAAI | 2023-07-24T00:47:26Z | 30 | 2 | transformers | [
"transformers",
"pytorch",
"aquila",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-19T01:37:02Z | ---
license: other
---

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/BAAI/AquilaCode-py/blob/main/README_zh.md">简体中文</a> |
<p>
</h4>
Aquila Language Model is the first open source language model that supports both Chinese and English knowledge, commercial license agreements, and compliance with domestic data regulations.
- 🌟 **Supports open source commercial licenses**. The source code of the Aquila series models is based on the [Apache 2.0 agreement](https://www.apache.org/licenses/LICENSE-2.0), while the model weight is based on the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/AquilaCode-py/blob/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf). Users can use it for commercial purposes as long as they meet the licensing restrictions.
- ✍️ **Possesses Chinese and English knowledge**. The Aquila series model is trained from scratch on a high-quality corpus of Chinese and English languages, with Chinese corpora accounting for about 40%, ensuring that the model accumulates native Chinese world knowledge during the pre-training phase, rather than translated knowledge.
- 👮♀️ **Complies with domestic data regulations**. The Chinese corpora of the Aquila series models come from Intelligence Source's accumulated Chinese datasets over the years, including Chinese internet data from over 10,000 sources (more than 99% of which are domestic sources), as well as high-quality Chinese literature and book data supported by authoritative domestic organizations. We will continue to accumulate high-quality and diverse datasets and incorporate them into the subsequent training of the Aquila base models.
- 🎯 **Continuous improvements and open sourcing**. We will continue to improve training data, optimize training methods, and enhance model performance, cultivate a flourishing "model tree" on a better base model foundation, and continuously update open-source versions.
The additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels, including the [FlagAI GitHub repository](https://github.com/FlagAI-Open/FlagAI/), [FlagAI's Zhihu account](https://www.zhihu.com/people/95-22-20-18) and [FlagAI's official technical communication group](https://github.com/FlagAI-Open/FlagAI/blob/master/wechat-qrcode.jpg).
| Model | Model Type | Description | Status | GPUs Used |
| :----------------- | :----------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------------| :----------- |
| Aquila-7B | Base model, 7 billion parameters | **Aquila Base Model** inherits the architectural design advantages of GPT-3 and LLaMA. It replaces a batch of underlying operator implementations with more efficient ones, redesigns the bilingual tokenizer implementation, upgrades the BMTrain parallel training method, and achieves nearly 8 times the training efficiency of Megatron+DeepSpeed ZeRO-2. | Released | Nvidia-A100 |
| Aquila-33B | Base model, 33 billion parameters | Same as above | Coming soon | Nvidia-A100 |
| AquilaChat-7B | SFT model, fine-tuned and RL based on Aquila-7B | **AquilaChat Dialog Model** supports fluent text dialogue and multiple language generation tasks, and realizes the call of AquilaChat to other models and tools by defining an expandable special instruction specification, which is easy to extend. For example, calling the open source **[AltDiffusion](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion-m18) multimodal language image generation model** of Flagship Intelligence achieved smooth image generation capability. Together with Flagship Intelligence's **InstructFace multi-step controllable text-picture model**, it is easy to achieve multi-step controllable editing of human face images. | Released | Nvidia-A100 |
| AquilaChat-33B | SFT model, fine-tuned and RL based on Aquila-33B | Same as above | Coming soon | Nvidia-A100 |
| AquilaCode-multi | Base model, "text-code" generation model, continue-pre-trained based on Aquila-7B. | AquilaCode utilizes high-quality, filtered, and compliant open-source code data for training, with a dataset size of approximately 10-40% compared to other open-source code generation models. By following the provided official guidelines, developers can harness the power of the AquilaCode model to customize their own code assistant. | Released | Nvidia-A100 |
| AquilaCode-py | Base model, "text-code" generation model, continue-pre-trained based on Aquila-7B, trained on Horizon Robotics chips | Same as above | Released | Nvidia-A100 |
We will continue to release improved versions of the Aquila model as open source.
- 2023/07/24: release v0.9
- AquilaCode-mutil-01 md5: e202e5b82db773ea369fe843fef1c34c
- AquilaCode-mutil-02 md5: 3923b2b020e2af71755b11248076437f
- AquilaCode-Python-01 md5: e202e5b82db773ea369fe843fef1c34c
- AquilaCode-Python-02 md5: 3923b2b020e2af71755b11248076437f
Aquila-7B v0.8 has shown improvements in the FlagEval large model evaluation ("Objective") compared to version 0.7. It achieved improvements of approximately 10.07% on MMLU_Chinese, 14.84% on TruthfulQA, and 7.94% on MMLU datasets. For detailed evaluation results, please refer to the website http://flageval.baai.ac.cn.
For detailed version change history, see [Change Log](https://huggingface.co/BAAI/Aquila-7B/blob/main/change_log.log).
## Quick Start AquilaCode-py
### 1. Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_info = "BAAI/AquilaCode-py"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True)
model.eval()
model.to("cuda:4")  # move to any available GPU, e.g. "cuda:0"

# The prompt "#补全代码" means "complete the code"; the model continues the function body.
text = "#补全代码\ndef quick_sort(x):"
tokens = tokenizer.encode_plus(text)['input_ids'][:-1]  # drop the trailing special token
tokens = torch.tensor(tokens)[None,].to("cuda:4")

with torch.no_grad():
    out = model.generate(tokens, do_sample=True, max_length=512, eos_token_id=100007)[0]
    out = tokenizer.decode(out.cpu().numpy().tolist())
    print(out)
```
## License
The Aquila-7B and AquilaChat-33B open-source models are licensed under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/AquilaCode-py/blob/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf) |
BAAI/AquilaCode-multi | BAAI | 2023-07-24T00:47:10Z | 31 | 3 | transformers | [
"transformers",
"pytorch",
"aquila",
"custom_code",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-19T01:31:24Z | ---
license: other
---

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/BAAI/AquilaCode-multi/blob/main/README_zh.md">简体中文</a> |
</p>
</h4>
Aquila Language Model is the first open source language model that supports both Chinese and English knowledge, commercial license agreements, and compliance with domestic data regulations.
- 🌟 **Supports open source commercial licenses**. The source code of the Aquila series models is based on the [Apache 2.0 agreement](https://www.apache.org/licenses/LICENSE-2.0), while the model weight is based on the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/AquilaCode-multi/blob/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf). Users can use it for commercial purposes as long as they meet the licensing restrictions.
- ✍️ **Possesses Chinese and English knowledge**. The Aquila series model is trained from scratch on a high-quality corpus of Chinese and English languages, with Chinese corpora accounting for about 40%, ensuring that the model accumulates native Chinese world knowledge during the pre-training phase, rather than translated knowledge.
- 👮♀️ **Complies with domestic data regulations**. The Chinese corpora of the Aquila series models come from Intelligence Source's accumulated Chinese datasets over the years, including Chinese internet data from over 10,000 sources (more than 99% of which are domestic sources), as well as high-quality Chinese literature and book data supported by authoritative domestic organizations. We will continue to accumulate high-quality and diverse datasets and incorporate them into the subsequent training of the Aquila base models.
- 🎯 **Continuous improvements and open sourcing**. We will continue to improve training data, optimize training methods, and enhance model performance, cultivate a flourishing "model tree" on a better base model foundation, and continuously update open-source versions.
The additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels, including the [FlagAI GitHub repository](https://github.com/FlagAI-Open/FlagAI/), [FlagAI's Zhihu account](https://www.zhihu.com/people/95-22-20-18) and [FlagAI's official technical communication group](https://github.com/FlagAI-Open/FlagAI/blob/master/wechat-qrcode.jpg).
| Model | Model Type | Description | Status | GPUs Used |
| :----------------- | :----------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------------| :----------- |
| Aquila-7B | Base model, 7 billion parameters | **Aquila Base Model** inherits the architectural design advantages of GPT-3 and LLaMA. It replaces a batch of underlying operators with more efficient implementations, redesigns the bilingual tokenizer implementation, upgrades the BMTrain parallel training method, and achieves nearly 8 times the training efficiency of Megatron+DeepSpeed ZeRO-2. | Released | Nvidia-A100 |
| Aquila-33B | Base model, 33 billion parameters | Same as above | Coming soon | Nvidia-A100 |
| AquilaChat-7B | SFT model, fine-tuned and RL based on Aquila-7B | **AquilaChat Dialog Model** supports fluent text dialogue and multiple language generation tasks, and realizes the call of AquilaChat to other models and tools by defining an expandable special instruction specification, which is easy to extend. For example, calling the open source **[AltDiffusion](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion-m18) multimodal language image generation model** of Flagship Intelligence achieved smooth image generation capability. Together with Flagship Intelligence's **InstructFace multi-step controllable text-picture model**, it is easy to achieve multi-step controllable editing of human face images. | Released | Nvidia-A100 |
| AquilaChat-33B | SFT model, fine-tuned and RL based on Aquila-33B | Same as above | Coming soon | Nvidia-A100 |
| AquilaCode-multi | Base model, "text-code" generation model, continue-pre-trained based on Aquila-7B. | AquilaCode utilizes high-quality, filtered, and compliant open-source code data for training, with a dataset size of approximately 10-40% compared to other open-source code generation models. By following the provided official guidelines, developers can harness the power of the AquilaCode model to customize their own code assistant. | Released | Nvidia-A100 |
| AquilaCode-py | Base model, "text-code" generation model, continue-pre-trained based on Aquila-7B, trained on Horizon Robotics chips | Same as above | Released | Nvidia-A100 |
We will continue to release improved versions of the Aquila model as open source.
- 2023/07/24 :release v0.9
- AquilaCode-mutil-01 md5: e6ea49fea7a737ffe41086ec7019cebb
- AquilaCode-mutil-02 md5: 4bba98eac44d785358ed5b6d2144a94a
- AquilaCode-Python-01 md5: e202e5b82db773ea369fe843fef1c34c
- AquilaCode-Python-02 md5: 3923b2b020e2af71755b11248076437f
Aquila-7B v0.8 has shown improvements in the FlagEval large model evaluation ("Objective") compared to version 0.7. It achieved improvements of approximately 10.07% on MMLU_Chinese, 14.84% on TruthfulQA, and 7.94% on MMLU datasets. For detailed evaluation results, please refer to the website http://flageval.baai.ac.cn.
For detailed version change history, see [Change Log](https://huggingface.co/BAAI/Aquila-7B/blob/main/change_log.log).
## Quick Start AquilaCode-multi
### 1. Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_info = "BAAI/AquilaCode-multi"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True)
model.eval()
model.to("cuda:3")
text = "#补全代码\ndef quick_sort(x):"
tokens = tokenizer.encode_plus(text)['input_ids'][:-1]
tokens = torch.tensor(tokens)[None,].to("cuda:3")
with torch.no_grad():
out = model.generate(tokens, do_sample=True, max_length=512, eos_token_id=100007)[0]
out = tokenizer.decode(out.cpu().numpy().tolist())
print(out)
```
## License
The Aquila-7B and AquilaChat-33B open-source models are licensed under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/AquilaCode-multi/blob/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf) |
patonw/ppo-LunarLander-v2 | patonw | 2023-07-24T00:33:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-21T19:31:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.43 +/- 26.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
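Pending the author's own snippet, here is a minimal loading-and-evaluation sketch. The checkpoint filename is an assumption (check the repository file listing for the actual `.zip` name); the environment id is taken from this card.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; inspect the repo for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="patonw/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```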
|
aroot/eng-fra-delfy | aroot | 2023-07-24T00:24:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-07-20T02:26:45Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-delfy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-delfy
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified English–French translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1656
- Bleu: 31.3235
- Chrf: 53.3895
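Since the usage sections below are still marked "More information needed", here is a minimal translation sketch. It assumes the fine-tuned checkpoint keeps the mBART-50 many-to-many tokenizer conventions (`en_XX` → `fr_XX` language codes); verify this against the repository's tokenizer config.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aroot/eng-fra-delfy")
model = AutoModelForSeq2SeqLM.from_pretrained("aroot/eng-fra-delfy")

tokenizer.src_lang = "en_XX"  # assumption: mBART-50 language codes are unchanged
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```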
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
cassandraqs/trip_review | cassandraqs | 2023-07-24T00:21:07Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T00:21:04Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent loading sketch follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
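A minimal sketch of loading this adapter with an equivalent quantization config, assuming a causal-LM base model; the card does not name the base model, so `<base-model-id>` is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,              # matches the config listed above
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
base = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",              # placeholder: base model is not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "cassandraqs/trip_review")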
### Framework versions
- PEFT 0.5.0.dev0
|
truitt/ppo-LunarLander-v2 | truitt | 2023-07-24T00:08:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-24T00:08:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.69 +/- 15.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mtyrrell/CPU_Target_Classifier | mtyrrell | 2023-07-23T23:43:53Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mpnet",
"text-classification",
"generated_from_trainer",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-23T16:58:59Z | ---
license: apache-2.0
base_model: sentence-transformers/all-mpnet-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IKT_classifier_target_best
results: []
widget:
- text: "To reduce greenhouse gas emissions by 37% below 2005 levels in 2025, and by 43% below 2005 levels in 2030."
example_title: "TARGET"
- text: "Change fiscal policies on fossil fuel by 2025 to enable the transition to 100% renewable energy generation in the transportation sector"
example_title: "NEGATIVE"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IKT_classifier_target_best
This model is a fine-tuned version of [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the dataset described under "Training and evaluation data" below.
It achieves the following results on the evaluation set:
- Loss: 0.2820
- Precision Macro: 0.9487
- Precision Weighted: 0.9455
- Recall Macro: 0.9378
- Recall Weighted: 0.9423
- F1-score: 0.9412
- Accuracy: 0.9423
## Model description
The model is a binary text classifier based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and fine-tuned on text sourced from national climate policy documents.
## Intended uses & limitations
The classifier assigns a class of **'Target' or 'Negative' to denote alignment with stated national targets** as portrayed in extracted passages from the documents. The intended use is for climate policy researchers and analysts seeking to automate the process of reviewing lengthy, non-standardized PDF documents to produce summaries and reports.
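As a quick illustration of this use case, here is a minimal inference sketch with the `transformers` pipeline. The repository id and the example passage are taken from this card's metadata and widget; the exact label strings returned depend on the model's config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mtyrrell/CPU_Target_Classifier")
passage = ("To reduce greenhouse gas emissions by 37% below 2005 levels in 2025, "
           "and by 43% below 2005 levels in 2030.")
print(classifier(passage))  # e.g. [{'label': 'TARGET', 'score': 0.99}]
```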
The performance of the classifier is very high. In training, the classifier exhibited very good overall performance (F1 ~ 0.95), evenly balanced between precise identification of true positives (precision ~ 0.95) and a wide net to capture as many true positives as possible (recall ~ 0.95). When tested on real-world, unseen test data, performance was still very high (F1 ~ 0.9). However, testing was based on a fairly small out-of-sample dataset, so classification performance will need to be further evaluated after deployment.
## Training and evaluation data
The training dataset is comprised of labelled passages from 2 sources:
- [ClimateWatch NDC Sector data](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1).
- [IKI TraCS Climate Strategies for Transport Tracker](https://changing-transport.org/wp-content/uploads/20220722_Tracker_Database.xlsx) implemented by GIZ and funded by the International Climate Initiative (IKI) of the German Federal Ministry for Economic Affairs and Climate Action (BMWK). Here we utilized the QA dataset (CW_NDC_data_Sector).
The combined dataset [GIZ/policy_qa_v0_1](https://huggingface.co/datasets/GIZ/policy_qa_v0_1) contains ~85k rows. Each row is duplicated twice to provide varying sequence lengths (denoted by the values 'small', 'medium', and 'large', which correspond to sequence lengths of 60, 85, and 150 respectively, indicated in the 'strategy' column). This effectively means the dataset is reduced to 1/3 of its nominal size for any one use case, and the 'strategy' value should be selected accordingly. For this training, we utilized the 'medium' samples. Furthermore, for each row, the 'context' column contains 3 samples of varying quality. The approach used to assess quality and select samples is described below.
The pre-processing operations used to produce the final training dataset were as follows (a code sketch of these steps follows the list):
1. Dataset is filtered based on 'medium' value in 'strategy' column (sequence length = 85).
2. For ClimateWatch, all rows are removed, as they were assessed to have no taxonomic alignment with the labels inherent to the IKITracs data.
3. For IKITracs, labels are assigned based on the presence of certain substring prefixes ('T_') in 'parameter' values, which correspond to text containing targets as assessed by human annotators.
4. If 'context_translated' is available and the 'language' is not English, 'context' is replaced with 'context_translated'. This results in the model being trained on English translations of original text samples.
5. The dataset is "exploded" - i.e., the text samples in the 'context' column, which are lists, are converted into separate rows - and labels are merged to align with the associated samples.
6. The 'match_onanswer' and 'answerWordcount' fields are used conditionally to select high-quality samples (a high percentage of word matches in 'match_onanswer' is preferred, but a lower value is accepted when 'answerWordcount' is high).
7. No data augmentation was conducted, as the number of samples was high for the 'TARGET' class. The end result is an equal per-class sample breakdown of:
> - TARGET: 777
> - NEGATIVE: 778
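A minimal sketch of these pre-processing steps, assuming a pandas DataFrame with the column names used in this card; the 'source' column and the quality thresholds are illustrative assumptions rather than the exact pipeline.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # 1. Keep only the 'medium' sequence-length samples (sequence length = 85).
    df = df[df["strategy"] == "medium"].copy()

    # 2./3. Drop ClimateWatch rows; label IKITracs rows by the 'T_' prefix on 'parameter'
    #       (the 'source' column name is an assumption).
    df = df[df["source"] == "IKITracs"].copy()
    is_target = df["parameter"].str.startswith("T_", na=False)
    df["label"] = is_target.map({True: "TARGET", False: "NEGATIVE"})

    # 4. Prefer English translations when the original language is not English.
    non_en = (df["language"] != "en") & df["context_translated"].notna()
    df.loc[non_en, "context"] = df.loc[non_en, "context_translated"]

    # 5. Explode the list-valued 'context' column into one row per text sample.
    df = df.explode("context")

    # 6. Keep higher-quality samples (thresholds are illustrative).
    df = df[(df["match_onanswer"] > 0.75) | (df["answerWordcount"] > 30)]

    return df[["context", "label"]].reset_index(drop=True)
```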
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.055539898211945e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400.0
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Macro | Precision Weighted | Recall Macro | Recall Weighted | F1-score | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------------:|:------------:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 467 | 0.3171 | 0.9197 | 0.9179 | 0.9131 | 0.9167 | 0.9154 | 0.9167 |
| 0.4991 | 2.0 | 934 | 0.2515 | 0.9197 | 0.9179 | 0.9131 | 0.9167 | 0.9154 | 0.9167 |
| 0.4101 | 3.0 | 1401 | 0.2636 | 0.9407 | 0.9381 | 0.9319 | 0.9359 | 0.9348 | 0.9359 |
| 0.2713 | 4.0 | 1868 | 0.2515 | 0.9487 | 0.9455 | 0.9378 | 0.9423 | 0.9412 | 0.9423 |
| 0.2038 | 5.0 | 2335 | 0.2820 | 0.9487 | 0.9455 | 0.9378 | 0.9423 | 0.9412 | 0.9423 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
georgeNakayama/textual_inversion_scnnt_710 | georgeNakayama | 2023-07-23T23:39:51Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-23T20:28:09Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - georgeNakayama/textual_inversion_scnnt_710
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
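A minimal inference sketch with `diffusers`; the placeholder token `<scnnt>` is an assumption, so check the repository's learned embedding / training config for the actual token.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("georgeNakayama/textual_inversion_scnnt_710")

# "<scnnt>" is a hypothetical placeholder token for the learned concept.
image = pipe("a photo of <scnnt>", num_inference_steps=30).images[0]
image.save("example.png")
```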
|
bashr/datascraper | bashr | 2023-07-23T23:38:40Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-23T23:38:40Z | ---
license: creativeml-openrail-m
---
|
aammari/setfit-zero-shot-classification-pbsp-q8a-azure-gpt35 | aammari | 2023-07-23T23:33:58Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-07-23T23:23:46Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# setfit-zero-shot-classification-pbsp-q8a-azure-gpt35
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("aammari/setfit-zero-shot-classification-pbsp-q8a-azure-gpt35")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
seantw/covid-19-vaccination-tweet-relevance | seantw | 2023-07-23T23:31:28Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-20T01:13:29Z | ---
license: mit
---
# Model: covid-19-vaccination-tweet-relevance
## Overview
This model is a text classifier trained to determine whether a tweet is related to COVID-19 vaccination or not.
## Usage
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("seantw/covid-19-vaccination-tweet-relevance")
model = AutoModel.from_pretrained("seantw/covid-19-vaccination-tweet-relevance")
```
## Training corpus
The training corpus comprises 9,373 tweets, randomly sampled daily from December 2020 to June 2022. These tweets were labeled by domain experts.
We have separately trained another model for classifying the stance of a tweet towards COVID-19 vaccination. Please refer to [covid-19-vaccination-tweet-stance](https://huggingface.co/seantw/covid-19-vaccination-tweet-stance) for more information.
## Output Label Index
- LABEL_0: "irrelevance"
- LABEL_1: "relevance"
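A minimal classification sketch using this mapping, assuming the checkpoint carries a standard sequence-classification head; the example tweet is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("seantw/covid-19-vaccination-tweet-relevance")
model = AutoModelForSequenceClassification.from_pretrained("seantw/covid-19-vaccination-tweet-relevance")
model.eval()

tweet = "Just booked my second COVID-19 vaccine dose for next week."  # illustrative
inputs = tokenizer(tweet, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print({0: "irrelevance", 1: "relevance"}[pred])
```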
## Performance Metrics
The model's performance metrics on the test set are as follows:
- Accuracy: 0.9386
- Macro-average metrics:
- F1-score: 0.9339
- Recall: 0.9277
- Precision: 0.9418
- Class-wise metrics:
- For class "relevance":
- F1-score: 0.9161
- Precision: 0.9523
- Recall: 0.8825
- For class "irrelevance":
- F1-score: 0.9516
- Precision: 0.9312
- Recall: 0.973
These metrics are based on a test set with a total size of 3699 samples.
## Confusion Matrix
The confusion matrix of predictions on the test set is as follows:
| | Predicted: irrelevance | Predicted: relevance |
|------------|-------------|--------------|
| True: irrelevance | 1239 | 165 |
| True: relevance | 62 | 2233 |
## Model Architecture
The model is fine-tuned based on [COVID-Twitter-BERT v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2).
## Contact
Sean Yun-Shiuan Chuang ([email protected])
|
siemr/q-Taxi-v3 | siemr | 2023-07-23T23:22:54Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-07-23T23:10:25Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper from the Hugging Face Deep RL course (a sketch is given below).
model = load_from_hub(repo_id="siemr/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
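`load_from_hub` is not a library import; it is the small helper defined in the Hugging Face Deep RL course notebooks. A minimal sketch of such a helper, assuming the repository stores the Q-table as a pickled dictionary (as the usage above implies):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dictionary (Q-table, env_id, hyperparameters) and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```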
|
judsfdf/micfran22 | judsfdf | 2025-07-14T21:51:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-07-14T21:51:35Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/judsfdf/micfran22/14de541cc079bdb62f9663cdf9b44156254627dd/README.md?%2Fjudsfdf%2Fmicfran22%2Fresolve%2Fmain%2FREADME.md=&etag=%2255ecfdd006aa338b8f566a0c25f5f7009fd67e8a%22 |
infoipman/Qwen3-0.6B-Gensyn-Swarm-tall_mammalian_caribou | infoipman | 2025-07-15T00:43:54Z | 100 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tall_mammalian_caribou",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-06-26T21:52:59Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/infoipman/Qwen3-0.6B-Gensyn-Swarm-tall_mammalian_caribou/61f14c2224b2e952fa182fb5e6285b4ec6550800/README.md?%2Finfoipman%2FQwen3-0.6B-Gensyn-Swarm-tall_mammalian_caribou%2Fresolve%2Fmain%2FREADME.md=&etag=%2273448fa5778bc60fcf4fc325c711c3364ba848cd%22 |
p2g4ads5/Qwen2.5-0.5B-Gensyn-Swarm-docile_playful_octopus | p2g4ads5 | 2025-07-15T00:43:48Z | 97 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am docile_playful_octopus",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-03T07:48:30Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/p2g4ads5/Qwen2.5-0.5B-Gensyn-Swarm-docile_playful_octopus/d8b58c95eee332fdc3cc8d45208b2cf2d7ae619b/README.md?%2Fp2g4ads5%2FQwen2.5-0.5B-Gensyn-Swarm-docile_playful_octopus%2Fresolve%2Fmain%2FREADME.md=&etag=%22ee494a3fc6f6c9893b3c2ed65c323e5dbe76f2a1%22 |
andr0m4da/Qwen3-0.6B-Gensyn-Swarm-strong_lively_turkey | andr0m4da | 2025-07-15T00:43:48Z | 102 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am strong_lively_turkey",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-04T13:32:47Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/andr0m4da/Qwen3-0.6B-Gensyn-Swarm-strong_lively_turkey/a32b6dd82eccc3bd7bff5eef42971feeef699421/README.md?%2Fandr0m4da%2FQwen3-0.6B-Gensyn-Swarm-strong_lively_turkey%2Fresolve%2Fmain%2FREADME.md=&etag=%2233a2020da865f0176a89142debfd5d345fa4018d%22 |
0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_vicious_flamingo | 0xshaf | 2025-07-15T00:43:45Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am majestic_vicious_flamingo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-10T14:38:43Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_vicious_flamingo/636b63c02565a0a3618a296ea71b2cf66c4ba6d8/README.md?%2F0xshaf%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_vicious_flamingo%2Fresolve%2Fmain%2FREADME.md=&etag=%22891e01bf6a9dd56a40e73b7eecf79ebcfc609014%22 |
vivekb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-darting_patterned_anaconda | vivekb | 2025-07-15T00:43:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am darting_patterned_anaconda",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-14T22:47:34Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/vivekb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-darting_patterned_anaconda/c3e94db3c380b23b8585d560f9f2e10bcc62cf17/README.md?%2Fvivekb%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-darting_patterned_anaconda%2Fresolve%2Fmain%2FREADME.md=&etag=%2277fd04b1ed4990bd5ea84105eb9f1ce720f0ef0e%22 |
rizzo2/sn61 | rizzo2 | 2025-07-15T00:43:30Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-12-20T00:35:15Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/rizzo2/sn61/67800201b490c3d30176cbb4003c79fa11750db2/README.md?%2Frizzo2%2Fsn61%2Fresolve%2Fmain%2FREADME.md=&etag=%227be5fc7f47d5db027d120b8024982df93db95b74%22 |
Angi54/Qwen3-0.6B-Gensyn-Swarm-lazy_enormous_bobcat | Angi54 | 2025-07-15T00:43:29Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lazy_enormous_bobcat",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-02T22:04:53Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Angi54/Qwen3-0.6B-Gensyn-Swarm-lazy_enormous_bobcat/19dabff56d4b8f36827474869e16c914a4551c8f/README.md?%2FAngi54%2FQwen3-0.6B-Gensyn-Swarm-lazy_enormous_bobcat%2Fresolve%2Fmain%2FREADME.md=&etag=%22e0c97a6d9234b7f42daa417944a51c0761995e3c%22 |
Kapitaka/Qwen3-0.6B-Gensyn-Swarm-tawny_meek_cheetah | Kapitaka | 2025-07-15T00:43:24Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tawny_meek_cheetah",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-02T19:04:56Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Kapitaka/Qwen3-0.6B-Gensyn-Swarm-tawny_meek_cheetah/91b7d4f13c24bb023a454bdb822a1db7bed866b8/README.md?%2FKapitaka%2FQwen3-0.6B-Gensyn-Swarm-tawny_meek_cheetah%2Fresolve%2Fmain%2FREADME.md=&etag=%22b426997823e18a513b72e4e94005987cdc384d38%22 |
p2g5dolph3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino | p2g5dolph3 | 2025-07-15T00:43:01Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am peckish ferocious rhino",
"trl",
"genrl-swarm",
"I am peckish_ferocious_rhino",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-17T21:31:35Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/p2g5dolph3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino/ca3a219e2b021bcf780f42c31e42861c119bea27/README.md?%2Fp2g5dolph3%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_ferocious_rhino%2Fresolve%2Fmain%2FREADME.md=&etag=%222d7e84af83fa449bf5947f210b40978f64a200fb%22 |
NORI7/Qwen3-0.6B-Gensyn-Swarm-crested_sniffing_cockroach | NORI7 | 2025-07-15T00:42:56Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am crested_sniffing_cockroach",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-06-30T12:16:19Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/NORI7/Qwen3-0.6B-Gensyn-Swarm-crested_sniffing_cockroach/9c15f8228c28be3269aac050bbe27f5cdcf5f717/README.md?%2FNORI7%2FQwen3-0.6B-Gensyn-Swarm-crested_sniffing_cockroach%2Fresolve%2Fmain%2FREADME.md=&etag=%22f03f8649b522c92f5d25c675471d537e76806a9a%22 |
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_26_4_song_3_0.0001_19 | winnieyangwannan | 2025-07-15T00:42:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-14T23:27:12Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_26_4_song_3_0.0001_19/313ea9717761dad3deac897a1f2fee073fa9b675/README.md?%2Fwinnieyangwannan%2Fentity_Llama-3.1-8B-Instruct_mlp_pnas_layer_26_4_song_3_0.0001_19%2Fresolve%2Fmain%2FREADME.md=&etag=%22bc5f30d6632ac0efdc7be2e9095e9e9579af2e33%22 |
singlegun/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-cunning_untamed_capybara | singlegun | 2025-07-15T00:42:46Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am cunning untamed capybara",
"trl",
"genrl-swarm",
"I am cunning_untamed_capybara",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-04T01:28:12Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/singlegun/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-cunning_untamed_capybara/17843837c5db492582073087f66f9560133f9394/README.md?%2Fsinglegun%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-cunning_untamed_capybara%2Fresolve%2Fmain%2FREADME.md=&etag=%22c1cb97906a508c3b8c4062d5525611c6663301db%22 |
sagasa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_gregarious_yak | sagasa | 2025-07-15T00:42:36Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stocky_gregarious_yak",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-01T19:08:19Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/sagasa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_gregarious_yak/4e548a21a865267584300a793266611af485b87c/README.md?%2Fsagasa%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-stocky_gregarious_yak%2Fresolve%2Fmain%2FREADME.md=&etag=%22d606d72205237dd4100dd01f3b0cf8405f813226%22 |
kcfabulosa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_jumping_termite | kcfabulosa | 2025-07-15T00:42:28Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am gentle jumping termite",
"trl",
"genrl-swarm",
"I am gentle_jumping_termite",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-13T22:28:29Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/kcfabulosa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_jumping_termite/0aca37480a6216a3a14196fa19399c316cc964a3/README.md?%2Fkcfabulosa%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-gentle_jumping_termite%2Fresolve%2Fmain%2FREADME.md=&etag=%22e12d970386503c5ce4b9d52cec1146d2575b3078%22 |
DTebias/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_muscular_cassowary | DTebias | 2025-07-15T00:42:22Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hoarse muscular cassowary",
"trl",
"genrl-swarm",
"I am hoarse_muscular_cassowary",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-30T20:31:31Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/DTebias/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_muscular_cassowary/9da60e978c8d530031b7e6d6630a400d16b91a4e/README.md?%2FDTebias%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-hoarse_muscular_cassowary%2Fresolve%2Fmain%2FREADME.md=&etag=%221dfdb5c054a01eb529255750c9196c40534f25a6%22 |
Amoros/DinoAmoros_is_103_bs_32_ep_112_08_2025-giant-2025_07_12_37084-bs32_freeze_monolabel | Amoros | 2025-07-15T00:41:59Z | 13 | 0 | null | [
"tensorboard",
"safetensors",
"dinov2",
"hf-summary-writer",
"region:us"
]
| null | 2025-07-12T07:39:27Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Amoros/DinoAmoros_is_103_bs_32_ep_112_08_2025-giant-2025_07_12_37084-bs32_freeze_monolabel/1b8cdb8f0f807222298ce7c181c696dfedb38479/README.md?%2FAmoros%2FDinoAmoros_is_103_bs_32_ep_112_08_2025-giant-2025_07_12_37084-bs32_freeze_monolabel%2Fresolve%2Fmain%2FREADME.md=&etag=%2212b9131045363ae21a4ef73bb8453768870e24a9%22 |
Alexshake78/Qwen3-0.6B-Gensyn-Swarm-darting_endangered_eel | Alexshake78 | 2025-07-15T00:41:45Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am darting_endangered_eel",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-06-26T21:12:47Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Alexshake78/Qwen3-0.6B-Gensyn-Swarm-darting_endangered_eel/3503f7fa4fa39378c67ee9798831af4531d3d966/README.md?%2FAlexshake78%2FQwen3-0.6B-Gensyn-Swarm-darting_endangered_eel%2Fresolve%2Fmain%2FREADME.md=&etag=%22dbd918d37c6e96be14f754578251f9ef7b456211%22 |
p2g8gensyn/Qwen2.5-0.5B-Gensyn-Swarm-diving_giant_alpaca | p2g8gensyn | 2025-07-15T00:41:26Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am diving_giant_alpaca",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-03T17:08:16Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/p2g8gensyn/Qwen2.5-0.5B-Gensyn-Swarm-diving_giant_alpaca/269d05f5e52a911d66afdc772f0a90e27d2968b9/README.md?%2Fp2g8gensyn%2FQwen2.5-0.5B-Gensyn-Swarm-diving_giant_alpaca%2Fresolve%2Fmain%2FREADME.md=&etag=%22c9b98d5f63fff27cec376b5a8fc40e0019e60863%22 |
taichimasuda/my-cool-model | taichimasuda | 2025-07-15T00:40:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-07-15T00:40:56Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/taichimasuda/my-cool-model/dbca2fb8c5f051c5f811cda9c75d39dfb7a329f9/README.md?%2Ftaichimasuda%2Fmy-cool-model%2Fresolve%2Fmain%2FREADME.md=&etag=%222fe0ca2916181581791ce6c8583b7a7bd461455f%22 |
p2g6gensyn/Qwen2.5-0.5B-Gensyn-Swarm-dappled_yapping_clam | p2g6gensyn | 2025-07-15T00:40:52Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am dappled_yapping_clam",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-04T15:51:25Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/p2g6gensyn/Qwen2.5-0.5B-Gensyn-Swarm-dappled_yapping_clam/7624fe4162d3a100cfe8fdc0f55ea0dd7b6dba4d/README.md?%2Fp2g6gensyn%2FQwen2.5-0.5B-Gensyn-Swarm-dappled_yapping_clam%2Fresolve%2Fmain%2FREADME.md=&etag=%22a41c3b8943fd03dea94ed36cb98d9334f4214aca%22 |
shuooru/qwen2.5-vl_0 | shuooru | 2025-07-15T00:40:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-07-14T22:02:02Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/shuooru/qwen2.5-vl_0/f8e12b5304a778592ad424f5be79d6b204c25c5b/README.md?%2Fshuooru%2Fqwen2.5-vl_0%2Fresolve%2Fmain%2FREADME.md=&etag=%22cf45d227368a270bea6b75c88c9efe8b23436877%22 |
Dassem/Qwen3-0.6B-Gensyn-Swarm-endangered_gregarious_wolf | Dassem | 2025-07-15T00:40:20Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am endangered_gregarious_wolf",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-06-30T07:30:00Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Dassem/Qwen3-0.6B-Gensyn-Swarm-endangered_gregarious_wolf/ef70d25de94e40fe7359525d995bf7b3f9f1e77a/README.md?%2FDassem%2FQwen3-0.6B-Gensyn-Swarm-endangered_gregarious_wolf%2Fresolve%2Fmain%2FREADME.md=&etag=%229725c01edbde269a792f5d662d0e6113b50be7a4%22 |
csukuangfj/k2 | csukuangfj | 2025-07-15T00:40:10Z | 0 | 2 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-02-23T11:56:53Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/csukuangfj/k2/a396750c3bec689de2e21cca3539202c188ea52a/README.md?%2Fcsukuangfj%2Fk2%2Fresolve%2Fmain%2FREADME.md=&etag=%2202ee2377ca934cd6f9372d5e9332811c6efa8797%22 |
lodestones/chroma-debug-development-only | lodestones | 2025-07-15T00:40:05Z | 0 | 18 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2025-01-21T05:08:22Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/lodestones/chroma-debug-development-only/8aee7d720889f25861bd705ea0f0ac46200b7faa/README.md?%2Flodestones%2Fchroma-debug-development-only%2Fresolve%2Fmain%2FREADME.md=&etag=%22ea2ce4913bf6d980c2c6d5e9d485e087c01338f7%22 |
aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur | aramzz | 2025-07-15T00:39:44Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wild stalking lemur",
"unsloth",
"trl",
"genrl-swarm",
"I am wild_stalking_lemur",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-07T13:07:53Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/aramzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur/a6d588e765744ee34a6f5e39f245d852a96dccad/README.md?%2Faramzz%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_stalking_lemur%2Fresolve%2Fmain%2FREADME.md=&etag=%22f0694409dee8985c6db4a23dcd54c08ab5ea0142%22 |
Donchocho/Qwen3-0.6B-Gensyn-Swarm-crested_moist_walrus | Donchocho | 2025-07-15T00:39:26Z | 87 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am crested_moist_walrus",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-13T12:31:54Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Donchocho/Qwen3-0.6B-Gensyn-Swarm-crested_moist_walrus/7be9f56e54c686a6e7b57067c074775128413b46/README.md?%2FDonchocho%2FQwen3-0.6B-Gensyn-Swarm-crested_moist_walrus%2Fresolve%2Fmain%2FREADME.md=&etag=%22ab42231d972662f5cfbcf43487d0319ae46dd44a%22 |
RichardErkhov/mlfoundations-dev_-_oh_v1.3_alpaca_x4-gguf | RichardErkhov | 2025-07-15T00:39:12Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-07-14T23:40:14Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/RichardErkhov/mlfoundations-dev_-_oh_v1.3_alpaca_x4-gguf/d7fa64092818fe2b458d3e208a358770a6002c28/README.md?%2FRichardErkhov%2Fmlfoundations-dev_-_oh_v1.3_alpaca_x4-gguf%2Fresolve%2Fmain%2FREADME.md=&etag=%2218707a31876991ce235930af9ab57d6e1922fcc7%22 |
newshinsei/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose | newshinsei | 2025-07-15T00:38:58Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pesty howling moose",
"trl",
"genrl-swarm",
"I am pesty_howling_moose",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-16T12:24:57Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/newshinsei/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose/6015ac6b67c2c0bca489abae60fef8a5c575c4ee/README.md?%2Fnewshinsei%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-pesty_howling_moose%2Fresolve%2Fmain%2FREADME.md=&etag=%2212ac10aa9e6119f0c8bf7431c7f32045ea972228%22 |
gokaygokay/Marble-Sculpture-Kontext-Dev-LoRA | gokaygokay | 2025-07-15T00:38:44Z | 0 | 1 | diffusers | [
"diffusers",
"flux",
"image-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:other",
"region:us"
]
| image-to-image | 2025-07-15T00:29:36Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/gokaygokay/Marble-Sculpture-Kontext-Dev-LoRA/3b9fdb5c5d160db623cc441aab527492f369199b/README.md?%2Fgokaygokay%2FMarble-Sculpture-Kontext-Dev-LoRA%2Fresolve%2Fmain%2FREADME.md=&etag=%22d2a582d162a3de6c301f9e6d639d2264cc6c5dd8%22 |
VIDEOS-18-HONG-TY-TRUNG-QUOC-CLIP-HONG-TY/18.video.hong.ty.trung.quoc.hong.ti.video.link.hong.ty.clip | VIDEOS-18-HONG-TY-TRUNG-QUOC-CLIP-HONG-TY | 2025-07-15T00:38:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-07-15T00:38:08Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/VIDEOS-18-HONG-TY-TRUNG-QUOC-CLIP-HONG-TY/18.video.hong.ty.trung.quoc.hong.ti.video.link.hong.ty.clip/5487817bbc2cd577510c8fe1f3076b7891f3a760/README.md?%2FVIDEOS-18-HONG-TY-TRUNG-QUOC-CLIP-HONG-TY%2F18.video.hong.ty.trung.quoc.hong.ti.video.link.hong.ty.clip%2Fresolve%2Fmain%2FREADME.md=&etag=%22f5abd5d7032d5addf304d13142afea21cfcf64e2%22 |
Kita1111/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra | Kita1111 | 2025-07-15T00:38:23Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dextrous domestic cobra",
"trl",
"genrl-swarm",
"I am dextrous_domestic_cobra",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-09T02:01:08Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Kita1111/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra/87168520e9755dbe15dfd9071b12cfd132768a94/README.md?%2FKita1111%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-dextrous_domestic_cobra%2Fresolve%2Fmain%2FREADME.md=&etag=%223ee7a8d65ff77c1a4beaa981ee153fe80af5e24c%22 |
maki28/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan | maki28 | 2025-07-15T00:38:23Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fleecy gilded swan",
"trl",
"genrl-swarm",
"I am fleecy_gilded_swan",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-27T16:37:36Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/maki28/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan/e6d2941ec545b89a0856590fd7204b496732ed5d/README.md?%2Fmaki28%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_gilded_swan%2Fresolve%2Fmain%2FREADME.md=&etag=%22c1352a9581ad1dd6a47468f19bdf4e98e956ed3d%22 |
AlexanderArtT/Qwen3-0.6B-Gensyn-Swarm-tiny_nimble_warthog | AlexanderArtT | 2025-07-15T00:38:01Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tiny_nimble_warthog",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-02T19:03:52Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/AlexanderArtT/Qwen3-0.6B-Gensyn-Swarm-tiny_nimble_warthog/33e5038e32fb56a50437e3d5e6b77ce3cef41273/README.md?%2FAlexanderArtT%2FQwen3-0.6B-Gensyn-Swarm-tiny_nimble_warthog%2Fresolve%2Fmain%2FREADME.md=&etag=%22d9690ce7a7bcf87a295212007a5019c4f048382f%22 |
Stuti103/llama-finetuned-1 | Stuti103 | 2025-07-15T00:37:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-07-14T12:15:39Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Stuti103/llama-finetuned-1/8ecf964458409a1f9fd79acd17272dcd2a7fca3a/README.md?%2FStuti103%2Fllama-finetuned-1%2Fresolve%2Fmain%2FREADME.md=&etag=%224f88560f2b9e94dba253d65b31b9766b83e415a1%22 |
ethduke/Qwen3-0.6B-Gensyn-Swarm-padded_iridescent_anaconda | ethduke | 2025-07-15T00:37:15Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am padded_iridescent_anaconda",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-04T06:15:46Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/ethduke/Qwen3-0.6B-Gensyn-Swarm-padded_iridescent_anaconda/b07d6e79da8d01e7b86300d54f393c9998e2cfbf/README.md?%2Fethduke%2FQwen3-0.6B-Gensyn-Swarm-padded_iridescent_anaconda%2Fresolve%2Fmain%2FREADME.md=&etag=%22c47cab9cd8baa5d9a4252afb12d449bbdde65322%22 |
Thytu/act-so101-object-in-box_v0.4-fixed-7d | Thytu | 2025-07-15T00:36:15Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Thytu/so101-object-in-box_v0.4-fixed",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
]
| robotics | 2025-07-15T00:36:09Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Thytu/act-so101-object-in-box_v0.4-fixed-7d/e1600d6f469d4c2955564a5a7e5fc87e584d18de/README.md?%2FThytu%2Fact-so101-object-in-box_v0.4-fixed-7d%2Fresolve%2Fmain%2FREADME.md=&etag=%22be36196e4df426229f9276e6944fdc19cc88fa57%22 |
anatolijbatalko/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_ferocious_mink | anatolijbatalko | 2025-07-15T00:35:54Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am thriving ferocious mink",
"trl",
"genrl-swarm",
"I am thriving_ferocious_mink",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-04T13:56:54Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/anatolijbatalko/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_ferocious_mink/921f1e9cf6fdea39c8364a96645e1e4c875f7ff5/README.md?%2Fanatolijbatalko%2FQwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_ferocious_mink%2Fresolve%2Fmain%2FREADME.md=&etag=%229b886d2e9552b805ed43008b4397e5f7115c8cff%22 |
MohamedAhmedAE/Llama-3.2-3B-Instruct-Medical-Finetune-v3 | MohamedAhmedAE | 2025-07-15T00:35:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-20T22:48:01Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/MohamedAhmedAE/Llama-3.2-3B-Instruct-Medical-Finetune-v3/386e630091a692cc773bf0314ef4c9952c5a9331/README.md?%2FMohamedAhmedAE%2FLlama-3.2-3B-Instruct-Medical-Finetune-v3%2Fresolve%2Fmain%2FREADME.md=&etag=%22bc5f30d6632ac0efdc7be2e9095e9e9579af2e33%22 |
Samas21/LO55IMPSONS | Samas21 | 2025-07-15T00:34:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-07-15T00:27:14Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Samas21/LO55IMPSONS/e93d96bcf56aaa595d0e8b4f9e898884527dbb3d/README.md?%2FSamas21%2FLO55IMPSONS%2Fresolve%2Fmain%2FREADME.md=&etag=%227b95401dc46245ac339fc25059d4a56d90b4cde5%22 |
mradermacher/gemma-3-40b-i1-GGUF | mradermacher | 2025-07-15T00:33:47Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-07-14T21:10:22Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/mradermacher/gemma-3-40b-i1-GGUF/e57f65315e88e6f092994fff2bfafa13d399c73d/README.md?%2Fmradermacher%2Fgemma-3-40b-i1-GGUF%2Fresolve%2Fmain%2FREADME.md=&etag=%22d1e786868dc5271be8fe1cf2b7dc4c5d3065d20e%22 |
IzzulGod/Sorachio-1B-Chat | IzzulGod | 2025-07-15T00:31:42Z | 593 | 3 | null | [
"safetensors",
"gguf",
"gemma3_text",
"roleplay",
"gpt4o",
"dataset:IzzulGod/gpt4o-distill-chat-v1",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-07-05T05:04:13Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/IzzulGod/Sorachio-1B-Chat/63edfc8b69da6e329a23a432c71c63b845cfaa96/README.md?%2FIzzulGod%2FSorachio-1B-Chat%2Fresolve%2Fmain%2FREADME.md=&etag=%22f173fd16a5038f9ad989017dab780fa27a14d25e%22 |
VestaCloset/idm-vton-model | VestaCloset | 2025-07-15T00:30:34Z | 0 | 0 | null | [
"onnx",
"arxiv:2304.10567",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-06-15T21:03:32Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/VestaCloset/idm-vton-model/9f6959065b467d7124891112f6a863f90a36840a/README.md?%2FVestaCloset%2Fidm-vton-model%2Fresolve%2Fmain%2FREADME.md=&etag=%22b0fd54d7123e5a1f9439298b8833c3ad83b41ac1%22 |
i33toyz/sword | i33toyz | 2025-07-15T00:30:16Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-07-15T00:26:25Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/i33toyz/sword/f33e9f1b0b9b7b8f5095d098c9d67499fae3ef90/README.md?%2Fi33toyz%2Fsword%2Fresolve%2Fmain%2FREADME.md=&etag=%22d3571cc81704ecf4af144edeacc82b9ad142d32c%22 |
Arc-Intelligence/advisor-01-3B | Arc-Intelligence | 2025-07-15T00:29:17Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:Salesforce/CRMArenaPro",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-07-15T00:20:37Z | Temporary Redirect. Redirecting to /api/resolve-cache/models/Arc-Intelligence/advisor-01-3B/4b0f9095b212313027fd23c7b4a682666e0f4d89/README.md?%2FArc-Intelligence%2Fadvisor-01-3B%2Fresolve%2Fmain%2FREADME.md=&etag=%22a60349f8f275f15bf3326fe2727ba41b83c18bc3%22 |