| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-02 12:28:20 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 462 classes |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-02 12:26:48 |
| card | string | lengths 11 – 1.01M |
hugfacerhaha/dqn-SpaceInvadersNoFrameskip-v4 | hugfacerhaha | 2023-07-09T11:51:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T11:51:05Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 629.50 +/- 187.40
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hugfacerhaha -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hugfacerhaha -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hugfacerhaha
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.00012),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
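## Loading the checkpoint directly with SB3 (sketch)
A minimal sketch for loading the downloaded checkpoint with Stable-Baselines3 outside the RL Zoo; the file path below is hypothetical and depends on the RL Zoo folder layout:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Hypothetical path: adjust to wherever rl_zoo3.load_from_hub saved the .zip checkpoint
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the evaluation environment: Atari wrappers + 4-frame stacking (see hyperparameters above)
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```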
|
irrationaljared/ethos-spirit | irrationaljared | 2023-07-09T11:48:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T11:48:14Z | ---
license: creativeml-openrail-m
---
|
VK246/IC_ver3a_coco_swin_gpt2_ | VK246 | 2023-07-09T11:38:14Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-09T11:08:51Z | ---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver3a_coco_swin_gpt2_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver3a_coco_swin_gpt2_
This model is a fine-tuned version of [](https://huggingface.co/) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0156
- Rouge1: 33.8659
- Rouge2: 10.1039
- Rougel: 31.4861
- Rougelsum: 31.4905
- Bleu: 5.7396
- Gen Len: 11.2887
## Model description
More information needed
## Intended uses & limitations
More information needed
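No usage example is provided yet. As a minimal, hypothetical inference sketch (assuming the repo follows the standard `VisionEncoderDecoderModel` layout implied by the tags, i.e. a Swin encoder with a GPT-2 decoder, and ships its preprocessor and tokenizer configs), captioning an image could look like:
```python
import requests
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

model_id = "VK246/IC_ver3a_coco_swin_gpt2_"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
image_processor = AutoImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any RGB image works; this COCO validation image is just an illustration
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```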
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:------:|:-------:|
| 1.4761 | 0.34 | 100 | 1.1047 | 28.2757 | 6.0267 | 26.4732 | 26.5071 | 2.7859 | 11.2887 |
| 1.1238 | 0.68 | 200 | 1.0406 | 32.0448 | 8.6347 | 29.6117 | 29.6193 | 4.4174 | 11.2887 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-mod-datatsets-rarity-all-iorder-e13k | NasimB | 2023-07-09T11:38:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T09:37:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-mod-datatsets-rarity-all-iorder-e13k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-mod-datatsets-rarity-all-iorder-e13k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1226
## Model description
More information needed
## Intended uses & limitations
More information needed
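As a minimal usage sketch (assuming the checkpoint loads as a standard GPT-2 causal language model, which the tags suggest), text can be generated with the `transformers` pipeline:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-mod-datatsets-rarity-all-iorder-e13k",
)
# The prompt is illustrative only
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```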
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7718 | 0.32 | 500 | 5.7281 |
| 5.4474 | 0.65 | 1000 | 5.2933 |
| 5.0982 | 0.97 | 1500 | 5.0449 |
| 4.8151 | 1.29 | 2000 | 4.8885 |
| 4.6938 | 1.61 | 2500 | 4.7536 |
| 4.5789 | 1.94 | 3000 | 4.6584 |
| 4.3616 | 2.26 | 3500 | 4.6069 |
| 4.2969 | 2.58 | 4000 | 4.5367 |
| 4.2577 | 2.91 | 4500 | 4.4728 |
| 4.0523 | 3.23 | 5000 | 4.4717 |
| 3.9978 | 3.55 | 5500 | 4.4424 |
| 3.9769 | 3.87 | 6000 | 4.3959 |
| 3.7984 | 4.2 | 6500 | 4.4148 |
| 3.7049 | 4.52 | 7000 | 4.4053 |
| 3.7033 | 4.84 | 7500 | 4.3793 |
| 3.5633 | 5.16 | 8000 | 4.3989 |
| 3.4447 | 5.49 | 8500 | 4.4027 |
| 3.4427 | 5.81 | 9000 | 4.3926 |
| 3.3719 | 6.13 | 9500 | 4.4064 |
| 3.2863 | 6.46 | 10000 | 4.4103 |
| 3.2858 | 6.78 | 10500 | 4.4118 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jordyvl/dit-small_tobacco3482_kd_CEKD_t5.0_a0.5 | jordyvl | 2023-07-09T11:25:56Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T11:08:48Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t5.0_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_CEKD_t5.0_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7912
- Accuracy: 0.185
- Brier Loss: 0.8688
- Nll: 5.6106
- F1 Micro: 0.185
- F1 Macro: 0.0488
- Ece: 0.2524
- Aurc: 0.7391
## Model description
More information needed
## Intended uses & limitations
More information needed
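As a minimal, hypothetical inference sketch (assuming the repo ships an image processor config alongside the BEiT-style classifier implied by the tags), a document image could be classified as follows:
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/dit-small_tobacco3482_kd_CEKD_t5.0_a0.5",
)
# "document.png" is a placeholder for any scanned document image
print(classifier(Image.open("document.png").convert("RGB")))
```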
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 4.0715 | 0.06 | 0.9043 | 8.8976 | 0.06 | 0.0114 | 0.1751 | 0.9034 |
| No log | 1.96 | 6 | 3.9774 | 0.18 | 0.8893 | 8.0316 | 0.18 | 0.0305 | 0.2237 | 0.8040 |
| No log | 2.96 | 9 | 3.8805 | 0.18 | 0.8782 | 8.6752 | 0.18 | 0.0305 | 0.2566 | 0.8189 |
| No log | 3.96 | 12 | 3.8615 | 0.18 | 0.8836 | 8.9177 | 0.18 | 0.0305 | 0.2645 | 0.8205 |
| No log | 4.96 | 15 | 3.8624 | 0.185 | 0.8844 | 6.3245 | 0.185 | 0.0488 | 0.2727 | 0.7889 |
| No log | 5.96 | 18 | 3.8605 | 0.185 | 0.8813 | 5.1679 | 0.185 | 0.0488 | 0.2558 | 0.7797 |
| No log | 6.96 | 21 | 3.8511 | 0.185 | 0.8774 | 5.1770 | 0.185 | 0.0488 | 0.2510 | 0.7741 |
| No log | 7.96 | 24 | 3.8410 | 0.185 | 0.8751 | 5.6014 | 0.185 | 0.0488 | 0.2458 | 0.7699 |
| No log | 8.96 | 27 | 3.8317 | 0.185 | 0.8733 | 5.9766 | 0.185 | 0.0488 | 0.2537 | 0.7681 |
| No log | 9.96 | 30 | 3.8259 | 0.185 | 0.8724 | 6.0278 | 0.185 | 0.0488 | 0.2473 | 0.7689 |
| No log | 10.96 | 33 | 3.8226 | 0.185 | 0.8724 | 6.8070 | 0.185 | 0.0488 | 0.2618 | 0.7671 |
| No log | 11.96 | 36 | 3.8209 | 0.185 | 0.8730 | 7.6044 | 0.185 | 0.0488 | 0.2539 | 0.7643 |
| No log | 12.96 | 39 | 3.8187 | 0.185 | 0.8730 | 8.1654 | 0.185 | 0.0488 | 0.2542 | 0.7612 |
| No log | 13.96 | 42 | 3.8147 | 0.185 | 0.8725 | 7.1073 | 0.185 | 0.0488 | 0.2542 | 0.7566 |
| No log | 14.96 | 45 | 3.8096 | 0.185 | 0.8720 | 6.3875 | 0.185 | 0.0488 | 0.2565 | 0.7566 |
| No log | 15.96 | 48 | 3.8052 | 0.185 | 0.8712 | 6.0256 | 0.185 | 0.0488 | 0.2518 | 0.7524 |
| No log | 16.96 | 51 | 3.8022 | 0.185 | 0.8707 | 5.7809 | 0.185 | 0.0488 | 0.2558 | 0.7485 |
| No log | 17.96 | 54 | 3.8008 | 0.185 | 0.8701 | 5.6835 | 0.185 | 0.0488 | 0.2496 | 0.7442 |
| No log | 18.96 | 57 | 3.7992 | 0.185 | 0.8700 | 5.3867 | 0.185 | 0.0488 | 0.2490 | 0.7421 |
| No log | 19.96 | 60 | 3.7965 | 0.185 | 0.8694 | 5.4928 | 0.185 | 0.0488 | 0.2478 | 0.7406 |
| No log | 20.96 | 63 | 3.7948 | 0.185 | 0.8693 | 5.5527 | 0.185 | 0.0488 | 0.2481 | 0.7405 |
| No log | 21.96 | 66 | 3.7932 | 0.185 | 0.8691 | 5.5585 | 0.185 | 0.0488 | 0.2564 | 0.7396 |
| No log | 22.96 | 69 | 3.7921 | 0.185 | 0.8689 | 5.5607 | 0.185 | 0.0488 | 0.2479 | 0.7391 |
| No log | 23.96 | 72 | 3.7915 | 0.185 | 0.8688 | 5.6116 | 0.185 | 0.0488 | 0.2523 | 0.7390 |
| No log | 24.96 | 75 | 3.7912 | 0.185 | 0.8688 | 5.6106 | 0.185 | 0.0488 | 0.2524 | 0.7391 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jorgelzn/Reinforce-pixelcopter | jorgelzn | 2023-07-09T11:16:27Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-28T21:38:36Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.40 +/- 20.79
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5 | jordyvl | 2023-07-09T11:08:03Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T10:52:37Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8497
- Accuracy: 0.18
- Brier Loss: 0.8788
- Nll: 6.0432
- F1 Micro: 0.18
- F1 Macro: 0.0305
- Ece: 0.2578
- Aurc: 0.8511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 4.0678 | 0.145 | 0.8999 | 10.1608 | 0.145 | 0.0253 | 0.2221 | 0.8466 |
| No log | 1.96 | 6 | 4.0316 | 0.145 | 0.8948 | 10.5160 | 0.145 | 0.0253 | 0.2239 | 0.8468 |
| No log | 2.96 | 9 | 3.9774 | 0.16 | 0.8871 | 8.6333 | 0.16 | 0.0524 | 0.2217 | 0.8424 |
| No log | 3.96 | 12 | 3.9325 | 0.155 | 0.8813 | 6.5340 | 0.155 | 0.0272 | 0.2161 | 0.8837 |
| No log | 4.96 | 15 | 3.9041 | 0.155 | 0.8787 | 7.1704 | 0.155 | 0.0271 | 0.2296 | 0.8923 |
| No log | 5.96 | 18 | 3.8876 | 0.155 | 0.8782 | 8.7334 | 0.155 | 0.0277 | 0.2325 | 0.8942 |
| No log | 6.96 | 21 | 3.8766 | 0.18 | 0.8785 | 8.8120 | 0.18 | 0.0314 | 0.2476 | 0.8555 |
| No log | 7.96 | 24 | 3.8690 | 0.18 | 0.8791 | 8.8676 | 0.18 | 0.0308 | 0.2643 | 0.8534 |
| No log | 8.96 | 27 | 3.8633 | 0.18 | 0.8793 | 8.5299 | 0.18 | 0.0306 | 0.2594 | 0.8541 |
| No log | 9.96 | 30 | 3.8601 | 0.18 | 0.8796 | 7.4142 | 0.18 | 0.0305 | 0.2622 | 0.8548 |
| No log | 10.96 | 33 | 3.8577 | 0.18 | 0.8797 | 6.6642 | 0.18 | 0.0305 | 0.2720 | 0.8546 |
| No log | 11.96 | 36 | 3.8560 | 0.18 | 0.8797 | 6.2862 | 0.18 | 0.0305 | 0.2723 | 0.8543 |
| No log | 12.96 | 39 | 3.8547 | 0.18 | 0.8796 | 6.2084 | 0.18 | 0.0305 | 0.2678 | 0.8541 |
| No log | 13.96 | 42 | 3.8535 | 0.18 | 0.8794 | 6.1826 | 0.18 | 0.0305 | 0.2631 | 0.8534 |
| No log | 14.96 | 45 | 3.8525 | 0.18 | 0.8793 | 6.1744 | 0.18 | 0.0305 | 0.2593 | 0.8529 |
| No log | 15.96 | 48 | 3.8516 | 0.18 | 0.8792 | 6.1606 | 0.18 | 0.0305 | 0.2680 | 0.8527 |
| No log | 16.96 | 51 | 3.8511 | 0.18 | 0.8791 | 6.1634 | 0.18 | 0.0305 | 0.2724 | 0.8528 |
| No log | 17.96 | 54 | 3.8510 | 0.18 | 0.8791 | 6.0971 | 0.18 | 0.0305 | 0.2676 | 0.8525 |
| No log | 18.96 | 57 | 3.8508 | 0.18 | 0.8790 | 6.0686 | 0.18 | 0.0305 | 0.2630 | 0.8522 |
| No log | 19.96 | 60 | 3.8503 | 0.18 | 0.8789 | 6.0495 | 0.18 | 0.0305 | 0.2581 | 0.8518 |
| No log | 20.96 | 63 | 3.8501 | 0.18 | 0.8789 | 6.0918 | 0.18 | 0.0305 | 0.2581 | 0.8516 |
| No log | 21.96 | 66 | 3.8499 | 0.18 | 0.8788 | 6.0464 | 0.18 | 0.0305 | 0.2536 | 0.8516 |
| No log | 22.96 | 69 | 3.8497 | 0.18 | 0.8788 | 6.0419 | 0.18 | 0.0305 | 0.2535 | 0.8513 |
| No log | 23.96 | 72 | 3.8497 | 0.18 | 0.8788 | 6.0432 | 0.18 | 0.0305 | 0.2578 | 0.8511 |
| No log | 24.96 | 75 | 3.8497 | 0.18 | 0.8788 | 6.0432 | 0.18 | 0.0305 | 0.2578 | 0.8511 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NiscR/dqn-SpaceInvadersNoFrameskip-v4 | NiscR | 2023-07-09T10:52:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T10:52:14Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 737.50 +/- 249.10
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NiscR -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NiscR -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NiscR
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
hegelty/KcBERT-Large-finetuned-josa | hegelty | 2023-07-09T10:43:46Z | 70 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T16:53:29Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: hegelty/KcBERT-Large-finetuned-josa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hegelty/KcBERT-Large-finetuned-josa
This model is a fine-tuned version of [beomi/KcBERT-Large](https://huggingface.co/beomi/KcBERT-Large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0058
- Validation Loss: 0.0000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
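As a minimal, hypothetical usage sketch (the repo ships TensorFlow weights, so the pipeline is asked for the TF framework explicitly; the Korean example sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="hegelty/KcBERT-Large-finetuned-josa",
    framework="tf",
)
# Mask a Korean particle (josa) and let the model fill it in
print(fill_mask("저[MASK] 학생입니다."))
```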
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 59393, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0058 | 0.0000 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.9.2
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/orca_mini_v2_13b-GGML | TheBloke | 2023-07-09T10:28:34Z | 0 | 24 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:psmathur/orca_minis_uncensored_dataset",
"arxiv:2306.02707",
"arxiv:2302.13971",
"arxiv:2304.12244",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-generation | 2023-07-09T10:07:58Z | ---
inference: false
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- psmathur/orca_minis_uncensored_dataset
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Pankaj Mathur's Orca Mini v2 13B GGML
These files are GGML format model files for [Pankaj Mathur's Orca Mini v2 13B](https://huggingface.co/psmathur/orca_mini_v2_13b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/orca_mini_v2_13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v2_13b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v2_13b)
## Prompt template: orca_mini
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Input:
input, if required
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca_mini_v2_13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| orca_mini_v2_13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca_mini_v2_13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca_mini_v2_13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| orca_mini_v2_13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| orca_mini_v2_13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca_mini_v2_13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| orca_mini_v2_13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca_mini_v2_13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca_mini_v2_13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| orca_mini_v2_13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca_mini_v2_13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
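For Python use, a minimal sketch with `llama-cpp-python` (one of the compatible libraries listed above) might look like the following; this assumes a llama-cpp-python release from the GGML era, as newer versions expect GGUF files instead:
```python
from llama_cpp import Llama

# Point this at whichever quantised file you downloaded from this repo
llm = Llama(
    model_path="orca_mini_v2_13b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # context length used in the llama.cpp command above
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

prompt = "### User: Write a story about llamas\n### Response:"
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```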
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini v2 13B
# orca_mini_v2_13b
An **Uncensored** LLaMA-13b model built in collaboration with [Eric Hartford](https://huggingface.co/ehartford), trained on explain-tuned datasets created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset construction approaches from the Orca Research Paper.
Please note this model has *better code generation capabilities* compared to our original orca_mini_13b, which was trained on the base OpenLLaMA-13b model and which has the [empty-spaces issue and was found to be poor at code generation](https://github.com/openlm-research/open_llama#update-06072023).
**P.S. I am #opentowork, if you can help, please reach out to me at www.linkedin.com/in/pankajam**
# Evaluation
I evaluated orca_mini_v2_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Value**|**Stderr**|
|:------:|:-------------:|:---------:|
|*arc_challenge*|0.5572|0.0145|
|*hellaswag*|0.7964|0.0040|
|*mmlu*|0.4969|0.035|
|*truthfulqa_mc*|0.5231|0.0158|
|*Total Average*|0.5933|0.0114|
# Dataset
We applied an uncensoring script on top of the previously built explain-tuned datasets, namely the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps the student model (i.e. this model) learn the ***thought*** process of the teacher model, ChatGPT (gpt-3.5-turbo-0301 version).
Please see the example usage below for how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training ran on 4x A100 (80 GB) GPUs and took around 21 hours, at a cost of $210 (~$10/hour for a Spot Instance), using [Azure Standard_NC96ads_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nc-a100-v4-series#supported-features).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [FastChat](https://github.com/lm-sys/FastChat).
Here are some of params used during training:
|**Parameter**|**Value**|
|:-------------:|:-------------:|
|*batch_size*|48|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|2048|
|*Epochs*|3|
|*Optimizer*|AdamW|
# Example Usage
Here is prompt format for [Oobabooga Text generation UI ](https://github.com/oobabooga/text-generation-webui)
```
### System:
{system}
### User:
{instruction}
### Input:
{input}
### Response:
```
Here is sample example:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me how to break into my own car
### Input:
### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
Below shows a code example on how to use this model
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
#generate text function
def generate_text(system, instruction, input=None):
if input:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
else:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
tokens = tokenizer.encode(prompt)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length+instance['generate_len'],
use_cache=True,
do_sample=True,
top_p=instance['top_p'],
temperature=instance['temperature'],
top_k=instance['top_k']
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f'[!] Response: {string}'
# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))
```
**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**
```
[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for the text generation UI (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found orca_mini_v2_13b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{orca_mini_v2_13b,
author = {Pankaj Mathur},
title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_13b}},
}
```
```
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ArisuNguyen/retrain_non_seg_mbart | ArisuNguyen | 2023-07-09T10:26:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T08:50:42Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: retrain_non_seg_mbart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain_non_seg_mbart
This model is a fine-tuned version of [ArisuNguyen/retrain_non_seg_mbart](https://huggingface.co/ArisuNguyen/retrain_non_seg_mbart) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vrsen/falcon-7b-instruct-ft-adapters | vrsen | 2023-07-09T10:25:12Z | 8 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-09T10:25:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
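A minimal, hypothetical sketch of loading these adapters with PEFT follows; the base model name and the 4-bit settings are assumptions inferred from the config above, not confirmed by this card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

adapter_id = "vrsen/falcon-7b-instruct-ft-adapters"
base_id = "tiiuae/falcon-7b-instruct"  # assumed base model, not stated in this card

# Mirror the 4-bit nf4 settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon used custom modelling code at the time
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```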
### Framework versions
- PEFT 0.4.0.dev0
|
jordyvl/dit-tiny_rvl_cdip_100_examples_per_class_simkd_CEKD_t1_aNone | jordyvl | 2023-07-09T10:18:24Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T09:28:41Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_rvl_cdip_100_examples_per_class_simkd_CEKD_t1_aNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_rvl_cdip_100_examples_per_class_simkd_CEKD_t1_aNone
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1502
- Accuracy: 0.0625
- Brier Loss: 0.9374
- Nll: 9.1398
- F1 Micro: 0.0625
- F1 Macro: 0.0074
- Ece: 0.1015
- Aurc: 0.9383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.1540 | 0.0625 | 0.9376 | 8.5438 | 0.0625 | 0.0074 | 0.1043 | 0.9530 |
| No log | 1.96 | 24 | 0.1519 | 0.0625 | 0.9376 | 8.2831 | 0.0625 | 0.0074 | 0.1008 | 0.9465 |
| No log | 2.96 | 36 | 0.1512 | 0.0625 | 0.9375 | 8.4629 | 0.0625 | 0.0074 | 0.1028 | 0.9336 |
| No log | 3.96 | 48 | 0.1510 | 0.0625 | 0.9375 | 8.6283 | 0.0625 | 0.0074 | 0.1027 | 0.9365 |
| No log | 4.96 | 60 | 0.1509 | 0.0625 | 0.9375 | 8.5065 | 0.0625 | 0.0074 | 0.1030 | 0.9433 |
| No log | 5.96 | 72 | 0.1508 | 0.0625 | 0.9375 | 8.4779 | 0.0625 | 0.0074 | 0.1017 | 0.9414 |
| No log | 6.96 | 84 | 0.1507 | 0.0625 | 0.9375 | 8.5053 | 0.0625 | 0.0074 | 0.1045 | 0.9438 |
| No log | 7.96 | 96 | 0.1507 | 0.0625 | 0.9375 | 8.7396 | 0.0625 | 0.0074 | 0.1032 | 0.9440 |
| No log | 8.96 | 108 | 0.1506 | 0.0625 | 0.9375 | 8.6420 | 0.0625 | 0.0074 | 0.1031 | 0.9448 |
| No log | 9.96 | 120 | 0.1506 | 0.0625 | 0.9375 | 8.8410 | 0.0625 | 0.0074 | 0.1045 | 0.9438 |
| No log | 10.96 | 132 | 0.1506 | 0.0625 | 0.9374 | 8.9438 | 0.0625 | 0.0074 | 0.1042 | 0.9413 |
| No log | 11.96 | 144 | 0.1505 | 0.0625 | 0.9374 | 8.9847 | 0.0625 | 0.0074 | 0.1032 | 0.9418 |
| No log | 12.96 | 156 | 0.1505 | 0.0625 | 0.9374 | 9.0594 | 0.0625 | 0.0074 | 0.1031 | 0.9397 |
| No log | 13.96 | 168 | 0.1504 | 0.0625 | 0.9374 | 9.0748 | 0.0625 | 0.0074 | 0.1045 | 0.9343 |
| No log | 14.96 | 180 | 0.1504 | 0.0625 | 0.9374 | 9.0912 | 0.0625 | 0.0074 | 0.1018 | 0.9358 |
| No log | 15.96 | 192 | 0.1504 | 0.0625 | 0.9374 | 9.0950 | 0.0625 | 0.0074 | 0.1032 | 0.9331 |
| No log | 16.96 | 204 | 0.1503 | 0.0625 | 0.9374 | 9.2141 | 0.0625 | 0.0074 | 0.1015 | 0.9363 |
| No log | 17.96 | 216 | 0.1503 | 0.0625 | 0.9374 | 9.0918 | 0.0625 | 0.0074 | 0.1046 | 0.9354 |
| No log | 18.96 | 228 | 0.1503 | 0.0625 | 0.9374 | 9.1430 | 0.0625 | 0.0074 | 0.1018 | 0.9385 |
| No log | 19.96 | 240 | 0.1503 | 0.0625 | 0.9374 | 9.2149 | 0.0625 | 0.0074 | 0.0991 | 0.9404 |
| No log | 20.96 | 252 | 0.1503 | 0.0625 | 0.9374 | 9.0900 | 0.0625 | 0.0074 | 0.1043 | 0.9386 |
| No log | 21.96 | 264 | 0.1503 | 0.0625 | 0.9374 | 9.1244 | 0.0625 | 0.0074 | 0.1060 | 0.9395 |
| No log | 22.96 | 276 | 0.1503 | 0.0625 | 0.9374 | 9.1353 | 0.0625 | 0.0074 | 0.1005 | 0.9378 |
| No log | 23.96 | 288 | 0.1502 | 0.0625 | 0.9374 | 9.2063 | 0.0625 | 0.0074 | 0.1032 | 0.9373 |
| No log | 24.96 | 300 | 0.1502 | 0.0625 | 0.9374 | 9.1398 | 0.0625 | 0.0074 | 0.1015 | 0.9383 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/dit-small_tobacco3482_kd_CEKD_t2.5_a0.7 | jordyvl | 2023-07-09T10:17:47Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T10:00:03Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t2.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_CEKD_t2.5_a0.7
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1993
- Accuracy: 0.185
- Brier Loss: 0.8672
- Nll: 6.5703
- F1 Micro: 0.185
- F1 Macro: 0.0488
- Ece: 0.2594
- Aurc: 0.7367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 3.4684 | 0.06 | 0.9042 | 9.2910 | 0.06 | 0.0114 | 0.1755 | 0.9033 |
| No log | 1.96 | 6 | 3.3741 | 0.18 | 0.8886 | 6.5491 | 0.18 | 0.0305 | 0.2324 | 0.8055 |
| No log | 2.96 | 9 | 3.2779 | 0.18 | 0.8767 | 7.2662 | 0.18 | 0.0305 | 0.2493 | 0.8196 |
| No log | 3.96 | 12 | 3.2605 | 0.18 | 0.8816 | 7.0963 | 0.18 | 0.0305 | 0.2628 | 0.8140 |
| No log | 4.96 | 15 | 3.2592 | 0.185 | 0.8814 | 6.9350 | 0.185 | 0.0488 | 0.2584 | 0.7850 |
| No log | 5.96 | 18 | 3.2576 | 0.185 | 0.8782 | 6.3113 | 0.185 | 0.0488 | 0.2561 | 0.7731 |
| No log | 6.96 | 21 | 3.2540 | 0.185 | 0.8747 | 6.0058 | 0.185 | 0.0488 | 0.2446 | 0.7705 |
| No log | 7.96 | 24 | 3.2500 | 0.185 | 0.8731 | 5.9849 | 0.185 | 0.0488 | 0.2442 | 0.7669 |
| No log | 8.96 | 27 | 3.2430 | 0.185 | 0.8717 | 5.9785 | 0.185 | 0.0488 | 0.2483 | 0.7626 |
| No log | 9.96 | 30 | 3.2377 | 0.185 | 0.8711 | 6.2837 | 0.185 | 0.0488 | 0.2462 | 0.7609 |
| No log | 10.96 | 33 | 3.2332 | 0.185 | 0.8713 | 6.8641 | 0.185 | 0.0488 | 0.2560 | 0.7601 |
| No log | 11.96 | 36 | 3.2293 | 0.185 | 0.8719 | 6.8631 | 0.185 | 0.0488 | 0.2523 | 0.7587 |
| No log | 12.96 | 39 | 3.2246 | 0.185 | 0.8717 | 6.8535 | 0.185 | 0.0488 | 0.2526 | 0.7558 |
| No log | 13.96 | 42 | 3.2190 | 0.185 | 0.8709 | 6.8177 | 0.185 | 0.0488 | 0.2565 | 0.7533 |
| No log | 14.96 | 45 | 3.2134 | 0.185 | 0.8700 | 6.7894 | 0.185 | 0.0488 | 0.2630 | 0.7533 |
| No log | 15.96 | 48 | 3.2091 | 0.185 | 0.8691 | 6.7672 | 0.185 | 0.0488 | 0.2585 | 0.7500 |
| No log | 16.96 | 51 | 3.2069 | 0.185 | 0.8687 | 6.6512 | 0.185 | 0.0488 | 0.2536 | 0.7466 |
| No log | 17.96 | 54 | 3.2063 | 0.185 | 0.8682 | 6.5227 | 0.185 | 0.0488 | 0.2520 | 0.7429 |
| No log | 18.96 | 57 | 3.2057 | 0.185 | 0.8682 | 6.5119 | 0.185 | 0.0488 | 0.2514 | 0.7406 |
| No log | 19.96 | 60 | 3.2036 | 0.185 | 0.8678 | 6.5674 | 0.185 | 0.0488 | 0.2501 | 0.7385 |
| No log | 20.96 | 63 | 3.2023 | 0.185 | 0.8677 | 6.5709 | 0.185 | 0.0488 | 0.2506 | 0.7385 |
| No log | 21.96 | 66 | 3.2010 | 0.185 | 0.8675 | 6.5731 | 0.185 | 0.0488 | 0.2631 | 0.7376 |
| No log | 22.96 | 69 | 3.2000 | 0.185 | 0.8673 | 6.5723 | 0.185 | 0.0488 | 0.2591 | 0.7371 |
| No log | 23.96 | 72 | 3.1996 | 0.185 | 0.8673 | 6.5715 | 0.185 | 0.0488 | 0.2593 | 0.7368 |
| No log | 24.96 | 75 | 3.1993 | 0.185 | 0.8672 | 6.5703 | 0.185 | 0.0488 | 0.2594 | 0.7367 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KJan05/ppo-CartPole-v1-unit8-p1 | KJan05 | 2023-07-09T10:09:08Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T08:36:34Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -80.21 +/- 69.99
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'KJan05/ppo-CartPole-v1-unit8-p1'
'batch_size': 512
'minibatch_size': 128}
```
|
DovahYol/Reinforce-Pixelcopter-PLE-v0 | DovahYol | 2023-07-09T10:04:12Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T10:04:05Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 65.90 +/- 39.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t2.5_a0.7 | jordyvl | 2023-07-09T09:59:20Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T09:43:16Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t2.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t2.5_a0.7
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2510
- Accuracy: 0.18
- Brier Loss: 0.8767
- Nll: 6.8039
- F1 Micro: 0.18
- F1 Macro: 0.0306
- Ece: 0.2513
- Aurc: 0.8508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 3.4586 | 0.145 | 0.8999 | 10.1587 | 0.145 | 0.0253 | 0.2221 | 0.8467 |
| No log | 1.96 | 6 | 3.4232 | 0.145 | 0.8946 | 10.5824 | 0.145 | 0.0253 | 0.2242 | 0.8475 |
| No log | 2.96 | 9 | 3.3704 | 0.16 | 0.8867 | 8.6135 | 0.16 | 0.0503 | 0.2171 | 0.8440 |
| No log | 3.96 | 12 | 3.3273 | 0.155 | 0.8807 | 6.5471 | 0.155 | 0.0274 | 0.2248 | 0.8831 |
| No log | 4.96 | 15 | 3.3006 | 0.155 | 0.8779 | 6.8045 | 0.155 | 0.0271 | 0.2331 | 0.8918 |
| No log | 5.96 | 18 | 3.2856 | 0.16 | 0.8773 | 8.2046 | 0.16 | 0.0329 | 0.2361 | 0.8956 |
| No log | 6.96 | 21 | 3.2758 | 0.18 | 0.8774 | 8.0738 | 0.18 | 0.0308 | 0.2561 | 0.8544 |
| No log | 7.96 | 24 | 3.2688 | 0.18 | 0.8778 | 7.1046 | 0.18 | 0.0308 | 0.2647 | 0.8524 |
| No log | 8.96 | 27 | 3.2630 | 0.18 | 0.8778 | 6.9910 | 0.18 | 0.0306 | 0.2591 | 0.8530 |
| No log | 9.96 | 30 | 3.2597 | 0.18 | 0.8778 | 6.9680 | 0.18 | 0.0306 | 0.2736 | 0.8538 |
| No log | 10.96 | 33 | 3.2573 | 0.18 | 0.8776 | 6.9547 | 0.18 | 0.0306 | 0.2698 | 0.8536 |
| No log | 11.96 | 36 | 3.2557 | 0.18 | 0.8775 | 6.9491 | 0.18 | 0.0306 | 0.2653 | 0.8533 |
| No log | 12.96 | 39 | 3.2546 | 0.18 | 0.8773 | 6.8987 | 0.18 | 0.0306 | 0.2606 | 0.8526 |
| No log | 13.96 | 42 | 3.2536 | 0.18 | 0.8771 | 6.8204 | 0.18 | 0.0306 | 0.2601 | 0.8523 |
| No log | 14.96 | 45 | 3.2528 | 0.18 | 0.8771 | 6.8141 | 0.18 | 0.0306 | 0.2521 | 0.8519 |
| No log | 15.96 | 48 | 3.2522 | 0.18 | 0.8769 | 6.8074 | 0.18 | 0.0306 | 0.2606 | 0.8517 |
| No log | 16.96 | 51 | 3.2519 | 0.18 | 0.8769 | 6.8077 | 0.18 | 0.0306 | 0.2607 | 0.8515 |
| No log | 17.96 | 54 | 3.2520 | 0.18 | 0.8769 | 6.8050 | 0.18 | 0.0306 | 0.2561 | 0.8510 |
| No log | 18.96 | 57 | 3.2520 | 0.18 | 0.8769 | 6.8057 | 0.18 | 0.0306 | 0.2519 | 0.8509 |
| No log | 19.96 | 60 | 3.2515 | 0.18 | 0.8768 | 6.8046 | 0.18 | 0.0306 | 0.2556 | 0.8507 |
| No log | 20.96 | 63 | 3.2514 | 0.18 | 0.8768 | 6.8048 | 0.18 | 0.0306 | 0.2515 | 0.8506 |
| No log | 21.96 | 66 | 3.2512 | 0.18 | 0.8767 | 6.8048 | 0.18 | 0.0306 | 0.2556 | 0.8508 |
| No log | 22.96 | 69 | 3.2510 | 0.18 | 0.8767 | 6.8045 | 0.18 | 0.0306 | 0.2513 | 0.8509 |
| No log | 23.96 | 72 | 3.2510 | 0.18 | 0.8767 | 6.8043 | 0.18 | 0.0306 | 0.2513 | 0.8508 |
| No log | 24.96 | 75 | 3.2510 | 0.18 | 0.8767 | 6.8039 | 0.18 | 0.0306 | 0.2513 | 0.8508 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/dit-small_tobacco3482_kd_CEKD_t2.5_a0.5 | jordyvl | 2023-07-09T09:42:35Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T09:29:00Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8936
- Accuracy: 0.185
- Brier Loss: 0.8707
- Nll: 6.6284
- F1 Micro: 0.185
- F1 Macro: 0.0488
- Ece: 0.2527
- Aurc: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 4.2363 | 0.06 | 0.9043 | 9.2962 | 0.06 | 0.0114 | 0.1758 | 0.9032 |
| No log | 1.96 | 6 | 4.1268 | 0.18 | 0.8887 | 6.8683 | 0.18 | 0.0305 | 0.2329 | 0.8055 |
| No log | 2.96 | 9 | 4.0044 | 0.18 | 0.8773 | 7.3055 | 0.18 | 0.0305 | 0.2510 | 0.8219 |
| No log | 3.96 | 12 | 3.9678 | 0.18 | 0.8851 | 7.2435 | 0.18 | 0.0305 | 0.2677 | 0.8214 |
| No log | 4.96 | 15 | 3.9645 | 0.185 | 0.8877 | 6.9806 | 0.185 | 0.0488 | 0.2757 | 0.7934 |
| No log | 5.96 | 18 | 3.9635 | 0.185 | 0.8853 | 6.9543 | 0.185 | 0.0488 | 0.2551 | 0.7812 |
| No log | 6.96 | 21 | 3.9564 | 0.185 | 0.8801 | 6.0556 | 0.185 | 0.0488 | 0.2515 | 0.7771 |
| No log | 7.96 | 24 | 3.9505 | 0.185 | 0.8772 | 6.0356 | 0.185 | 0.0488 | 0.2598 | 0.7724 |
| No log | 8.96 | 27 | 3.9435 | 0.185 | 0.8751 | 6.0288 | 0.185 | 0.0488 | 0.2590 | 0.7697 |
| No log | 9.96 | 30 | 3.9383 | 0.185 | 0.8742 | 6.0724 | 0.185 | 0.0488 | 0.2474 | 0.7712 |
| No log | 10.96 | 33 | 3.9336 | 0.185 | 0.8746 | 6.7953 | 0.185 | 0.0488 | 0.2533 | 0.7685 |
| No log | 11.96 | 36 | 3.9298 | 0.185 | 0.8755 | 6.9469 | 0.185 | 0.0488 | 0.2679 | 0.7659 |
| No log | 12.96 | 39 | 3.9253 | 0.185 | 0.8756 | 6.9654 | 0.185 | 0.0488 | 0.2591 | 0.7640 |
| No log | 13.96 | 42 | 3.9194 | 0.185 | 0.8750 | 6.9522 | 0.185 | 0.0488 | 0.2681 | 0.7604 |
| No log | 14.96 | 45 | 3.9128 | 0.185 | 0.8744 | 6.9200 | 0.185 | 0.0488 | 0.2611 | 0.7617 |
| No log | 15.96 | 48 | 3.9074 | 0.185 | 0.8733 | 6.8369 | 0.185 | 0.0488 | 0.2611 | 0.7600 |
| No log | 16.96 | 51 | 3.9041 | 0.185 | 0.8726 | 6.8278 | 0.185 | 0.0488 | 0.2558 | 0.7566 |
| No log | 17.96 | 54 | 3.9025 | 0.185 | 0.8719 | 6.7039 | 0.185 | 0.0488 | 0.2588 | 0.7510 |
| No log | 18.96 | 57 | 3.9012 | 0.185 | 0.8717 | 6.6384 | 0.185 | 0.0488 | 0.2580 | 0.7484 |
| No log | 19.96 | 60 | 3.8987 | 0.185 | 0.8712 | 6.6323 | 0.185 | 0.0488 | 0.2612 | 0.7450 |
| No log | 20.96 | 63 | 3.8971 | 0.185 | 0.8712 | 6.6319 | 0.185 | 0.0488 | 0.2615 | 0.7443 |
| No log | 21.96 | 66 | 3.8956 | 0.185 | 0.8710 | 6.6323 | 0.185 | 0.0488 | 0.2659 | 0.7439 |
| No log | 22.96 | 69 | 3.8945 | 0.185 | 0.8708 | 6.6307 | 0.185 | 0.0488 | 0.2569 | 0.7436 |
| No log | 23.96 | 72 | 3.8940 | 0.185 | 0.8708 | 6.6295 | 0.185 | 0.0488 | 0.2526 | 0.7434 |
| No log | 24.96 | 75 | 3.8936 | 0.185 | 0.8707 | 6.6284 | 0.185 | 0.0488 | 0.2527 | 0.7434 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
tienlansun/distillbert-based-uncased-mnli | tienlansun | 2023-07-09T09:41:56Z | 199 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T09:38:54Z | ---
datasets:
- glue
language:
- en
pipeline_tag: text-classification
--- |
demelianov/mira_model | demelianov | 2023-07-09T09:38:18Z | 31 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-09T09:29:40Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - demelianov/mira_model
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
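A minimal inference sketch with 🤗 Diffusers (the prompt reuses the training instance prompt `a photo of sks person`; the scene suffix and output filename are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "demelianov/mira_model", torch_dtype=torch.float16
).to("cuda")  # use float32 on CPU if no GPU is available
image = pipe("a photo of sks person in a park", num_inference_steps=50).images[0]
image.save("sks_person.png")
```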
|
cgcgcgcgcg/111 | cgcgcgcgcg | 2023-07-09T09:32:21Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-09T09:31:54Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t2.5_a0.5 | jordyvl | 2023-07-09T09:28:18Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T09:16:22Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9560
- Accuracy: 0.18
- Brier Loss: 0.8800
- Nll: 6.8606
- F1 Micro: 0.18
- F1 Macro: 0.0306
- Ece: 0.2612
- Aurc: 0.8512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 4.2281 | 0.145 | 0.8999 | 10.1620 | 0.145 | 0.0253 | 0.2222 | 0.8467 |
| No log | 1.96 | 6 | 4.1872 | 0.145 | 0.8946 | 10.5915 | 0.145 | 0.0253 | 0.2275 | 0.8468 |
| No log | 2.96 | 9 | 4.1248 | 0.155 | 0.8866 | 8.6280 | 0.155 | 0.0360 | 0.2179 | 0.8487 |
| No log | 3.96 | 12 | 4.0716 | 0.155 | 0.8806 | 6.5480 | 0.155 | 0.0272 | 0.2254 | 0.8851 |
| No log | 4.96 | 15 | 4.0359 | 0.155 | 0.8778 | 6.7781 | 0.155 | 0.0271 | 0.2310 | 0.8931 |
| No log | 5.96 | 18 | 4.0135 | 0.155 | 0.8774 | 7.8547 | 0.155 | 0.0271 | 0.2345 | 0.8965 |
| No log | 6.96 | 21 | 3.9978 | 0.185 | 0.8779 | 8.3528 | 0.185 | 0.0468 | 0.2615 | 0.8612 |
| No log | 7.96 | 24 | 3.9867 | 0.18 | 0.8789 | 7.6001 | 0.18 | 0.0308 | 0.2618 | 0.8546 |
| No log | 8.96 | 27 | 3.9782 | 0.18 | 0.8796 | 7.0871 | 0.18 | 0.0306 | 0.2613 | 0.8538 |
| No log | 9.96 | 30 | 3.9726 | 0.18 | 0.8800 | 7.0519 | 0.18 | 0.0306 | 0.2687 | 0.8545 |
| No log | 10.96 | 33 | 3.9684 | 0.18 | 0.8803 | 7.0277 | 0.18 | 0.0306 | 0.2656 | 0.8537 |
| No log | 11.96 | 36 | 3.9654 | 0.18 | 0.8805 | 7.0162 | 0.18 | 0.0306 | 0.2708 | 0.8536 |
| No log | 12.96 | 39 | 3.9633 | 0.18 | 0.8805 | 7.0056 | 0.18 | 0.0306 | 0.2619 | 0.8535 |
| No log | 13.96 | 42 | 3.9614 | 0.18 | 0.8804 | 6.9981 | 0.18 | 0.0306 | 0.2617 | 0.8532 |
| No log | 14.96 | 45 | 3.9598 | 0.18 | 0.8804 | 6.9923 | 0.18 | 0.0306 | 0.2669 | 0.8531 |
| No log | 15.96 | 48 | 3.9586 | 0.18 | 0.8803 | 6.9334 | 0.18 | 0.0306 | 0.2669 | 0.8529 |
| No log | 16.96 | 51 | 3.9578 | 0.18 | 0.8802 | 6.9237 | 0.18 | 0.0306 | 0.2716 | 0.8522 |
| No log | 17.96 | 54 | 3.9576 | 0.18 | 0.8802 | 6.8704 | 0.18 | 0.0306 | 0.2666 | 0.8521 |
| No log | 18.96 | 57 | 3.9574 | 0.18 | 0.8802 | 6.8662 | 0.18 | 0.0306 | 0.2664 | 0.8523 |
| No log | 19.96 | 60 | 3.9568 | 0.18 | 0.8801 | 6.8641 | 0.18 | 0.0306 | 0.2614 | 0.8518 |
| No log | 20.96 | 63 | 3.9566 | 0.18 | 0.8801 | 6.8634 | 0.18 | 0.0306 | 0.2659 | 0.8516 |
| No log | 21.96 | 66 | 3.9563 | 0.18 | 0.8800 | 6.8632 | 0.18 | 0.0306 | 0.2612 | 0.8516 |
| No log | 22.96 | 69 | 3.9561 | 0.18 | 0.8800 | 6.8620 | 0.18 | 0.0306 | 0.2612 | 0.8513 |
| No log | 23.96 | 72 | 3.9561 | 0.18 | 0.8800 | 6.8611 | 0.18 | 0.0306 | 0.2612 | 0.8513 |
| No log | 24.96 | 75 | 3.9560 | 0.18 | 0.8800 | 6.8606 | 0.18 | 0.0306 | 0.2612 | 0.8512 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
crisU8/bert-finetuned-ner-clinical-BETO-1-uncased | crisU8 | 2023-07-09T09:19:58Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T09:06:55Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-BETO-1-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-BETO-1-uncased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5376
- Precision: 0.7341
- Recall: 0.7772
- F1: 0.7550
- Accuracy: 0.9177
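A minimal inference sketch with the token-classification pipeline (the clinical sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="crisU8/bert-finetuned-ner-clinical-BETO-1-uncased",
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)
print(ner("Paciente con dolor abdominal y fiebre de tres días de evolución."))
```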
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4682 | 1.0 | 502 | 0.3263 | 0.6124 | 0.7344 | 0.6678 | 0.8939 |
| 0.2443 | 2.0 | 1004 | 0.2778 | 0.6809 | 0.7519 | 0.7147 | 0.9122 |
| 0.1728 | 3.0 | 1506 | 0.2898 | 0.7011 | 0.7481 | 0.7238 | 0.9155 |
| 0.1277 | 4.0 | 2008 | 0.3182 | 0.6970 | 0.7640 | 0.7290 | 0.9118 |
| 0.0928 | 5.0 | 2510 | 0.3578 | 0.6975 | 0.7667 | 0.7305 | 0.9128 |
| 0.0699 | 6.0 | 3012 | 0.3931 | 0.7058 | 0.7794 | 0.7407 | 0.9102 |
| 0.0538 | 7.0 | 3514 | 0.4213 | 0.7225 | 0.7574 | 0.7395 | 0.9140 |
| 0.0413 | 8.0 | 4016 | 0.4387 | 0.7143 | 0.7821 | 0.7467 | 0.9147 |
| 0.033 | 9.0 | 4518 | 0.4997 | 0.7184 | 0.7728 | 0.7446 | 0.9147 |
| 0.0265 | 10.0 | 5020 | 0.5056 | 0.7180 | 0.7728 | 0.7444 | 0.9152 |
| 0.0225 | 11.0 | 5522 | 0.5237 | 0.7250 | 0.7728 | 0.7481 | 0.9164 |
| 0.0176 | 12.0 | 6024 | 0.5376 | 0.7341 | 0.7772 | 0.7550 | 0.9177 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-small_tobacco3482_kd_CEKD_t1.5_a0.9 | jordyvl | 2023-07-09T09:15:40Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T09:02:07Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2890
- Accuracy: 0.19
- Brier Loss: 0.8648
- Nll: 6.4150
- F1 Micro: 0.19
- F1 Macro: 0.0641
- Ece: 0.2450
- Aurc: 0.7332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.4806 | 0.06 | 0.9041 | 9.2838 | 0.06 | 0.0114 | 0.1750 | 0.9034 |
| No log | 1.96 | 6 | 2.4041 | 0.18 | 0.8884 | 6.3227 | 0.18 | 0.0305 | 0.2317 | 0.8027 |
| No log | 2.96 | 9 | 2.3381 | 0.18 | 0.8760 | 6.9952 | 0.18 | 0.0305 | 0.2424 | 0.8118 |
| No log | 3.96 | 12 | 2.3362 | 0.185 | 0.8771 | 6.9040 | 0.185 | 0.0488 | 0.2544 | 0.7841 |
| No log | 4.96 | 15 | 2.3345 | 0.185 | 0.8747 | 6.8515 | 0.185 | 0.0488 | 0.2476 | 0.7768 |
| No log | 5.96 | 18 | 2.3339 | 0.185 | 0.8725 | 6.0111 | 0.185 | 0.0490 | 0.2457 | 0.7670 |
| No log | 6.96 | 21 | 2.3348 | 0.185 | 0.8718 | 5.9199 | 0.185 | 0.0488 | 0.2328 | 0.7596 |
| No log | 7.96 | 24 | 2.3310 | 0.185 | 0.8711 | 5.9008 | 0.185 | 0.0488 | 0.2443 | 0.7536 |
| No log | 8.96 | 27 | 2.3231 | 0.185 | 0.8699 | 5.8793 | 0.185 | 0.0488 | 0.2337 | 0.7516 |
| No log | 9.96 | 30 | 2.3181 | 0.185 | 0.8694 | 6.6980 | 0.185 | 0.0488 | 0.2507 | 0.7500 |
| No log | 10.96 | 33 | 2.3139 | 0.185 | 0.8692 | 6.7350 | 0.185 | 0.0488 | 0.2481 | 0.7488 |
| No log | 11.96 | 36 | 2.3099 | 0.185 | 0.8690 | 6.7557 | 0.185 | 0.0488 | 0.2484 | 0.7463 |
| No log | 12.96 | 39 | 2.3057 | 0.185 | 0.8684 | 6.6765 | 0.185 | 0.0488 | 0.2598 | 0.7441 |
| No log | 13.96 | 42 | 2.3014 | 0.185 | 0.8676 | 6.6313 | 0.185 | 0.0488 | 0.2478 | 0.7420 |
| No log | 14.96 | 45 | 2.2978 | 0.185 | 0.8669 | 6.6142 | 0.185 | 0.0488 | 0.2496 | 0.7412 |
| No log | 15.96 | 48 | 2.2955 | 0.185 | 0.8664 | 6.5990 | 0.185 | 0.0488 | 0.2379 | 0.7399 |
| No log | 16.96 | 51 | 2.2947 | 0.185 | 0.8662 | 6.4895 | 0.185 | 0.0488 | 0.2452 | 0.7375 |
| No log | 17.96 | 54 | 2.2949 | 0.185 | 0.8661 | 6.4730 | 0.185 | 0.0488 | 0.2438 | 0.7354 |
| No log | 18.96 | 57 | 2.2949 | 0.185 | 0.8661 | 6.4244 | 0.185 | 0.0488 | 0.2435 | 0.7356 |
| No log | 19.96 | 60 | 2.2930 | 0.185 | 0.8657 | 6.3676 | 0.185 | 0.0490 | 0.2389 | 0.7341 |
| No log | 20.96 | 63 | 2.2918 | 0.19 | 0.8654 | 6.4233 | 0.19 | 0.0641 | 0.2446 | 0.7336 |
| No log | 21.96 | 66 | 2.2905 | 0.19 | 0.8651 | 6.4742 | 0.19 | 0.0641 | 0.2485 | 0.7334 |
| No log | 22.96 | 69 | 2.2897 | 0.19 | 0.8649 | 6.4243 | 0.19 | 0.0641 | 0.2448 | 0.7332 |
| No log | 23.96 | 72 | 2.2893 | 0.19 | 0.8648 | 6.4174 | 0.19 | 0.0641 | 0.2450 | 0.7332 |
| No log | 24.96 | 75 | 2.2890 | 0.19 | 0.8648 | 6.4150 | 0.19 | 0.0641 | 0.2450 | 0.7332 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9 | jordyvl | 2023-07-09T09:01:17Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T08:49:44Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3286
- Accuracy: 0.18
- Brier Loss: 0.8742
- Nll: 6.7213
- F1 Micro: 0.18
- F1 Macro: 0.0306
- Ece: 0.2558
- Aurc: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.4683 | 0.145 | 0.8999 | 10.1538 | 0.145 | 0.0253 | 0.2220 | 0.8466 |
| No log | 1.96 | 6 | 2.4396 | 0.145 | 0.8947 | 10.5704 | 0.145 | 0.0253 | 0.2237 | 0.8463 |
| No log | 2.96 | 9 | 2.3985 | 0.145 | 0.8869 | 8.5511 | 0.145 | 0.0451 | 0.2116 | 0.8036 |
| No log | 3.96 | 12 | 2.3677 | 0.21 | 0.8810 | 6.5446 | 0.2100 | 0.0611 | 0.2566 | 0.8335 |
| No log | 4.96 | 15 | 2.3517 | 0.155 | 0.8780 | 6.8400 | 0.155 | 0.0279 | 0.2309 | 0.8894 |
| No log | 5.96 | 18 | 2.3450 | 0.18 | 0.8771 | 8.1897 | 0.18 | 0.0313 | 0.2495 | 0.8531 |
| No log | 6.96 | 21 | 2.3407 | 0.18 | 0.8767 | 7.3073 | 0.18 | 0.0306 | 0.2551 | 0.8513 |
| No log | 7.96 | 24 | 2.3371 | 0.18 | 0.8763 | 6.9328 | 0.18 | 0.0306 | 0.2501 | 0.8520 |
| No log | 8.96 | 27 | 2.3337 | 0.18 | 0.8757 | 6.8828 | 0.18 | 0.0306 | 0.2507 | 0.8525 |
| No log | 9.96 | 30 | 2.3321 | 0.18 | 0.8753 | 6.8682 | 0.18 | 0.0306 | 0.2508 | 0.8524 |
| No log | 10.96 | 33 | 2.3312 | 0.18 | 0.8751 | 6.7981 | 0.18 | 0.0306 | 0.2462 | 0.8521 |
| No log | 11.96 | 36 | 2.3309 | 0.18 | 0.8749 | 6.7375 | 0.18 | 0.0306 | 0.2531 | 0.8520 |
| No log | 12.96 | 39 | 2.3307 | 0.18 | 0.8748 | 6.7235 | 0.18 | 0.0306 | 0.2524 | 0.8518 |
| No log | 13.96 | 42 | 2.3304 | 0.18 | 0.8747 | 6.7200 | 0.18 | 0.0306 | 0.2482 | 0.8514 |
| No log | 14.96 | 45 | 2.3301 | 0.18 | 0.8746 | 6.7201 | 0.18 | 0.0306 | 0.2410 | 0.8509 |
| No log | 15.96 | 48 | 2.3298 | 0.18 | 0.8746 | 6.7182 | 0.18 | 0.0306 | 0.2449 | 0.8505 |
| No log | 16.96 | 51 | 2.3295 | 0.18 | 0.8745 | 6.7211 | 0.18 | 0.0306 | 0.2412 | 0.8500 |
| No log | 17.96 | 54 | 2.3297 | 0.18 | 0.8745 | 6.7201 | 0.18 | 0.0306 | 0.2449 | 0.8496 |
| No log | 18.96 | 57 | 2.3296 | 0.18 | 0.8745 | 6.7216 | 0.18 | 0.0306 | 0.2392 | 0.8494 |
| No log | 19.96 | 60 | 2.3292 | 0.18 | 0.8744 | 6.7214 | 0.18 | 0.0306 | 0.2371 | 0.8494 |
| No log | 20.96 | 63 | 2.3290 | 0.18 | 0.8744 | 6.7222 | 0.18 | 0.0306 | 0.2371 | 0.8493 |
| No log | 21.96 | 66 | 2.3288 | 0.18 | 0.8743 | 6.7227 | 0.18 | 0.0306 | 0.2408 | 0.8494 |
| No log | 22.96 | 69 | 2.3286 | 0.18 | 0.8743 | 6.7223 | 0.18 | 0.0306 | 0.2558 | 0.8490 |
| No log | 23.96 | 72 | 2.3286 | 0.18 | 0.8743 | 6.7218 | 0.18 | 0.0306 | 0.2558 | 0.8491 |
| No log | 24.96 | 75 | 2.3286 | 0.18 | 0.8742 | 6.7213 | 0.18 | 0.0306 | 0.2558 | 0.8491 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mrizalf7/t5-small-finetuned-indosum-2 | mrizalf7 | 2023-07-09T09:00:29Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-09T07:07:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-indosum-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-indosum-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
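A minimal usage sketch with the summarization pipeline (the input article is illustrative; judging from the model name, it targets Indonesian news text from IndoSum):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mrizalf7/t5-small-finetuned-indosum-2")
article = "Pemerintah mengumumkan kebijakan baru untuk mendukung usaha kecil dan menengah di seluruh Indonesia ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```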
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-BETO-uncased-4 | crisU8 | 2023-07-09T08:59:59Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T08:54:00Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-BETO-uncased-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-BETO-uncased-4
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Precision: 0.7142
- Recall: 0.7722
- F1: 0.7421
- Accuracy: 0.9150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0602 | 1.0 | 502 | 0.3957 | 0.7006 | 0.7552 | 0.7269 | 0.9089 |
| 0.0596 | 2.0 | 1004 | 0.3879 | 0.7198 | 0.7629 | 0.7407 | 0.9146 |
| 0.0575 | 3.0 | 1506 | 0.4171 | 0.7142 | 0.7722 | 0.7421 | 0.9150 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-small_tobacco3482_kd_CEKD_t1.5_a0.7 | jordyvl | 2023-07-09T08:49:00Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T08:35:32Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5836
- Accuracy: 0.185
- Brier Loss: 0.8652
- Nll: 6.4546
- F1 Micro: 0.185
- F1 Macro: 0.0488
- Ece: 0.2424
- Aurc: 0.7342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.8093 | 0.06 | 0.9041 | 9.2868 | 0.06 | 0.0114 | 0.1752 | 0.9033 |
| No log | 1.96 | 6 | 2.7245 | 0.18 | 0.8884 | 6.2166 | 0.18 | 0.0305 | 0.2292 | 0.8036 |
| No log | 2.96 | 9 | 2.6443 | 0.18 | 0.8760 | 6.9627 | 0.18 | 0.0305 | 0.2437 | 0.8179 |
| No log | 3.96 | 12 | 2.6356 | 0.185 | 0.8785 | 6.9306 | 0.185 | 0.0488 | 0.2534 | 0.7877 |
| No log | 4.96 | 15 | 2.6338 | 0.185 | 0.8768 | 6.8870 | 0.185 | 0.0488 | 0.2605 | 0.7787 |
| No log | 5.96 | 18 | 2.6325 | 0.185 | 0.8740 | 6.2086 | 0.185 | 0.0490 | 0.2453 | 0.7699 |
| No log | 6.96 | 21 | 2.6322 | 0.185 | 0.8721 | 5.9554 | 0.185 | 0.0488 | 0.2474 | 0.7629 |
| No log | 7.96 | 24 | 2.6293 | 0.185 | 0.8712 | 5.9359 | 0.185 | 0.0488 | 0.2550 | 0.7576 |
| No log | 8.96 | 27 | 2.6221 | 0.185 | 0.8701 | 5.9468 | 0.185 | 0.0488 | 0.2436 | 0.7536 |
| No log | 9.96 | 30 | 2.6171 | 0.185 | 0.8697 | 6.6875 | 0.185 | 0.0488 | 0.2497 | 0.7541 |
| No log | 10.96 | 33 | 2.6126 | 0.185 | 0.8697 | 6.7549 | 0.185 | 0.0488 | 0.2512 | 0.7517 |
| No log | 11.96 | 36 | 2.6084 | 0.185 | 0.8697 | 6.7827 | 0.185 | 0.0488 | 0.2476 | 0.7489 |
| No log | 12.96 | 39 | 2.6037 | 0.185 | 0.8692 | 6.7652 | 0.185 | 0.0488 | 0.2557 | 0.7476 |
| No log | 13.96 | 42 | 2.5986 | 0.185 | 0.8683 | 6.6847 | 0.185 | 0.0488 | 0.2513 | 0.7446 |
| No log | 14.96 | 45 | 2.5940 | 0.185 | 0.8676 | 6.6600 | 0.185 | 0.0488 | 0.2572 | 0.7447 |
| No log | 15.96 | 48 | 2.5910 | 0.185 | 0.8669 | 6.6410 | 0.185 | 0.0488 | 0.2448 | 0.7424 |
| No log | 16.96 | 51 | 2.5897 | 0.185 | 0.8667 | 6.6371 | 0.185 | 0.0488 | 0.2402 | 0.7402 |
| No log | 17.96 | 54 | 2.5898 | 0.185 | 0.8664 | 6.5096 | 0.185 | 0.0488 | 0.2549 | 0.7371 |
| No log | 18.96 | 57 | 2.5897 | 0.185 | 0.8664 | 6.5160 | 0.185 | 0.0488 | 0.2504 | 0.7363 |
| No log | 19.96 | 60 | 2.5877 | 0.185 | 0.8660 | 6.4661 | 0.185 | 0.0488 | 0.2416 | 0.7346 |
| No log | 20.96 | 63 | 2.5865 | 0.185 | 0.8658 | 6.4833 | 0.185 | 0.0488 | 0.2459 | 0.7347 |
| No log | 21.96 | 66 | 2.5852 | 0.185 | 0.8655 | 6.4690 | 0.185 | 0.0488 | 0.2460 | 0.7343 |
| No log | 22.96 | 69 | 2.5843 | 0.185 | 0.8654 | 6.4625 | 0.185 | 0.0488 | 0.2461 | 0.7340 |
| No log | 23.96 | 72 | 2.5838 | 0.185 | 0.8653 | 6.4568 | 0.185 | 0.0488 | 0.2424 | 0.7342 |
| No log | 24.96 | 75 | 2.5836 | 0.185 | 0.8652 | 6.4546 | 0.185 | 0.0488 | 0.2424 | 0.7342 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
crisU8/bert-finetuned-ner-clinical-BETO-uncased-1 | crisU8 | 2023-07-09T08:40:27Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T08:35:19Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-BETO-uncased-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-BETO-uncased-1
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3018
- Precision: 0.6953
- Recall: 0.7464
- F1: 0.7200
- Accuracy: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4647 | 1.0 | 502 | 0.3156 | 0.6186 | 0.7327 | 0.6709 | 0.8969 |
| 0.2428 | 2.0 | 1004 | 0.2804 | 0.6916 | 0.7470 | 0.7182 | 0.9120 |
| 0.1734 | 3.0 | 1506 | 0.2864 | 0.6923 | 0.7508 | 0.7204 | 0.9161 |
| 0.1353 | 4.0 | 2008 | 0.3018 | 0.6953 | 0.7464 | 0.7200 | 0.9155 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cambioml/rlhf-reward-model | cambioml | 2023-07-09T08:36:38Z | 136 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T07:59:15Z | # 🚀 RLHF Step-2 Reward Model
This repository is home to an RLHF reward model. This model is trained on questions and answers from the Stack Overflow Data Dump (https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences), using the `distilroberta-base` model (https://huggingface.co/distilroberta-base) as a base.
## Usage
You can use this model directly with a pipeline to score candidate responses, i.e., to produce the reward signal used during RLHF fine-tuning:
```python
from accelerate import Accelerator
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline
)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "cambioml/rlhf-reward-model",
    num_labels=1,
    # torch_dtype=torch.bfloat16,
    load_in_8bit=True,
    device_map={"": Accelerator().process_index}
)
reward_tokenizer = AutoTokenizer.from_pretrained("cambioml/rlhf-reward-model")
reward_tokenizer.pad_token = reward_tokenizer.eos_token
reward_kwargs = {
"return_all_scores": True,
"function_to_apply": "none",
"batch_size": 32,
"truncation": True,
"max_length": 138
}
reward_pipe = pipeline(
"sentiment-analysis",
model=reward_model,
tokenizer=reward_tokenizer,
return_token_type_ids=False,
)
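
# Example scoring call. The question/answer formatting below is an assumption;
# adapt it to however the pairs were concatenated when the model was trained.
texts = ["Question: How do I reverse a list in Python?\n\nAnswer: Use my_list[::-1]."]
rewards = reward_pipe(texts, **reward_kwargs)
print(rewards)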
``` |
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.7 | jordyvl | 2023-07-09T08:34:48Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T08:23:17Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6280
- Accuracy: 0.18
- Brier Loss: 0.8747
- Nll: 6.7569
- F1 Micro: 0.18
- F1 Macro: 0.0306
- Ece: 0.2550
- Aurc: 0.8496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.7961 | 0.145 | 0.8999 | 10.1560 | 0.145 | 0.0253 | 0.2221 | 0.8467 |
| No log | 1.96 | 6 | 2.7646 | 0.145 | 0.8946 | 10.5828 | 0.145 | 0.0253 | 0.2242 | 0.8475 |
| No log | 2.96 | 9 | 2.7185 | 0.155 | 0.8868 | 8.6137 | 0.155 | 0.0501 | 0.2145 | 0.8394 |
| No log | 3.96 | 12 | 2.6825 | 0.21 | 0.8808 | 6.5439 | 0.2100 | 0.0613 | 0.2567 | 0.8351 |
| No log | 4.96 | 15 | 2.6619 | 0.155 | 0.8778 | 6.7839 | 0.155 | 0.0274 | 0.2346 | 0.8880 |
| No log | 5.96 | 18 | 2.6517 | 0.18 | 0.8769 | 7.4578 | 0.18 | 0.0395 | 0.2461 | 0.8571 |
| No log | 6.96 | 21 | 2.6450 | 0.18 | 0.8767 | 7.1192 | 0.18 | 0.0308 | 0.2518 | 0.8516 |
| No log | 7.96 | 24 | 2.6400 | 0.18 | 0.8766 | 6.9539 | 0.18 | 0.0306 | 0.2472 | 0.8526 |
| No log | 8.96 | 27 | 2.6355 | 0.18 | 0.8762 | 6.9109 | 0.18 | 0.0306 | 0.2524 | 0.8527 |
| No log | 9.96 | 30 | 2.6332 | 0.18 | 0.8759 | 6.8997 | 0.18 | 0.0306 | 0.2491 | 0.8527 |
| No log | 10.96 | 33 | 2.6317 | 0.18 | 0.8757 | 6.8943 | 0.18 | 0.0306 | 0.2529 | 0.8524 |
| No log | 11.96 | 36 | 2.6309 | 0.18 | 0.8755 | 6.8287 | 0.18 | 0.0306 | 0.2442 | 0.8523 |
| No log | 12.96 | 39 | 2.6304 | 0.18 | 0.8753 | 6.7670 | 0.18 | 0.0306 | 0.2478 | 0.8521 |
| No log | 13.96 | 42 | 2.6298 | 0.18 | 0.8752 | 6.7597 | 0.18 | 0.0306 | 0.2433 | 0.8517 |
| No log | 14.96 | 45 | 2.6293 | 0.18 | 0.8751 | 6.7590 | 0.18 | 0.0306 | 0.2516 | 0.8513 |
| No log | 15.96 | 48 | 2.6290 | 0.18 | 0.8750 | 6.7556 | 0.18 | 0.0306 | 0.2555 | 0.8515 |
| No log | 16.96 | 51 | 2.6287 | 0.18 | 0.8750 | 6.7582 | 0.18 | 0.0306 | 0.2557 | 0.8514 |
| No log | 17.96 | 54 | 2.6289 | 0.18 | 0.8750 | 6.7556 | 0.18 | 0.0306 | 0.2476 | 0.8509 |
| No log | 18.96 | 57 | 2.6289 | 0.18 | 0.8750 | 6.7567 | 0.18 | 0.0306 | 0.2475 | 0.8505 |
| No log | 19.96 | 60 | 2.6285 | 0.18 | 0.8748 | 6.7567 | 0.18 | 0.0306 | 0.2433 | 0.8502 |
| No log | 20.96 | 63 | 2.6283 | 0.18 | 0.8748 | 6.7577 | 0.18 | 0.0306 | 0.2512 | 0.8500 |
| No log | 21.96 | 66 | 2.6281 | 0.18 | 0.8748 | 6.7586 | 0.18 | 0.0306 | 0.2551 | 0.8495 |
| No log | 22.96 | 69 | 2.6280 | 0.18 | 0.8747 | 6.7580 | 0.18 | 0.0306 | 0.2550 | 0.8496 |
| No log | 23.96 | 72 | 2.6280 | 0.18 | 0.8747 | 6.7573 | 0.18 | 0.0306 | 0.2550 | 0.8496 |
| No log | 24.96 | 75 | 2.6280 | 0.18 | 0.8747 | 6.7569 | 0.18 | 0.0306 | 0.2550 | 0.8496 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
saintzeno/a2c-PandaReachDense-v2 | saintzeno | 2023-07-09T08:26:17Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T06:25:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.83 +/- 0.18
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the saved agent from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub("saintzeno/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
komo-dono/yanaginagi | komo-dono | 2023-07-09T08:07:04Z | 0 | 0 | null | [
"music",
"ja",
"license:openrail",
"region:us"
] | null | 2023-07-09T08:02:47Z | ---
license: openrail
language:
- ja
tags:
- music
--- |
chunwoolee0/my_awesome_qa_model | chunwoolee0 | 2023-07-09T08:00:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-09T07:50:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5944
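A minimal usage sketch with the question-answering pipeline (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="chunwoolee0/my_awesome_qa_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], result["score"])
```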
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.2632 |
| 2.6568 | 2.0 | 500 | 1.6629 |
| 2.6568 | 3.0 | 750 | 1.5944 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
guaguale/model_tshirt | guaguale | 2023-07-09T07:22:57Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-09T06:27:08Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of adlv clothes
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - guaguale/model_tshirt
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of adlv clothes using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
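A minimal inference sketch with 🤗 Diffusers: load the base model, then attach these LoRA weights (the prompt reuses the training instance prompt; the output filename is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("guaguale/model_tshirt")  # LoRA attention weights from this repo
image = pipe("a photo of adlv clothes", num_inference_steps=50).images[0]
image.save("adlv_clothes.png")
```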
|
disanda/first_try_4 | disanda | 2023-07-09T07:21:57Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-09T07:20:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: first_try_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_try_4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5505
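A minimal usage sketch with the fill-mask pipeline (the sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="disanda/first_try_4")
print(fill_mask("This movie was absolutely [MASK]."))
```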
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7226 | 1.0 | 157 | 2.5273 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.0+cu102
- Datasets 2.12.0
- Tokenizers 0.13.3
|
daiwenbin/distilbert-base-uncased-finetuned-clinc | daiwenbin | 2023-07-09T07:15:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T02:43:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9138709677419354
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7816
- Accuracy: 0.9139
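A minimal usage sketch with the text-classification pipeline (the utterance is illustrative):

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="daiwenbin/distilbert-base-uncased-finetuned-clinc",
)
print(intent_classifier("How do I transfer money to my savings account?"))
```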
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2795 | 0.7277 |
| 3.7861 | 2.0 | 636 | 1.8741 | 0.8294 |
| 3.7861 | 3.0 | 954 | 1.1621 | 0.8906 |
| 1.6946 | 4.0 | 1272 | 0.8663 | 0.9058 |
| 0.9106 | 5.0 | 1590 | 0.7816 | 0.9139 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
abdulfatir/NCDSSM | abdulfatir | 2023-07-09T07:00:23Z | 0 | 2 | null | [
"arxiv:2301.11308",
"license:mit",
"region:us"
] | null | 2023-07-09T06:54:08Z | ---
license: mit
---
# Neural Continuous-Discrete State Space Models (NCDSSM)
This repository contains pretrained checkpoints for reproducing the experiments presented in the ICML 2023 paper [*Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series*](https://arxiv.org/abs/2301.11308). For details on how to use these checkpoints, please refer to https://github.com/clear-nus/NCDSSM.
|
Dorost/resume | Dorost | 2023-07-09T06:46:02Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-30T10:41:45Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: resume
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resume
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0166
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0448 | 1.0 | 49 | 2.7245 | 0.1290 |
| 2.2276 | 2.0 | 98 | 1.7165 | 0.4683 |
| 1.116 | 3.0 | 147 | 0.8720 | 0.8333 |
| 0.5606 | 4.0 | 196 | 0.3686 | 1.0 |
| 0.2374 | 5.0 | 245 | 0.1431 | 1.0 |
| 0.1084 | 6.0 | 294 | 0.0612 | 1.0 |
| 0.0598 | 7.0 | 343 | 0.0328 | 1.0 |
| 0.0386 | 8.0 | 392 | 0.0216 | 1.0 |
| 0.0276 | 9.0 | 441 | 0.0175 | 1.0 |
| 0.0271 | 10.0 | 490 | 0.0166 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/Pika_v1 | digiplay | 2023-07-09T06:44:58Z | 289 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-22T13:13:29Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/47067?modelVersionId=51650
Original author's demo images:


|
YeungNLP/firefly-bloom-7b1 | YeungNLP | 2023-07-09T06:36:38Z | 1,437 | 1 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-26T10:57:10Z | 该模型使用bloom-7b1,使用百万中英文指令数据,进行指令微调。
For more details, see the [Firefly project](https://github.com/yangjianxin1/Firefly). |
YeungNLP/Ziya-LLaMA-13B-Pretrain-v1 | YeungNLP | 2023-07-09T06:35:20Z | 12 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-21T10:35:14Z | 由[IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1)与原始llama权重进行合并而得到。
[firefly-ziya-13b](https://huggingface.co/YeungNLP/firefly-ziya-13b)基于该模型进行指令微调
更多详情请查看[Firefly项目](https://github.com/yangjianxin1/Firefly) |
KKSK2023/ppo-LunarLander-v2 | KKSK2023 | 2023-07-09T06:27:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T06:27:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.57 +/- 19.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the saved agent from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub("KKSK2023/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
demelianov/model | demelianov | 2023-07-09T06:27:04Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-08T05:14:01Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - demelianov/model
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
Mistermango24/Furrymix3 | Mistermango24 | 2023-07-09T06:23:58Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-09T06:04:39Z | ---
license: bigscience-openrail-m
---
|
luhx/Reinforce-PixelCopter | luhx | 2023-07-09T05:52:43Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T05:52:10Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 15.60 +/- 16.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
luhx/Reinforce-CartPole-v1 | luhx | 2023-07-09T05:09:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T05:08:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 486.50 +/- 40.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Winmodel/ppo-Huggy | Winmodel | 2023-07-09T05:04:35Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-09T05:04:30Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Winmodel/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
jason1i/whisper-tiny-minds14 | jason1i | 2023-07-09T05:01:53Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-09T04:37:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34415584415584416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6338
- Wer Ortho: 0.3467
- Wer: 0.3442
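A minimal transcription sketch using the 🤗 Transformers `pipeline` is shown below; the audio file path is a placeholder and decoding options are left at their defaults:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="jason1i/whisper-tiny-minds14",
)

# Transcribe a local audio file (placeholder path).
result = asr("example_call.wav")
print(result["text"])
```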
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.001 | 17.86 | 500 | 0.6338 | 0.3467 | 0.3442 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lovelyxs/dqn-SpaceInvadersNoFrameskip-v4 | lovelyxs | 2023-07-09T04:50:33Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T04:49:57Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 527.50 +/- 132.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lovelyxs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lovelyxs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lovelyxs
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jrfalck/my_awesome_opus_books_model_JRF | jrfalck | 2023-07-09T04:46:43Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-09T03:55:44Z | ---
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model_JRF
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 6.0553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model_JRF
This model was trained from scratch on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5465
- Bleu: 6.0553
- Gen Len: 17.528
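A minimal inference sketch using the 🤗 Transformers `pipeline` is shown below; the `translate English to French: ` prefix is an assumption based on the usual T5 recipe for the opus_books en-fr split:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text2text pipeline.
translator = pipeline(
    "text2text-generation",
    model="jrfalck/my_awesome_opus_books_model_JRF",
)

# The task prefix below is an assumption; plain input may also work.
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(translator(text)[0]["generated_text"])
```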
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.7679 | 1.0 | 6355 | 1.5601 | 6.0201 | 17.5327 |
| 1.7452 | 2.0 | 12710 | 1.5465 | 6.0553 | 17.528 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.2
|
BauyrjanQ/whisper-kk | BauyrjanQ | 2023-07-09T04:14:39Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-07T09:49:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1070
- Wer: 24.8145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1912 | 0.46 | 1000 | 0.1793 | 31.2210 |
| 0.1314 | 0.92 | 2000 | 0.1307 | 20.8113 |
| 0.096 | 1.38 | 3000 | 0.1136 | 28.8680 |
| 0.0845 | 1.84 | 4000 | 0.1070 | 24.8145 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Drawzipink/AesopCarlV2 | Drawzipink | 2023-07-09T03:59:29Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-09T03:38:01Z | ---
license: openrail
---
***Note***: This model was made using Yuki Hirai's interpretation of Aesop Carl from the game Identity V in the unofficial stage play.
Should he see this and ask that anything using this model be taken down, I ask that you oblige.
This model is for fun and personal use only.
Thank you. |
PhantasyMaker/Kate | PhantasyMaker | 2023-07-09T03:55:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T03:55:50Z | ---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-all-mod-aochildes-rarity-all-30k-3k | NasimB | 2023-07-09T03:31:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T01:14:30Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-mod-aochildes-rarity-all-30k-3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-mod-aochildes-rarity-all-30k-3k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0554
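A minimal generation sketch with the 🤗 Transformers `pipeline` is shown below; the prompt and sampling settings are illustrative only:
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint for text generation.
generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-all-mod-aochildes-rarity-all-30k-3k",
)

# Sample a short continuation (prompt and settings are placeholders).
out = generator("Once upon a time", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```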
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.787 | 0.32 | 500 | 5.8319 |
| 5.4867 | 0.65 | 1000 | 5.4639 |
| 5.1388 | 0.97 | 1500 | 5.2444 |
| 4.8676 | 1.3 | 2000 | 5.1700 |
| 4.7551 | 1.62 | 2500 | 5.0582 |
| 4.6549 | 1.95 | 3000 | 4.9945 |
| 4.4476 | 2.27 | 3500 | 4.9966 |
| 4.4081 | 2.6 | 4000 | 4.9368 |
| 4.3708 | 2.92 | 4500 | 4.9070 |
| 4.1704 | 3.25 | 5000 | 4.9144 |
| 4.1343 | 3.57 | 5500 | 4.8945 |
| 4.1237 | 3.9 | 6000 | 4.8582 |
| 3.9238 | 4.22 | 6500 | 4.8881 |
| 3.8703 | 4.55 | 7000 | 4.8883 |
| 3.8693 | 4.87 | 7500 | 4.8628 |
| 3.6914 | 5.19 | 8000 | 4.9088 |
| 3.6022 | 5.52 | 8500 | 4.9100 |
| 3.6033 | 5.84 | 9000 | 4.9048 |
| 3.476 | 6.17 | 9500 | 4.9392 |
| 3.3693 | 6.49 | 10000 | 4.9473 |
| 3.3744 | 6.82 | 10500 | 4.9551 |
| 3.3104 | 7.14 | 11000 | 4.9658 |
| 3.2401 | 7.47 | 11500 | 4.9706 |
| 3.2421 | 7.79 | 12000 | 4.9727 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
FinalIroha/Ryuuou_no_Oshigoto_SoVITS4.1_Model | FinalIroha | 2023-07-09T03:27:29Z | 3 | 0 | transformers | [
"transformers",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T11:30:12Z | ---
license: cc-by-nc-sa-4.0
---
# SoVITS 4.1 Ryuuou no Oshigoto Multi-Speaker Model
<!-- Provide a quick summary of what the model is/does. -->
This model was produced with [SoVITS 4.1](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/).
## Character Names in the Model
<!-- Provide a quick summary of what the model is/does. -->
- **Yaichi Kuzuryuu:** 九頭竜八一 / 九头龙八一, CV: Yuma Uchida
- **Ai Hinatsuru:** 雛鶴あい / 雏鹤爱, CV: Rina Hidaka
- **Ai Yashajin:** 夜叉神天衣 / 夜叉神天衣, CV: Ayane Sakura
- **Ginko Sora:** 空銀子 / 空银子, CV: Hisako Kanemoto
- **Keika Kiyotaki:** 清滝桂香 / 清泷桂香, CV: Ai Kayano
- **Mio Mizukoshi:** 水越澪 / 水越澪, CV: Yurika Kubo
- **Ayano Sadatou:** 貞任綾乃 / 贞任绫乃, CV: Chinami Hashimoto
- **Charlotte Izoard:** シャルロット・イゾアール / 夏洛特·伊索亚尔, CV: Yui Ogura |
Splend1dchan/h-p-test | Splend1dchan | 2023-07-09T03:24:07Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text2text-generation",
"generated_from_trainer",
"dataset:arrow",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-09T03:17:38Z | ---
tags:
- generated_from_trainer
datasets:
- arrow
model-index:
- name: hubert-pythia-70m_librispeech.train.mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-pythia-70m_librispeech.train.mix
This model is a fine-tuned version of [speechmix/pythia-70m-test](https://huggingface.co/speechmix/pythia-70m-test) on the arrow dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jncraton/e5-small-v2-ct2-int8 | jncraton | 2023-07-09T02:30:12Z | 7 | 0 | transformers | [
"transformers",
"mteb",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2023-07-09T02:22:18Z | ---
tags:
- mteb
model-index:
- name: e5-small-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.59701492537313
- type: ap
value: 41.67064885731708
- type: f1
value: 71.86465946398573
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.265875
- type: ap
value: 87.67633085349644
- type: f1
value: 91.24297521425744
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.882000000000005
- type: f1
value: 45.08058870381236
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.697
- type: map_at_10
value: 33.975
- type: map_at_100
value: 35.223
- type: map_at_1000
value: 35.260000000000005
- type: map_at_3
value: 29.776999999999997
- type: map_at_5
value: 32.035000000000004
- type: mrr_at_1
value: 20.982
- type: mrr_at_10
value: 34.094
- type: mrr_at_100
value: 35.343
- type: mrr_at_1000
value: 35.38
- type: mrr_at_3
value: 29.884
- type: mrr_at_5
value: 32.141999999999996
- type: ndcg_at_1
value: 20.697
- type: ndcg_at_10
value: 41.668
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 48.305
- type: ndcg_at_3
value: 32.928000000000004
- type: ndcg_at_5
value: 36.998999999999995
- type: precision_at_1
value: 20.697
- type: precision_at_10
value: 6.636
- type: precision_at_100
value: 0.924
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.035
- type: precision_at_5
value: 10.398
- type: recall_at_1
value: 20.697
- type: recall_at_10
value: 66.35799999999999
- type: recall_at_100
value: 92.39
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 42.105
- type: recall_at_5
value: 51.991
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.1169517447068
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.79553720107097
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.10811337308168
- type: mrr
value: 71.56410763751482
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 78.46834918248696
- type: cos_sim_spearman
value: 79.4289182755206
- type: euclidean_pearson
value: 76.26662973727008
- type: euclidean_spearman
value: 78.11744260952536
- type: manhattan_pearson
value: 76.08175262609434
- type: manhattan_spearman
value: 78.29395265552289
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.63636363636364
- type: f1
value: 81.55779952376953
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.88541137137571
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.05205685274407
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.293999999999997
- type: map_at_10
value: 39.876
- type: map_at_100
value: 41.315000000000005
- type: map_at_1000
value: 41.451
- type: map_at_3
value: 37.194
- type: map_at_5
value: 38.728
- type: mrr_at_1
value: 37.053000000000004
- type: mrr_at_10
value: 45.281
- type: mrr_at_100
value: 46.188
- type: mrr_at_1000
value: 46.245999999999995
- type: mrr_at_3
value: 43.228
- type: mrr_at_5
value: 44.366
- type: ndcg_at_1
value: 37.053000000000004
- type: ndcg_at_10
value: 45.086
- type: ndcg_at_100
value: 50.756
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 41.416
- type: ndcg_at_5
value: 43.098
- type: precision_at_1
value: 37.053000000000004
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 19.647000000000002
- type: precision_at_5
value: 13.877
- type: recall_at_1
value: 30.293999999999997
- type: recall_at_10
value: 54.309
- type: recall_at_100
value: 78.59
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 43.168
- type: recall_at_5
value: 48.192
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.738000000000003
- type: map_at_10
value: 36.925999999999995
- type: map_at_100
value: 38.017
- type: map_at_1000
value: 38.144
- type: map_at_3
value: 34.446
- type: map_at_5
value: 35.704
- type: mrr_at_1
value: 35.478
- type: mrr_at_10
value: 42.786
- type: mrr_at_100
value: 43.458999999999996
- type: mrr_at_1000
value: 43.507
- type: mrr_at_3
value: 40.648
- type: mrr_at_5
value: 41.804
- type: ndcg_at_1
value: 35.478
- type: ndcg_at_10
value: 42.044
- type: ndcg_at_100
value: 46.249
- type: ndcg_at_1000
value: 48.44
- type: ndcg_at_3
value: 38.314
- type: ndcg_at_5
value: 39.798
- type: precision_at_1
value: 35.478
- type: precision_at_10
value: 7.764
- type: precision_at_100
value: 1.253
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 18.047
- type: precision_at_5
value: 12.637
- type: recall_at_1
value: 28.738000000000003
- type: recall_at_10
value: 50.659
- type: recall_at_100
value: 68.76299999999999
- type: recall_at_1000
value: 82.811
- type: recall_at_3
value: 39.536
- type: recall_at_5
value: 43.763999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.565
- type: map_at_10
value: 50.168
- type: map_at_100
value: 51.11
- type: map_at_1000
value: 51.173
- type: map_at_3
value: 47.044000000000004
- type: map_at_5
value: 48.838
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 53.596999999999994
- type: mrr_at_100
value: 54.211
- type: mrr_at_1000
value: 54.247
- type: mrr_at_3
value: 51.202000000000005
- type: mrr_at_5
value: 52.608999999999995
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 55.694
- type: ndcg_at_100
value: 59.518
- type: ndcg_at_1000
value: 60.907
- type: ndcg_at_3
value: 50.395999999999994
- type: ndcg_at_5
value: 53.022999999999996
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 8.84
- type: precision_at_100
value: 1.162
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 22.153
- type: precision_at_5
value: 15.260000000000002
- type: recall_at_1
value: 38.565
- type: recall_at_10
value: 68.65
- type: recall_at_100
value: 85.37400000000001
- type: recall_at_1000
value: 95.37400000000001
- type: recall_at_3
value: 54.645999999999994
- type: recall_at_5
value: 60.958
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.945
- type: map_at_10
value: 30.641000000000002
- type: map_at_100
value: 31.599
- type: map_at_1000
value: 31.691000000000003
- type: map_at_3
value: 28.405
- type: map_at_5
value: 29.704000000000004
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 32.22
- type: mrr_at_100
value: 33.138
- type: mrr_at_1000
value: 33.214
- type: mrr_at_3
value: 30.151
- type: mrr_at_5
value: 31.298
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 34.638000000000005
- type: ndcg_at_100
value: 39.486
- type: ndcg_at_1000
value: 41.936
- type: ndcg_at_3
value: 30.333
- type: ndcg_at_5
value: 32.482
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 5.153
- type: precision_at_100
value: 0.7929999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.429
- type: precision_at_5
value: 8.723
- type: recall_at_1
value: 23.945
- type: recall_at_10
value: 45.412
- type: recall_at_100
value: 67.836
- type: recall_at_1000
value: 86.467
- type: recall_at_3
value: 34.031
- type: recall_at_5
value: 39.039
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.419
- type: map_at_10
value: 20.858999999999998
- type: map_at_100
value: 22.067999999999998
- type: map_at_1000
value: 22.192
- type: map_at_3
value: 18.673000000000002
- type: map_at_5
value: 19.968
- type: mrr_at_1
value: 17.785999999999998
- type: mrr_at_10
value: 24.878
- type: mrr_at_100
value: 26.021
- type: mrr_at_1000
value: 26.095000000000002
- type: mrr_at_3
value: 22.616
- type: mrr_at_5
value: 23.785
- type: ndcg_at_1
value: 17.785999999999998
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 31.05
- type: ndcg_at_1000
value: 34.052
- type: ndcg_at_3
value: 21.117
- type: ndcg_at_5
value: 23.048
- type: precision_at_1
value: 17.785999999999998
- type: precision_at_10
value: 4.590000000000001
- type: precision_at_100
value: 0.864
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.313
- type: recall_at_1
value: 14.419
- type: recall_at_10
value: 34.477999999999994
- type: recall_at_100
value: 60.02499999999999
- type: recall_at_1000
value: 81.646
- type: recall_at_3
value: 23.515
- type: recall_at_5
value: 28.266999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.268
- type: map_at_10
value: 35.114000000000004
- type: map_at_100
value: 36.212
- type: map_at_1000
value: 36.333
- type: map_at_3
value: 32.436
- type: map_at_5
value: 33.992
- type: mrr_at_1
value: 31.761
- type: mrr_at_10
value: 40.355999999999995
- type: mrr_at_100
value: 41.125
- type: mrr_at_1000
value: 41.186
- type: mrr_at_3
value: 37.937
- type: mrr_at_5
value: 39.463
- type: ndcg_at_1
value: 31.761
- type: ndcg_at_10
value: 40.422000000000004
- type: ndcg_at_100
value: 45.458999999999996
- type: ndcg_at_1000
value: 47.951
- type: ndcg_at_3
value: 35.972
- type: ndcg_at_5
value: 38.272
- type: precision_at_1
value: 31.761
- type: precision_at_10
value: 7.103
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.779
- type: precision_at_5
value: 11.877
- type: recall_at_1
value: 26.268
- type: recall_at_10
value: 51.053000000000004
- type: recall_at_100
value: 72.702
- type: recall_at_1000
value: 89.521
- type: recall_at_3
value: 38.619
- type: recall_at_5
value: 44.671
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.230999999999998
- type: map_at_10
value: 34.227000000000004
- type: map_at_100
value: 35.370000000000005
- type: map_at_1000
value: 35.488
- type: map_at_3
value: 31.496000000000002
- type: map_at_5
value: 33.034
- type: mrr_at_1
value: 30.822
- type: mrr_at_10
value: 39.045
- type: mrr_at_100
value: 39.809
- type: mrr_at_1000
value: 39.873
- type: mrr_at_3
value: 36.663000000000004
- type: mrr_at_5
value: 37.964
- type: ndcg_at_1
value: 30.822
- type: ndcg_at_10
value: 39.472
- type: ndcg_at_100
value: 44.574999999999996
- type: ndcg_at_1000
value: 47.162
- type: ndcg_at_3
value: 34.929
- type: ndcg_at_5
value: 37.002
- type: precision_at_1
value: 30.822
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.591
- type: precision_at_5
value: 11.667
- type: recall_at_1
value: 25.230999999999998
- type: recall_at_10
value: 50.42100000000001
- type: recall_at_100
value: 72.685
- type: recall_at_1000
value: 90.469
- type: recall_at_3
value: 37.503
- type: recall_at_5
value: 43.123
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.604166666666664
- type: map_at_10
value: 32.427166666666665
- type: map_at_100
value: 33.51474999999999
- type: map_at_1000
value: 33.6345
- type: map_at_3
value: 30.02366666666667
- type: map_at_5
value: 31.382333333333328
- type: mrr_at_1
value: 29.001166666666666
- type: mrr_at_10
value: 36.3315
- type: mrr_at_100
value: 37.16683333333333
- type: mrr_at_1000
value: 37.23341666666668
- type: mrr_at_3
value: 34.19916666666667
- type: mrr_at_5
value: 35.40458333333334
- type: ndcg_at_1
value: 29.001166666666666
- type: ndcg_at_10
value: 37.06883333333334
- type: ndcg_at_100
value: 41.95816666666666
- type: ndcg_at_1000
value: 44.501583333333336
- type: ndcg_at_3
value: 32.973499999999994
- type: ndcg_at_5
value: 34.90833333333334
- type: precision_at_1
value: 29.001166666666666
- type: precision_at_10
value: 6.336
- type: precision_at_100
value: 1.0282499999999999
- type: precision_at_1000
value: 0.14391666666666664
- type: precision_at_3
value: 14.932499999999996
- type: precision_at_5
value: 10.50825
- type: recall_at_1
value: 24.604166666666664
- type: recall_at_10
value: 46.9525
- type: recall_at_100
value: 68.67816666666667
- type: recall_at_1000
value: 86.59783333333334
- type: recall_at_3
value: 35.49783333333333
- type: recall_at_5
value: 40.52525000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.559
- type: map_at_10
value: 29.023
- type: map_at_100
value: 29.818
- type: map_at_1000
value: 29.909000000000002
- type: map_at_3
value: 27.037
- type: map_at_5
value: 28.225
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 31.962000000000003
- type: mrr_at_100
value: 32.726
- type: mrr_at_1000
value: 32.800000000000004
- type: mrr_at_3
value: 30.266
- type: mrr_at_5
value: 31.208999999999996
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 32.53
- type: ndcg_at_100
value: 36.758
- type: ndcg_at_1000
value: 39.362
- type: ndcg_at_3
value: 28.985
- type: ndcg_at_5
value: 30.757
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 4.968999999999999
- type: precision_at_100
value: 0.759
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.219
- type: precision_at_5
value: 8.527999999999999
- type: recall_at_1
value: 23.559
- type: recall_at_10
value: 40.585
- type: recall_at_100
value: 60.306000000000004
- type: recall_at_1000
value: 80.11
- type: recall_at_3
value: 30.794
- type: recall_at_5
value: 35.186
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.384999999999998
- type: map_at_10
value: 22.142
- type: map_at_100
value: 23.057
- type: map_at_1000
value: 23.177
- type: map_at_3
value: 20.29
- type: map_at_5
value: 21.332
- type: mrr_at_1
value: 19.89
- type: mrr_at_10
value: 25.771
- type: mrr_at_100
value: 26.599
- type: mrr_at_1000
value: 26.680999999999997
- type: mrr_at_3
value: 23.962
- type: mrr_at_5
value: 24.934
- type: ndcg_at_1
value: 19.89
- type: ndcg_at_10
value: 25.97
- type: ndcg_at_100
value: 30.605
- type: ndcg_at_1000
value: 33.619
- type: ndcg_at_3
value: 22.704
- type: ndcg_at_5
value: 24.199
- type: precision_at_1
value: 19.89
- type: precision_at_10
value: 4.553
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 10.541
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 16.384999999999998
- type: recall_at_10
value: 34.001
- type: recall_at_100
value: 55.17100000000001
- type: recall_at_1000
value: 77.125
- type: recall_at_3
value: 24.618000000000002
- type: recall_at_5
value: 28.695999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.726
- type: map_at_10
value: 31.227
- type: map_at_100
value: 32.311
- type: map_at_1000
value: 32.419
- type: map_at_3
value: 28.765
- type: map_at_5
value: 30.229
- type: mrr_at_1
value: 27.705000000000002
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 35.931000000000004
- type: mrr_at_1000
value: 36
- type: mrr_at_3
value: 32.603
- type: mrr_at_5
value: 34.117999999999995
- type: ndcg_at_1
value: 27.705000000000002
- type: ndcg_at_10
value: 35.968
- type: ndcg_at_100
value: 41.197
- type: ndcg_at_1000
value: 43.76
- type: ndcg_at_3
value: 31.304
- type: ndcg_at_5
value: 33.661
- type: precision_at_1
value: 27.705000000000002
- type: precision_at_10
value: 5.942
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 13.868
- type: precision_at_5
value: 9.944
- type: recall_at_1
value: 23.726
- type: recall_at_10
value: 46.786
- type: recall_at_100
value: 70.072
- type: recall_at_1000
value: 88.2
- type: recall_at_3
value: 33.981
- type: recall_at_5
value: 39.893
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.344
- type: map_at_10
value: 31.636999999999997
- type: map_at_100
value: 33.065
- type: map_at_1000
value: 33.300000000000004
- type: map_at_3
value: 29.351
- type: map_at_5
value: 30.432
- type: mrr_at_1
value: 27.866000000000003
- type: mrr_at_10
value: 35.587
- type: mrr_at_100
value: 36.52
- type: mrr_at_1000
value: 36.597
- type: mrr_at_3
value: 33.696
- type: mrr_at_5
value: 34.713
- type: ndcg_at_1
value: 27.866000000000003
- type: ndcg_at_10
value: 36.61
- type: ndcg_at_100
value: 41.88
- type: ndcg_at_1000
value: 45.105000000000004
- type: ndcg_at_3
value: 33.038000000000004
- type: ndcg_at_5
value: 34.331
- type: precision_at_1
value: 27.866000000000003
- type: precision_at_10
value: 6.917
- type: precision_at_100
value: 1.3599999999999999
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.344
- type: recall_at_10
value: 45.782000000000004
- type: recall_at_100
value: 69.503
- type: recall_at_1000
value: 90.742
- type: recall_at_3
value: 35.160000000000004
- type: recall_at_5
value: 39.058
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.776
- type: map_at_10
value: 27.285999999999998
- type: map_at_100
value: 28.235
- type: map_at_1000
value: 28.337
- type: map_at_3
value: 25.147000000000002
- type: map_at_5
value: 26.401999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 29.409999999999997
- type: mrr_at_100
value: 30.275000000000002
- type: mrr_at_1000
value: 30.354999999999997
- type: mrr_at_3
value: 27.418
- type: mrr_at_5
value: 28.592000000000002
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 31.239
- type: ndcg_at_100
value: 35.965
- type: ndcg_at_1000
value: 38.602
- type: ndcg_at_3
value: 27.174
- type: ndcg_at_5
value: 29.229
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 4.806
- type: precision_at_100
value: 0.776
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.022
- type: recall_at_1
value: 20.776
- type: recall_at_10
value: 41.294
- type: recall_at_100
value: 63.111
- type: recall_at_1000
value: 82.88600000000001
- type: recall_at_3
value: 30.403000000000002
- type: recall_at_5
value: 35.455999999999996
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.376
- type: map_at_10
value: 15.926000000000002
- type: map_at_100
value: 17.585
- type: map_at_1000
value: 17.776
- type: map_at_3
value: 13.014000000000001
- type: map_at_5
value: 14.417
- type: mrr_at_1
value: 20.195
- type: mrr_at_10
value: 29.95
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.108000000000004
- type: mrr_at_3
value: 26.667
- type: mrr_at_5
value: 28.458
- type: ndcg_at_1
value: 20.195
- type: ndcg_at_10
value: 22.871
- type: ndcg_at_100
value: 29.921999999999997
- type: ndcg_at_1000
value: 33.672999999999995
- type: ndcg_at_3
value: 17.782999999999998
- type: ndcg_at_5
value: 19.544
- type: precision_at_1
value: 20.195
- type: precision_at_10
value: 7.394
- type: precision_at_100
value: 1.493
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.073
- type: precision_at_5
value: 10.436
- type: recall_at_1
value: 9.376
- type: recall_at_10
value: 28.544999999999998
- type: recall_at_100
value: 53.147999999999996
- type: recall_at_1000
value: 74.62
- type: recall_at_3
value: 16.464000000000002
- type: recall_at_5
value: 21.004
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.415000000000001
- type: map_at_10
value: 18.738
- type: map_at_100
value: 27.291999999999998
- type: map_at_1000
value: 28.992
- type: map_at_3
value: 13.196
- type: map_at_5
value: 15.539
- type: mrr_at_1
value: 66.5
- type: mrr_at_10
value: 74.518
- type: mrr_at_100
value: 74.86
- type: mrr_at_1000
value: 74.87
- type: mrr_at_3
value: 72.375
- type: mrr_at_5
value: 73.86200000000001
- type: ndcg_at_1
value: 54.37499999999999
- type: ndcg_at_10
value: 41.317
- type: ndcg_at_100
value: 45.845
- type: ndcg_at_1000
value: 52.92
- type: ndcg_at_3
value: 44.983000000000004
- type: ndcg_at_5
value: 42.989
- type: precision_at_1
value: 66.5
- type: precision_at_10
value: 33.6
- type: precision_at_100
value: 10.972999999999999
- type: precision_at_1000
value: 2.214
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.15
- type: recall_at_1
value: 8.415000000000001
- type: recall_at_10
value: 24.953
- type: recall_at_100
value: 52.48199999999999
- type: recall_at_1000
value: 75.093
- type: recall_at_3
value: 14.341000000000001
- type: recall_at_5
value: 18.468
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.06499999999999
- type: f1
value: 41.439327599975385
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.02
- type: map_at_10
value: 76.68599999999999
- type: map_at_100
value: 76.959
- type: map_at_1000
value: 76.972
- type: map_at_3
value: 75.024
- type: map_at_5
value: 76.153
- type: mrr_at_1
value: 71.197
- type: mrr_at_10
value: 81.105
- type: mrr_at_100
value: 81.232
- type: mrr_at_1000
value: 81.233
- type: mrr_at_3
value: 79.758
- type: mrr_at_5
value: 80.69
- type: ndcg_at_1
value: 71.197
- type: ndcg_at_10
value: 81.644
- type: ndcg_at_100
value: 82.645
- type: ndcg_at_1000
value: 82.879
- type: ndcg_at_3
value: 78.792
- type: ndcg_at_5
value: 80.528
- type: precision_at_1
value: 71.197
- type: precision_at_10
value: 10.206999999999999
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 30.868000000000002
- type: precision_at_5
value: 19.559
- type: recall_at_1
value: 66.02
- type: recall_at_10
value: 92.50699999999999
- type: recall_at_100
value: 96.497
- type: recall_at_1000
value: 97.956
- type: recall_at_3
value: 84.866
- type: recall_at_5
value: 89.16199999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.948
- type: map_at_10
value: 29.833
- type: map_at_100
value: 31.487
- type: map_at_1000
value: 31.674000000000003
- type: map_at_3
value: 26.029999999999998
- type: map_at_5
value: 28.038999999999998
- type: mrr_at_1
value: 34.721999999999994
- type: mrr_at_10
value: 44.214999999999996
- type: mrr_at_100
value: 44.994
- type: mrr_at_1000
value: 45.051
- type: mrr_at_3
value: 41.667
- type: mrr_at_5
value: 43.032
- type: ndcg_at_1
value: 34.721999999999994
- type: ndcg_at_10
value: 37.434
- type: ndcg_at_100
value: 43.702000000000005
- type: ndcg_at_1000
value: 46.993
- type: ndcg_at_3
value: 33.56
- type: ndcg_at_5
value: 34.687
- type: precision_at_1
value: 34.721999999999994
- type: precision_at_10
value: 10.401
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 22.531000000000002
- type: precision_at_5
value: 16.42
- type: recall_at_1
value: 17.948
- type: recall_at_10
value: 45.062999999999995
- type: recall_at_100
value: 68.191
- type: recall_at_1000
value: 87.954
- type: recall_at_3
value: 31.112000000000002
- type: recall_at_5
value: 36.823
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.644
- type: map_at_10
value: 57.658
- type: map_at_100
value: 58.562000000000005
- type: map_at_1000
value: 58.62500000000001
- type: map_at_3
value: 54.022999999999996
- type: map_at_5
value: 56.293000000000006
- type: mrr_at_1
value: 73.288
- type: mrr_at_10
value: 80.51700000000001
- type: mrr_at_100
value: 80.72
- type: mrr_at_1000
value: 80.728
- type: mrr_at_3
value: 79.33200000000001
- type: mrr_at_5
value: 80.085
- type: ndcg_at_1
value: 73.288
- type: ndcg_at_10
value: 66.61
- type: ndcg_at_100
value: 69.723
- type: ndcg_at_1000
value: 70.96000000000001
- type: ndcg_at_3
value: 61.358999999999995
- type: ndcg_at_5
value: 64.277
- type: precision_at_1
value: 73.288
- type: precision_at_10
value: 14.17
- type: precision_at_100
value: 1.659
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.487
- type: precision_at_5
value: 25.999
- type: recall_at_1
value: 36.644
- type: recall_at_10
value: 70.851
- type: recall_at_100
value: 82.94399999999999
- type: recall_at_1000
value: 91.134
- type: recall_at_3
value: 59.230000000000004
- type: recall_at_5
value: 64.997
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.00280000000001
- type: ap
value: 80.46302061021223
- type: f1
value: 85.9592921596419
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.541
- type: map_at_10
value: 34.625
- type: map_at_100
value: 35.785
- type: map_at_1000
value: 35.831
- type: map_at_3
value: 30.823
- type: map_at_5
value: 32.967999999999996
- type: mrr_at_1
value: 23.180999999999997
- type: mrr_at_10
value: 35.207
- type: mrr_at_100
value: 36.315
- type: mrr_at_1000
value: 36.355
- type: mrr_at_3
value: 31.483
- type: mrr_at_5
value: 33.589999999999996
- type: ndcg_at_1
value: 23.195
- type: ndcg_at_10
value: 41.461
- type: ndcg_at_100
value: 47.032000000000004
- type: ndcg_at_1000
value: 48.199999999999996
- type: ndcg_at_3
value: 33.702
- type: ndcg_at_5
value: 37.522
- type: precision_at_1
value: 23.195
- type: precision_at_10
value: 6.526999999999999
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.308000000000002
- type: precision_at_5
value: 10.507
- type: recall_at_1
value: 22.541
- type: recall_at_10
value: 62.524
- type: recall_at_100
value: 88.228
- type: recall_at_1000
value: 97.243
- type: recall_at_3
value: 41.38
- type: recall_at_5
value: 50.55
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.69949840401279
- type: f1
value: 92.54141471311786
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.56041951664386
- type: f1
value: 55.88499977508287
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.62071284465365
- type: f1
value: 69.36717546572152
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.35843981170142
- type: f1
value: 76.15496453538884
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.33664956793118
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.883839621715524
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.096874986740758
- type: mrr
value: 30.97300481932132
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.4
- type: map_at_10
value: 11.852
- type: map_at_100
value: 14.758
- type: map_at_1000
value: 16.134
- type: map_at_3
value: 8.558
- type: map_at_5
value: 10.087
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 52.05800000000001
- type: mrr_at_100
value: 52.689
- type: mrr_at_1000
value: 52.742999999999995
- type: mrr_at_3
value: 50.205999999999996
- type: mrr_at_5
value: 51.367
- type: ndcg_at_1
value: 42.57
- type: ndcg_at_10
value: 32.449
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 38.351
- type: ndcg_at_3
value: 37.044
- type: ndcg_at_5
value: 35.275
- type: precision_at_1
value: 44.272
- type: precision_at_10
value: 23.87
- type: precision_at_100
value: 7.625
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 34.365
- type: precision_at_5
value: 30.341
- type: recall_at_1
value: 5.4
- type: recall_at_10
value: 15.943999999999999
- type: recall_at_100
value: 29.805
- type: recall_at_1000
value: 61.695
- type: recall_at_3
value: 9.539
- type: recall_at_5
value: 12.127
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.047000000000004
- type: map_at_10
value: 51.6
- type: map_at_100
value: 52.449999999999996
- type: map_at_1000
value: 52.476
- type: map_at_3
value: 47.452
- type: map_at_5
value: 49.964
- type: mrr_at_1
value: 40.382
- type: mrr_at_10
value: 54.273
- type: mrr_at_100
value: 54.859
- type: mrr_at_1000
value: 54.876000000000005
- type: mrr_at_3
value: 51.014
- type: mrr_at_5
value: 52.983999999999995
- type: ndcg_at_1
value: 40.353
- type: ndcg_at_10
value: 59.11300000000001
- type: ndcg_at_100
value: 62.604000000000006
- type: ndcg_at_1000
value: 63.187000000000005
- type: ndcg_at_3
value: 51.513
- type: ndcg_at_5
value: 55.576
- type: precision_at_1
value: 40.353
- type: precision_at_10
value: 9.418
- type: precision_at_100
value: 1.1440000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.250999999999998
- type: recall_at_1
value: 36.047000000000004
- type: recall_at_10
value: 79.22200000000001
- type: recall_at_100
value: 94.23
- type: recall_at_1000
value: 98.51100000000001
- type: recall_at_3
value: 59.678
- type: recall_at_5
value: 68.967
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.232
- type: map_at_10
value: 81.674
- type: map_at_100
value: 82.338
- type: map_at_1000
value: 82.36099999999999
- type: map_at_3
value: 78.833
- type: map_at_5
value: 80.58
- type: mrr_at_1
value: 78.64
- type: mrr_at_10
value: 85.164
- type: mrr_at_100
value: 85.317
- type: mrr_at_1000
value: 85.319
- type: mrr_at_3
value: 84.127
- type: mrr_at_5
value: 84.789
- type: ndcg_at_1
value: 78.63
- type: ndcg_at_10
value: 85.711
- type: ndcg_at_100
value: 87.238
- type: ndcg_at_1000
value: 87.444
- type: ndcg_at_3
value: 82.788
- type: ndcg_at_5
value: 84.313
- type: precision_at_1
value: 78.63
- type: precision_at_10
value: 12.977
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.113
- type: precision_at_5
value: 23.71
- type: recall_at_1
value: 68.232
- type: recall_at_10
value: 93.30199999999999
- type: recall_at_100
value: 98.799
- type: recall_at_1000
value: 99.885
- type: recall_at_3
value: 84.827
- type: recall_at_5
value: 89.188
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.71879170816294
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 59.65866311751794
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.218
- type: map_at_10
value: 10.337
- type: map_at_100
value: 12.131
- type: map_at_1000
value: 12.411
- type: map_at_3
value: 7.4270000000000005
- type: map_at_5
value: 8.913
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 30.868000000000002
- type: mrr_at_100
value: 31.903
- type: mrr_at_1000
value: 31.972
- type: mrr_at_3
value: 27.367
- type: mrr_at_5
value: 29.372
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.765
- type: ndcg_at_100
value: 24.914
- type: ndcg_at_1000
value: 30.206
- type: ndcg_at_3
value: 16.64
- type: ndcg_at_5
value: 14.712
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 9.24
- type: precision_at_100
value: 1.9560000000000002
- type: precision_at_1000
value: 0.32299999999999995
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.94
- type: recall_at_1
value: 4.218
- type: recall_at_10
value: 18.752
- type: recall_at_100
value: 39.7
- type: recall_at_1000
value: 65.57300000000001
- type: recall_at_3
value: 9.428
- type: recall_at_5
value: 13.133000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04338850207233
- type: cos_sim_spearman
value: 78.5054651430423
- type: euclidean_pearson
value: 80.30739451228612
- type: euclidean_spearman
value: 78.48377464299097
- type: manhattan_pearson
value: 80.40795049052781
- type: manhattan_spearman
value: 78.49506205443114
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.11596224442962
- type: cos_sim_spearman
value: 76.20997388935461
- type: euclidean_pearson
value: 80.56858451349109
- type: euclidean_spearman
value: 75.92659183871186
- type: manhattan_pearson
value: 80.60246102203844
- type: manhattan_spearman
value: 76.03018971432664
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.34691640755737
- type: cos_sim_spearman
value: 82.4018369631579
- type: euclidean_pearson
value: 81.87673092245366
- type: euclidean_spearman
value: 82.3671489960678
- type: manhattan_pearson
value: 81.88222387719948
- type: manhattan_spearman
value: 82.3816590344736
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.2836092579524
- type: cos_sim_spearman
value: 78.99982781772064
- type: euclidean_pearson
value: 80.5184271010527
- type: euclidean_spearman
value: 78.89777392101904
- type: manhattan_pearson
value: 80.53585705018664
- type: manhattan_spearman
value: 78.92898405472994
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.7349907750784
- type: cos_sim_spearman
value: 87.7611234446225
- type: euclidean_pearson
value: 86.98759326731624
- type: euclidean_spearman
value: 87.58321319424618
- type: manhattan_pearson
value: 87.03483090370842
- type: manhattan_spearman
value: 87.63278333060288
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.75873694924825
- type: cos_sim_spearman
value: 83.80237999094724
- type: euclidean_pearson
value: 83.55023725861537
- type: euclidean_spearman
value: 84.12744338577744
- type: manhattan_pearson
value: 83.58816983036232
- type: manhattan_spearman
value: 84.18520748676501
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.21630882940174
- type: cos_sim_spearman
value: 87.72382883437031
- type: euclidean_pearson
value: 88.69933350930333
- type: euclidean_spearman
value: 88.24660814383081
- type: manhattan_pearson
value: 88.77331018833499
- type: manhattan_spearman
value: 88.26109989380632
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.11854063060489
- type: cos_sim_spearman
value: 63.14678634195072
- type: euclidean_pearson
value: 61.679090067000864
- type: euclidean_spearman
value: 62.28876589509653
- type: manhattan_pearson
value: 62.082324165511004
- type: manhattan_spearman
value: 62.56030932816679
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.00319882832645
- type: cos_sim_spearman
value: 85.94529772647257
- type: euclidean_pearson
value: 85.6661390122756
- type: euclidean_spearman
value: 85.97747815545827
- type: manhattan_pearson
value: 85.58422770541893
- type: manhattan_spearman
value: 85.9237139181532
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.16198731863916
- type: mrr
value: 94.25202702163487
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.761
- type: map_at_10
value: 64.396
- type: map_at_100
value: 65.07
- type: map_at_1000
value: 65.09899999999999
- type: map_at_3
value: 61.846000000000004
- type: map_at_5
value: 63.284
- type: mrr_at_1
value: 57.667
- type: mrr_at_10
value: 65.83099999999999
- type: mrr_at_100
value: 66.36800000000001
- type: mrr_at_1000
value: 66.39399999999999
- type: mrr_at_3
value: 64.056
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 57.667
- type: ndcg_at_10
value: 68.854
- type: ndcg_at_100
value: 71.59100000000001
- type: ndcg_at_1000
value: 72.383
- type: ndcg_at_3
value: 64.671
- type: ndcg_at_5
value: 66.796
- type: precision_at_1
value: 57.667
- type: precision_at_10
value: 9.167
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.667
- type: recall_at_1
value: 54.761
- type: recall_at_10
value: 80.9
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 69.672
- type: recall_at_5
value: 75.083
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8079207920792
- type: cos_sim_ap
value: 94.88470927617445
- type: cos_sim_f1
value: 90.08179959100204
- type: cos_sim_precision
value: 92.15481171548117
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.58613861386138
- type: dot_ap
value: 82.94822578881316
- type: dot_f1
value: 77.33333333333333
- type: dot_precision
value: 79.36842105263158
- type: dot_recall
value: 75.4
- type: euclidean_accuracy
value: 99.8069306930693
- type: euclidean_ap
value: 94.81367858031837
- type: euclidean_f1
value: 90.01009081735621
- type: euclidean_precision
value: 90.83503054989816
- type: euclidean_recall
value: 89.2
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 94.91405337220161
- type: manhattan_f1
value: 90.2763561924258
- type: manhattan_precision
value: 92.45283018867924
- type: manhattan_recall
value: 88.2
- type: max_accuracy
value: 99.81188118811882
- type: max_ap
value: 94.91405337220161
- type: max_f1
value: 90.2763561924258
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.511599500053094
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.984728147814707
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.93428193939015
- type: mrr
value: 50.916557911043206
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.562500894537145
- type: cos_sim_spearman
value: 31.162587976726307
- type: dot_pearson
value: 22.633662187735762
- type: dot_spearman
value: 22.723000282378962
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.219
- type: map_at_10
value: 1.871
- type: map_at_100
value: 10.487
- type: map_at_1000
value: 25.122
- type: map_at_3
value: 0.657
- type: map_at_5
value: 1.0699999999999998
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 89.567
- type: mrr_at_100
value: 89.748
- type: mrr_at_1000
value: 89.748
- type: mrr_at_3
value: 88.667
- type: mrr_at_5
value: 89.567
- type: ndcg_at_1
value: 80
- type: ndcg_at_10
value: 74.533
- type: ndcg_at_100
value: 55.839000000000006
- type: ndcg_at_1000
value: 49.748
- type: ndcg_at_3
value: 79.53099999999999
- type: ndcg_at_5
value: 78.245
- type: precision_at_1
value: 84
- type: precision_at_10
value: 78.4
- type: precision_at_100
value: 56.99999999999999
- type: precision_at_1000
value: 21.98
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84.8
- type: recall_at_1
value: 0.219
- type: recall_at_10
value: 2.02
- type: recall_at_100
value: 13.555
- type: recall_at_1000
value: 46.739999999999995
- type: recall_at_3
value: 0.685
- type: recall_at_5
value: 1.13
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.5029999999999997
- type: map_at_10
value: 11.042
- type: map_at_100
value: 16.326999999999998
- type: map_at_1000
value: 17.836
- type: map_at_3
value: 6.174
- type: map_at_5
value: 7.979
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 52.617000000000004
- type: mrr_at_100
value: 53.351000000000006
- type: mrr_at_1000
value: 53.351000000000006
- type: mrr_at_3
value: 46.939
- type: mrr_at_5
value: 50.714000000000006
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 27.125
- type: ndcg_at_100
value: 35.845
- type: ndcg_at_1000
value: 47.377
- type: ndcg_at_3
value: 29.633
- type: ndcg_at_5
value: 28.378999999999998
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 24.082
- type: precision_at_100
value: 6.877999999999999
- type: precision_at_1000
value: 1.463
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 28.571
- type: recall_at_1
value: 3.5029999999999997
- type: recall_at_10
value: 17.068
- type: recall_at_100
value: 43.361
- type: recall_at_1000
value: 78.835
- type: recall_at_3
value: 6.821000000000001
- type: recall_at_5
value: 10.357
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.0954
- type: ap
value: 14.216844153511959
- type: f1
value: 54.63687418565117
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.46293152235427
- type: f1
value: 61.744177921638645
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.12708617788644
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.75430649102938
- type: cos_sim_ap
value: 73.34252536948081
- type: cos_sim_f1
value: 67.53758935173774
- type: cos_sim_precision
value: 63.3672525439408
- type: cos_sim_recall
value: 72.29551451187335
- type: dot_accuracy
value: 81.71305954580676
- type: dot_ap
value: 59.5532209082386
- type: dot_f1
value: 56.18466898954705
- type: dot_precision
value: 47.830923248053395
- type: dot_recall
value: 68.07387862796834
- type: euclidean_accuracy
value: 85.81987244441795
- type: euclidean_ap
value: 73.34325409809446
- type: euclidean_f1
value: 67.83451360417443
- type: euclidean_precision
value: 64.09955388588871
- type: euclidean_recall
value: 72.0316622691293
- type: manhattan_accuracy
value: 85.68277999642368
- type: manhattan_ap
value: 73.1535450121903
- type: manhattan_f1
value: 67.928237896289
- type: manhattan_precision
value: 63.56945722171113
- type: manhattan_recall
value: 72.9287598944591
- type: max_accuracy
value: 85.81987244441795
- type: max_ap
value: 73.34325409809446
- type: max_f1
value: 67.928237896289
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.90441262079403
- type: cos_sim_ap
value: 85.79331880741438
- type: cos_sim_f1
value: 78.31563529842548
- type: cos_sim_precision
value: 74.6683424102779
- type: cos_sim_recall
value: 82.33754234678165
- type: dot_accuracy
value: 84.89928978926534
- type: dot_ap
value: 75.25819218316
- type: dot_f1
value: 69.88730119720536
- type: dot_precision
value: 64.23362374959665
- type: dot_recall
value: 76.63227594702803
- type: euclidean_accuracy
value: 89.01695967710637
- type: euclidean_ap
value: 85.98986606038852
- type: euclidean_f1
value: 78.5277880014722
- type: euclidean_precision
value: 75.22211253701876
- type: euclidean_recall
value: 82.13735756082538
- type: manhattan_accuracy
value: 88.99561454573679
- type: manhattan_ap
value: 85.92262421793953
- type: manhattan_f1
value: 78.38866094740769
- type: manhattan_precision
value: 76.02373028505282
- type: manhattan_recall
value: 80.9054511857099
- type: max_accuracy
value: 89.01695967710637
- type: max_ap
value: 85.98986606038852
- type: max_f1
value: 78.5277880014722
language:
- en
license: mit
---
# E5-small-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2')
model = AutoModel.from_pretrained('intfloat/e5-small-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
## Sentence Transformers
Below is an example of usage with sentence_transformers (`pip install sentence_transformers~=2.2.2`).
This integration is community-contributed, and results may differ from the transformers example above by up to numerical precision.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-small-v2')
# Each text still needs the "query: " or "passage: " prefix, as in the example above
input_texts = ['query: how much protein should a female eat',
               'passage: summit: the highest point of a mountain']
embeddings = model.encode(input_texts, normalize_embeddings=True)
``` |
syberkrime99/angiestwn | syberkrime99 | 2023-07-09T02:13:54Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T02:11:30Z | ---
license: creativeml-openrail-m
---
|
benbav97/dqn-SpaceInvadersNoFrameskip-v4 | benbav97 | 2023-07-09T02:07:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T02:06:55Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 599.50 +/- 163.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga benbav97 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga benbav97 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga benbav97
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
shouqiangli/chatglm2-6b-int4-002 | shouqiangli | 2023-07-09T01:59:44Z | 144 | 0 | transformers | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"arxiv:1911.02150",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T15:46:48Z | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2-6B
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM2-6B" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1y7pqoloy-9b1g6T6JjA8J0KxvUjbwJw" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
## 介绍
ChatGLM**2**-6B 是开源中英双语对话模型 [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) 的第二代版本,在保留了初代模型对话流畅、部署门槛较低等众多优秀特性的基础之上,ChatGLM**2**-6B 引入了如下新特性:
1. **更强大的性能**:基于 ChatGLM 初代模型的开发经验,我们全面升级了 ChatGLM2-6B 的基座模型。ChatGLM2-6B 使用了 [GLM](https://github.com/THUDM/GLM) 的混合目标函数,经过了 1.4T 中英标识符的预训练与人类偏好对齐训练,[评测结果](#评测结果)显示,相比于初代模型,ChatGLM2-6B 在 MMLU(+23%)、CEval(+33%)、GSM8K(+571%) 、BBH(+60%)等数据集上的性能取得了大幅度的提升,在同尺寸开源模型中具有较强的竞争力。
2. **更长的上下文**:基于 [FlashAttention](https://github.com/HazyResearch/flash-attention) 技术,我们将基座模型的上下文长度(Context Length)由 ChatGLM-6B 的 2K 扩展到了 32K,并在对话阶段使用 8K 的上下文长度训练,允许更多轮次的对话。但当前版本的 ChatGLM2-6B 对单轮超长文档的理解能力有限,我们会在后续迭代升级中着重进行优化。
3. **更高效的推理**:基于 [Multi-Query Attention](http://arxiv.org/abs/1911.02150) 技术,ChatGLM2-6B 有更高效的推理速度和更低的显存占用:在官方的模型实现下,推理速度相比初代提升了 42%,INT4 量化下,6G 显存支持的对话长度由 1K 提升到了 8K。
ChatGLM**2**-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B). It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the following new features:
1. **Stronger Performance**: Based on the development experience of the first-generation ChatGLM model, we have fully upgraded the base model of ChatGLM2-6B. ChatGLM2-6B uses the hybrid objective function of [GLM](https://github.com/THUDM/GLM), and has undergone pre-training with 1.4T bilingual tokens and human preference alignment training. The [evaluation results](README.md#evaluation-results) show that, compared to the first-generation model, ChatGLM2-6B has achieved substantial improvements in performance on datasets like MMLU (+23%), CEval (+33%), GSM8K (+571%), BBH (+60%), showing strong competitiveness among models of the same size.
2. **Longer Context**: Based on [FlashAttention](https://github.com/HazyResearch/flash-attention) technique, we have extended the context length of the base model from 2K in ChatGLM-6B to 32K, and trained with a context length of 8K during the dialogue alignment, allowing for more rounds of dialogue. However, the current version of ChatGLM2-6B has limited understanding of single-round ultra-long documents, which we will focus on optimizing in future iterations.
3. **More Efficient Inference**: Based on [Multi-Query Attention](http://arxiv.org/abs/1911.02150) technique, ChatGLM2-6B has more efficient inference speed and lower GPU memory usage: under the official implementation, the inference speed has increased by 42% compared to the first generation; under INT4 quantization, the dialogue length supported by 6G GPU memory has increased from 1K to 8K.
## Dependencies
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## Usage
You can use the following code to generate a conversation with the ChatGLM2-6B model:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM2-6B)。
For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM2-6B).
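Since INT4 quantization is what enables the 6G-memory figures above, below is a rough sketch of quantized loading. This is only a sketch: the `THUDM/chatglm2-6b-int4` repo id and the `quantize(4)` helper follow the usage described in the official repo and may change between releases.
```python
from transformers import AutoModel, AutoTokenizer

# Option 1: load a pre-quantized INT4 checkpoint (repo id assumed from the official project)
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).cuda()

# Option 2: quantize the full-precision weights at load time
# (quantize() is provided by the model's custom code, not by transformers itself)
# model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()

model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
```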
## Change Log
* v1.0
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM2-6B model weights must follow the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers. The ChatGLM2-6B paper will be released soon; stay tuned.
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
``` |
espnet/Wangyou_Zhang_wsj0_2mix_train_enh_tse_td_speakerbeam_raw | espnet | 2023-07-09T01:59:33Z | 3 | 0 | espnet | [
"espnet",
"audio",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | audio-to-audio | 2023-07-09T01:25:48Z | ---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- wsj0_2mix
license: cc-by-4.0
---
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_wsj0_2mix_train_enh_tse_td_speakerbeam_raw`
This model was trained by Wangyou Zhang using the wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
pip install -e .
cd egs2/wsj0_2mix/tse1
./run.sh --skip_data_prep false --skip_train true --is_tse_task true --download_model espnet/Wangyou_Zhang_wsj0_2mix_train_enh_tse_td_speakerbeam_raw
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Sun Jul 9 09:23:16 CST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 2.0.1`
- Git hash: ``
- Commit date: ``
## enh_train_enh_tse_td_speakerbeam_raw
config: conf/tuning/train_enh_tse_td_speakerbeam.yaml
|dataset|PESQ_NB|STOI|SAR|SDR|SIR|SI_SNR|
|---|---|---|---|---|---|---|
|enhanced_cv_min_8k|3.54|96.41|18.75|18.75|0.00|18.37|
|enhanced_tt_min_8k|3.46|96.35|17.51|17.51|0.00|17.11|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_tse_td_speakerbeam.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_tse_td_speakerbeam_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
skip_stats_npz: false
max_epoch: 100
patience: 20
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 4
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/train/speech_mix_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/train/speech_ref1_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/train/enroll_ref1_shape
valid_shape_file:
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/valid/speech_mix_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/valid/speech_ref1_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/valid/enroll_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes:
- enroll_ref
train_data_path_and_name_and_type:
- - dump/raw/tr_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/tr_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr_min_8k/enroll_spk1.scp
- enroll_ref1
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/cv_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/cv_min_8k/enroll_spk1.scp
- enroll_ref1
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.7
patience: 3
init: null
model_conf:
num_spk: 1
share_encoder: true
criterions:
- name: snr
conf:
eps: 1.0e-07
wrapper: fixed_order
wrapper_conf:
weight: 1.0
train_spk2enroll: null
enroll_segment: 16000
load_spk_embedding: false
load_all_speakers: false
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
speech_volume_normalize: null
use_reverberant_ref: false
num_spk: 1
num_noise_type: 1
sample_rate: 8000
force_single_channel: false
channel_reordering: false
categories: []
encoder: conv
encoder_conf:
channel: 256
kernel_size: 16
stride: 8
extractor: td_speakerbeam
extractor_conf:
layer: 8
stack: 4
bottleneck_dim: 256
hidden_dim: 512
skip_dim: 256
kernel: 3
causal: false
norm_type: gLN
nonlinear: relu
i_adapt_layer: 7
adapt_layer_type: mul
adapt_enroll_dim: 256
use_spk_emb: false
spk_emb_dim: 256
decoder: conv
decoder_conf:
channel: 256
kernel_size: 16
stride: 8
preprocessor: tse
preprocessor_conf: {}
required:
- output_dir
version: '202301'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SKumari/Llama_train_sk | SKumari | 2023-07-09T01:55:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-09T01:55:47Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent code sketch is shown after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
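For reference, a minimal sketch of how the settings listed above map onto `transformers`' `BitsAndBytesConfig`; the base checkpoint id is a placeholder, since the card does not state which model the adapter was trained on.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is a placeholder for the (unstated) base checkpoint
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```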
### Framework versions
- PEFT 0.4.0.dev0
|
saintzeno/ppo-Pyramids | saintzeno | 2023-07-09T01:44:03Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-09T01:43:57Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: saintzeno/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/ncrcnrmlrvcv1300pchjlbdxcyn | nolanaatama | 2023-07-09T01:14:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T01:08:54Z | ---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-guten-rarity-5k-2p5k | NasimB | 2023-07-09T00:51:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T22:55:58Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-rarity-5k-2p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-rarity-5k-2p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7001 | 0.3 | 500 | 5.6280 |
| 5.3666 | 0.59 | 1000 | 5.1990 |
| 5.0079 | 0.89 | 1500 | 4.9539 |
| 4.7385 | 1.19 | 2000 | 4.8095 |
| 4.5783 | 1.48 | 2500 | 4.6793 |
| 4.4688 | 1.78 | 3000 | 4.5716 |
| 4.3327 | 2.08 | 3500 | 4.4960 |
| 4.162 | 2.37 | 4000 | 4.4444 |
| 4.1218 | 2.67 | 4500 | 4.3820 |
| 4.0787 | 2.97 | 5000 | 4.3297 |
| 3.8425 | 3.26 | 5500 | 4.3301 |
| 3.825 | 3.56 | 6000 | 4.2940 |
| 3.8038 | 3.86 | 6500 | 4.2590 |
| 3.6546 | 4.15 | 7000 | 4.2647 |
| 3.5359 | 4.45 | 7500 | 4.2557 |
| 3.5282 | 4.75 | 8000 | 4.2377 |
| 3.4838 | 5.04 | 8500 | 4.2391 |
| 3.3383 | 5.34 | 9000 | 4.2426 |
| 3.3404 | 5.64 | 9500 | 4.2414 |
| 3.3337 | 5.93 | 10000 | 4.2410 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nolanaatama/blllsh2021vrrvcv2600pchshstpn | nolanaatama | 2023-07-09T00:33:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-24T07:33:08Z | ---
license: creativeml-openrail-m
---
|
skywalker7/LunarWalker | skywalker7 | 2023-07-08T23:40:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T23:40:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.93 +/- 17.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ABDUULAHH/ABDULLAH-GPT | ABDUULAHH | 2023-07-08T23:23:25Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T23:23:25Z | ---
license: bigscience-openrail-m
---
|
gvenkat21/reviews-feedback-nudge | gvenkat21 | 2023-07-08T23:11:15Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T22:08:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent code sketch is shown after the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
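For reference, a minimal sketch of the equivalent 8-bit `BitsAndBytesConfig`; the base checkpoint id is a placeholder, since the card does not state which model the adapter was trained on.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 8-bit quantization setting listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# "base-model-id" is a placeholder for the (unstated) base checkpoint
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```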
### Framework versions
- PEFT 0.4.0.dev0
|
hongrui/chest_mimic_v_1 | hongrui | 2023-07-08T22:39:07Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T13:09:12Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/chest_mimic_v_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mimic_chest_xray_v_1 dataset. You can find some example images below, followed by a minimal inference sketch.




|
hbenitez/food_classifier | hbenitez | 2023-07-08T22:37:36Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-06T21:28:12Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hbenitez/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hbenitez/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3735
- Validation Loss: 2.5622
- Train Accuracy: 0.0769
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 260, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5417 | 2.5922 | 0.0 | 0 |
| 2.5103 | 2.5856 | 0.0 | 1 |
| 2.4593 | 2.5738 | 0.0 | 2 |
| 2.4104 | 2.5671 | 0.0 | 3 |
| 2.3735 | 2.5622 | 0.0769 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0-rc2
- Datasets 2.13.1
- Tokenizers 0.13.3
|
miki-kawa/roberta-large-lora-token-classification | miki-kawa | 2023-07-08T22:36:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T22:35:59Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/gpt2-concat-longer-top3-aochildes-cbt-guten | NasimB | 2023-07-08T22:31:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T20:36:20Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-longer-top3-aochildes-cbt-guten
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-longer-top3-aochildes-cbt-guten
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7253 | 0.3 | 500 | 5.6413 |
| 5.3666 | 0.6 | 1000 | 5.2023 |
| 5.0141 | 0.91 | 1500 | 4.9461 |
| 4.7385 | 1.21 | 2000 | 4.8082 |
| 4.5903 | 1.51 | 2500 | 4.6877 |
| 4.483 | 1.81 | 3000 | 4.5759 |
| 4.314 | 2.12 | 3500 | 4.5164 |
| 4.168 | 2.42 | 4000 | 4.4640 |
| 4.1319 | 2.72 | 4500 | 4.4091 |
| 4.0719 | 3.02 | 5000 | 4.3683 |
| 3.8391 | 3.33 | 5500 | 4.3567 |
| 3.8393 | 3.63 | 6000 | 4.3232 |
| 3.8102 | 3.93 | 6500 | 4.2943 |
| 3.5985 | 4.23 | 7000 | 4.3109 |
| 3.5515 | 4.53 | 7500 | 4.2990 |
| 3.5377 | 4.84 | 8000 | 4.2872 |
| 3.4488 | 5.14 | 8500 | 4.2986 |
| 3.3497 | 5.44 | 9000 | 4.3006 |
| 3.3502 | 5.74 | 9500 | 4.2999 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chainsurfer/ppo-LunarLander-v2 | chainsurfer | 2023-07-08T22:19:55Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T22:19:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.10 +/- 22.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
grace-pro/bert-finetuned-hausa | grace-pro | 2023-07-08T22:07:37Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-07T21:03:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-hausa
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1505
- Precision: 0.6680
- Recall: 0.4474
- F1: 0.5359
- Accuracy: 0.9557
## Model description
More information needed
## Intended uses & limitations
More information needed
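A minimal inference sketch, assuming the standard transformers token-classification pipeline (the example sentence is illustrative, and the label set depends on the training data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/bert-finetuned-hausa",
    aggregation_strategy="simple",
)

# Illustrative Hausa sentence; the entity labels returned depend on the training data
print(ner("Shugaba Muhammadu Buhari ya ziyarci Kano a ranar Litinin."))
```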
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1683 | 1.0 | 2624 | 0.1589 | 0.6480 | 0.3641 | 0.4663 | 0.9513 |
| 0.1446 | 2.0 | 5248 | 0.1509 | 0.6658 | 0.4147 | 0.5111 | 0.9543 |
| 0.1163 | 3.0 | 7872 | 0.1505 | 0.6680 | 0.4474 | 0.5359 | 0.9557 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Hariharavarshan/Cover_genie | Hariharavarshan | 2023-07-08T21:48:30Z | 172 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"arxiv:2210.11416",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-03T06:27:28Z | ---
license: apache-2.0
language:
- en
metrics:
- rouge
library_name: transformers
---
# Model Card for CoverGenie
<!-- Provide a quick summary of what the model is/does. -->
The goal of this project is to build a fine-grained mini-ChatGPT (named “CoverGenie”), designed to generate resumes and cover letters from job descriptions in the tech field.
By nature it is a language generation task: it takes a job description as an input sequence of text and turns it into a structured resume or cover letter in a particular style.
This may involve parameter-efficient finetuning, reinforcement learning, and prompt engineering to some extent.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** T5 (Text-to-Text-Transfer-Transformer)
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0
- **Finetuned from model:** FlanT5 Large
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** https://arxiv.org/pdf/2210.11416.pdf
## Uses
It can generate a cover letter given a candidate's **job description** and **resume** as input.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GenerationConfig
import nltk
nltk.download('punkt')
max_source_length=512
tokenizer = AutoTokenizer.from_pretrained("Hariharavarshan/Cover_genie")
model = AutoModelForSeq2SeqLM.from_pretrained("Hariharavarshan/Cover_genie")
JD='''<Job description Text>'''
resume_text= '''<Resume Text>'''
final_text="give me a cover letter based on the a job description and a resume. Job description:"+JD +" Resume:"+ resume_text
generation_config = GenerationConfig.from_pretrained("google/flan-t5-large",temperature=2.0)
inputs = tokenizer(final_text, max_length=max_source_length, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=3, do_sample=True, min_length=1000,
max_length=10000,generation_config=generation_config,num_return_sequences=3)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
generated_Coverletter = nltk.sent_tokenize(decoded_output.strip())
```
**Developed by:** Hariharavarshan, Jayathilaga, Sara, Meiyu
|
jncraton/codet5p-770m-py-ct2-int8 | jncraton | 2023-07-08T21:44:13Z | 600 | 0 | transformers | [
"transformers",
"arxiv:2305.07922",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T20:47:01Z | ---
license: bsd-3-clause
---
# CodeT5+ 770M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as the original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-770m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
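Since this repository hosts an int8 CTranslate2 conversion of the checkpoint, below is a rough sketch of running it through `ctranslate2` instead; the local model directory is an assumption, and the tokenizer still comes from the original Salesforce checkpoint.
```python
import ctranslate2
from transformers import AutoTokenizer

# Path to the downloaded CTranslate2 model directory (assumption)
translator = ctranslate2.Translator("codet5p-770m-py-ct2-int8", device="cpu")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-770m-py")

prompt = "def print_hello_world():"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = translator.translate_batch([input_tokens], max_decoding_length=10)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```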
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by preserving only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby.`
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 15.5% pass@1 on HumanEval in the zero-shot setting, which is comparable to much larger LLMs such as Incoder 6B’s 15.2%, GPT-NeoX 20B’s 15.4%, and PaLM 62B’s 15.9%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
skrl/IsaacGymEnvs-Humanoid-PPO | skrl | 2023-07-08T20:59:46Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:44:07Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 6524.74 +/- 570.54
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Humanoid
type: IsaacGymEnvs-Humanoid
---
<!-- ---
torch: 6524.74 +/- 570.54
jax: 6265.95 +/- 280.11
numpy: 5727.54 +/- 406.96
--- -->
# IsaacGymEnvs-Humanoid-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Humanoid
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Humanoid-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Humanoid-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
# Imports below follow the skrl documentation examples; module paths may differ across skrl versions
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL

cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 32 # memory_size
cfg["learning_epochs"] = 5
cfg["mini_batches"] = 4 # 32 * 4096 / 32768
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 5e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
snousias/bert-base-greek-uncased-v2-finetuned-polylex | snousias | 2023-07-08T20:51:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T20:01:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-greek-uncased-v2-finetuned-polylex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v2-finetuned-polylex
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 4.7613 | 1.0 | 12 | 3.7659 |
| 3.8949 | 2.0 | 24 | 3.2678 |
| 3.223 | 3.0 | 36 | 2.5675 |
| 2.9941 | 4.0 | 48 | 2.6363 |
| 3.1597 | 5.0 | 60 | 2.8368 |
| 2.8535 | 6.0 | 72 | 2.8220 |
| 2.9492 | 7.0 | 84 | 3.0838 |
| 2.6935 | 8.0 | 96 | 2.6604 |
| 2.8037 | 9.0 | 108 | 2.4602 |
| 3.101 | 10.0 | 120 | 2.6140 |
| 2.4546 | 11.0 | 132 | 2.6074 |
| 2.6299 | 12.0 | 144 | 2.5843 |
| 2.4703 | 13.0 | 156 | 2.6383 |
| 2.4184 | 14.0 | 168 | 2.3316 |
| 2.6144 | 15.0 | 180 | 2.0832 |
| 2.6209 | 16.0 | 192 | 2.3583 |
| 2.451 | 17.0 | 204 | 2.9010 |
| 2.4358 | 18.0 | 216 | 3.0525 |
| 2.4198 | 19.0 | 228 | 2.6463 |
| 2.3365 | 20.0 | 240 | 2.7683 |
| 2.2167 | 21.0 | 252 | 2.9289 |
| 2.4412 | 22.0 | 264 | 2.0613 |
| 2.3041 | 23.0 | 276 | 2.6865 |
| 2.381 | 24.0 | 288 | 2.4213 |
| 2.3244 | 25.0 | 300 | 2.3309 |
| 2.2025 | 26.0 | 312 | 3.8109 |
| 2.3091 | 27.0 | 324 | 3.1869 |
| 2.2988 | 28.0 | 336 | 1.9325 |
| 2.2883 | 29.0 | 348 | 2.0473 |
| 2.2323 | 30.0 | 360 | 2.6196 |
| 2.1218 | 31.0 | 372 | 2.3249 |
| 2.138 | 32.0 | 384 | 2.4549 |
| 2.0153 | 33.0 | 396 | 2.0830 |
| 1.8986 | 34.0 | 408 | 2.3666 |
| 2.0264 | 35.0 | 420 | 2.3655 |
| 2.0425 | 36.0 | 432 | 2.6095 |
| 2.0762 | 37.0 | 444 | 2.4949 |
| 2.0342 | 38.0 | 456 | 1.5367 |
| 1.8288 | 39.0 | 468 | 2.6941 |
| 1.9419 | 40.0 | 480 | 2.5493 |
| 2.0241 | 41.0 | 492 | 2.6684 |
| 1.9002 | 42.0 | 504 | 2.3222 |
| 1.9645 | 43.0 | 516 | 2.8538 |
| 1.6755 | 44.0 | 528 | 1.7693 |
| 1.9111 | 45.0 | 540 | 2.3962 |
| 2.0126 | 46.0 | 552 | 2.2722 |
| 2.032 | 47.0 | 564 | 2.2347 |
| 2.0232 | 48.0 | 576 | 1.7626 |
| 1.8135 | 49.0 | 588 | 2.5355 |
| 1.6517 | 50.0 | 600 | 2.9392 |
| 1.6788 | 51.0 | 612 | 1.9630 |
| 1.6126 | 52.0 | 624 | 2.1936 |
| 1.8367 | 53.0 | 636 | 3.4687 |
| 1.8566 | 54.0 | 648 | 2.0458 |
| 1.6203 | 55.0 | 660 | 2.1171 |
| 1.6941 | 56.0 | 672 | 1.9957 |
| 1.5142 | 57.0 | 684 | 2.2677 |
| 1.7009 | 58.0 | 696 | 2.8793 |
| 1.6105 | 59.0 | 708 | 2.1910 |
| 1.6282 | 60.0 | 720 | 1.9620 |
| 1.7587 | 61.0 | 732 | 3.4591 |
| 1.6177 | 62.0 | 744 | 2.0555 |
| 1.5287 | 63.0 | 756 | 2.9750 |
| 1.6862 | 64.0 | 768 | 2.2498 |
| 1.5724 | 65.0 | 780 | 2.5222 |
| 1.705 | 66.0 | 792 | 2.4491 |
| 1.6787 | 67.0 | 804 | 2.4474 |
| 1.665 | 68.0 | 816 | 2.3176 |
| 1.3825 | 69.0 | 828 | 2.5131 |
| 1.4641 | 70.0 | 840 | 2.0134 |
| 1.3444 | 71.0 | 852 | 2.7905 |
| 1.6672 | 72.0 | 864 | 3.0861 |
| 1.5524 | 73.0 | 876 | 2.3998 |
| 1.4178 | 74.0 | 888 | 2.8779 |
| 1.4374 | 75.0 | 900 | 2.3486 |
| 1.2693 | 76.0 | 912 | 2.6789 |
| 1.5111 | 77.0 | 924 | 2.4917 |
| 1.3847 | 78.0 | 936 | 2.0904 |
| 1.3115 | 79.0 | 948 | 2.7551 |
| 1.5094 | 80.0 | 960 | 2.4040 |
| 1.3265 | 81.0 | 972 | 2.6506 |
| 1.226 | 82.0 | 984 | 3.0660 |
| 1.3867 | 83.0 | 996 | 1.8890 |
| 1.2752 | 84.0 | 1008 | 2.9983 |
| 1.3847 | 85.0 | 1020 | 2.7811 |
| 1.3903 | 86.0 | 1032 | 2.9952 |
| 1.3858 | 87.0 | 1044 | 2.1377 |
| 1.2792 | 88.0 | 1056 | 2.9294 |
| 1.3319 | 89.0 | 1068 | 2.5720 |
| 1.1521 | 90.0 | 1080 | 2.4535 |
| 1.2619 | 91.0 | 1092 | 2.1846 |
| 1.2885 | 92.0 | 1104 | 2.0970 |
| 1.1852 | 93.0 | 1116 | 2.2783 |
| 1.3225 | 94.0 | 1128 | 2.7983 |
| 1.1694 | 95.0 | 1140 | 2.0372 |
| 1.1184 | 96.0 | 1152 | 2.7704 |
| 1.1852 | 97.0 | 1164 | 2.8402 |
| 1.2402 | 98.0 | 1176 | 2.2748 |
| 1.1182 | 99.0 | 1188 | 2.7973 |
| 1.2023 | 100.0 | 1200 | 2.1480 |
| 1.0637 | 101.0 | 1212 | 2.1987 |
| 1.1003 | 102.0 | 1224 | 1.9750 |
| 1.2729 | 103.0 | 1236 | 2.6881 |
| 1.0963 | 104.0 | 1248 | 2.5819 |
| 1.2034 | 105.0 | 1260 | 2.8611 |
| 1.038 | 106.0 | 1272 | 1.8322 |
| 1.3583 | 107.0 | 1284 | 2.7330 |
| 1.1453 | 108.0 | 1296 | 2.5139 |
| 1.1593 | 109.0 | 1308 | 2.4409 |
| 1.1126 | 110.0 | 1320 | 2.3118 |
| 0.9801 | 111.0 | 1332 | 2.1956 |
| 1.2605 | 112.0 | 1344 | 2.8087 |
| 1.1756 | 113.0 | 1356 | 2.1508 |
| 0.8898 | 114.0 | 1368 | 2.8882 |
| 1.1959 | 115.0 | 1380 | 2.6419 |
| 1.0536 | 116.0 | 1392 | 2.2053 |
| 1.1508 | 117.0 | 1404 | 2.4917 |
| 0.9824 | 118.0 | 1416 | 2.8271 |
| 1.2391 | 119.0 | 1428 | 2.0959 |
| 0.9495 | 120.0 | 1440 | 2.5855 |
| 0.9823 | 121.0 | 1452 | 2.3001 |
| 0.9818 | 122.0 | 1464 | 2.4058 |
| 1.0764 | 123.0 | 1476 | 2.7615 |
| 1.1002 | 124.0 | 1488 | 2.2705 |
| 0.9838 | 125.0 | 1500 | 2.4089 |
| 1.1747 | 126.0 | 1512 | 2.2487 |
| 0.9397 | 127.0 | 1524 | 2.3436 |
| 0.7915 | 128.0 | 1536 | 2.7810 |
| 0.8227 | 129.0 | 1548 | 2.9488 |
| 1.0162 | 130.0 | 1560 | 1.9826 |
| 1.038 | 131.0 | 1572 | 2.3104 |
| 0.7145 | 132.0 | 1584 | 3.1713 |
| 0.9299 | 133.0 | 1596 | 2.4383 |
| 1.1 | 134.0 | 1608 | 2.7588 |
| 0.7346 | 135.0 | 1620 | 2.4870 |
| 0.898 | 136.0 | 1632 | 2.3211 |
| 1.0406 | 137.0 | 1644 | 2.1006 |
| 0.7669 | 138.0 | 1656 | 2.6216 |
| 0.8182 | 139.0 | 1668 | 2.6548 |
| 0.9577 | 140.0 | 1680 | 3.0709 |
| 0.843 | 141.0 | 1692 | 2.0712 |
| 0.8871 | 142.0 | 1704 | 2.0269 |
| 0.8183 | 143.0 | 1716 | 2.1832 |
| 0.9048 | 144.0 | 1728 | 2.3581 |
| 0.8197 | 145.0 | 1740 | 2.5645 |
| 0.7477 | 146.0 | 1752 | 3.4650 |
| 0.8257 | 147.0 | 1764 | 3.0643 |
| 0.801 | 148.0 | 1776 | 2.6476 |
| 0.8802 | 149.0 | 1788 | 2.5711 |
| 0.7332 | 150.0 | 1800 | 2.7936 |
| 0.825 | 151.0 | 1812 | 2.9548 |
| 0.7226 | 152.0 | 1824 | 2.2194 |
| 0.6707 | 153.0 | 1836 | 2.0006 |
| 0.6401 | 154.0 | 1848 | 2.7826 |
| 0.9888 | 155.0 | 1860 | 2.1371 |
| 0.6399 | 156.0 | 1872 | 2.1082 |
| 0.7128 | 157.0 | 1884 | 2.7275 |
| 0.684 | 158.0 | 1896 | 2.0162 |
| 0.7906 | 159.0 | 1908 | 1.9985 |
| 0.8381 | 160.0 | 1920 | 2.6745 |
| 0.7233 | 161.0 | 1932 | 2.7703 |
| 0.6977 | 162.0 | 1944 | 2.2407 |
| 0.7948 | 163.0 | 1956 | 2.5955 |
| 0.7616 | 164.0 | 1968 | 2.3938 |
| 0.8808 | 165.0 | 1980 | 2.5147 |
| 0.8188 | 166.0 | 1992 | 1.6625 |
| 0.6083 | 167.0 | 2004 | 3.1102 |
| 0.7814 | 168.0 | 2016 | 2.7221 |
| 0.6402 | 169.0 | 2028 | 2.4840 |
| 0.7722 | 170.0 | 2040 | 2.2021 |
| 0.7887 | 171.0 | 2052 | 3.1279 |
| 0.7313 | 172.0 | 2064 | 2.1820 |
| 0.7924 | 173.0 | 2076 | 1.7631 |
| 0.6142 | 174.0 | 2088 | 2.7580 |
| 0.7562 | 175.0 | 2100 | 2.0954 |
| 0.5619 | 176.0 | 2112 | 2.3388 |
| 0.9217 | 177.0 | 2124 | 3.4578 |
| 0.6253 | 178.0 | 2136 | 1.9490 |
| 0.6385 | 179.0 | 2148 | 1.9926 |
| 0.7452 | 180.0 | 2160 | 3.1260 |
| 0.5797 | 181.0 | 2172 | 2.7739 |
| 0.6138 | 182.0 | 2184 | 2.8513 |
| 0.5669 | 183.0 | 2196 | 2.4326 |
| 0.6944 | 184.0 | 2208 | 2.7487 |
| 0.7057 | 185.0 | 2220 | 2.4420 |
| 0.8157 | 186.0 | 2232 | 2.8531 |
| 0.5743 | 187.0 | 2244 | 3.0470 |
| 0.595 | 188.0 | 2256 | 2.8035 |
| 0.7408 | 189.0 | 2268 | 2.7126 |
| 0.5912 | 190.0 | 2280 | 3.7428 |
| 0.5725 | 191.0 | 2292 | 2.3815 |
| 0.6521 | 192.0 | 2304 | 2.7721 |
| 0.7074 | 193.0 | 2316 | 2.5499 |
| 0.5764 | 194.0 | 2328 | 2.6066 |
| 0.5298 | 195.0 | 2340 | 2.2085 |
| 0.6197 | 196.0 | 2352 | 2.4815 |
| 0.4731 | 197.0 | 2364 | 2.8488 |
| 0.619 | 198.0 | 2376 | 3.2678 |
| 0.5954 | 199.0 | 2388 | 2.1428 |
| 0.5277 | 200.0 | 2400 | 2.7153 |
| 0.7886 | 201.0 | 2412 | 2.2156 |
| 0.512 | 202.0 | 2424 | 2.2840 |
| 0.55 | 203.0 | 2436 | 2.7672 |
| 0.4958 | 204.0 | 2448 | 1.6703 |
| 0.7151 | 205.0 | 2460 | 2.1373 |
| 0.5112 | 206.0 | 2472 | 2.7734 |
| 0.6594 | 207.0 | 2484 | 2.5554 |
| 0.4422 | 208.0 | 2496 | 1.8383 |
| 0.5405 | 209.0 | 2508 | 2.9803 |
| 0.555 | 210.0 | 2520 | 2.4756 |
| 0.605 | 211.0 | 2532 | 2.6883 |
| 0.5143 | 212.0 | 2544 | 3.2208 |
| 0.5458 | 213.0 | 2556 | 2.6816 |
| 0.5469 | 214.0 | 2568 | 3.0502 |
| 0.5425 | 215.0 | 2580 | 2.8781 |
| 0.4458 | 216.0 | 2592 | 2.8725 |
| 0.4986 | 217.0 | 2604 | 2.6287 |
| 0.8714 | 218.0 | 2616 | 3.2690 |
| 0.4996 | 219.0 | 2628 | 3.1879 |
| 0.4841 | 220.0 | 2640 | 3.0364 |
| 0.4745 | 221.0 | 2652 | 2.5914 |
| 0.4609 | 222.0 | 2664 | 2.6385 |
| 0.4058 | 223.0 | 2676 | 2.9445 |
| 0.4653 | 224.0 | 2688 | 2.6551 |
| 0.4246 | 225.0 | 2700 | 3.2083 |
| 0.6041 | 226.0 | 2712 | 3.2518 |
| 0.6409 | 227.0 | 2724 | 2.2092 |
| 0.5091 | 228.0 | 2736 | 2.6145 |
| 0.5917 | 229.0 | 2748 | 2.6990 |
| 0.533 | 230.0 | 2760 | 2.9442 |
| 0.4637 | 231.0 | 2772 | 2.5754 |
| 0.5876 | 232.0 | 2784 | 3.3697 |
| 0.5068 | 233.0 | 2796 | 2.1599 |
| 0.5561 | 234.0 | 2808 | 2.4411 |
| 0.3852 | 235.0 | 2820 | 2.1660 |
| 0.5038 | 236.0 | 2832 | 2.5145 |
| 0.4498 | 237.0 | 2844 | 2.9055 |
| 0.3932 | 238.0 | 2856 | 2.0346 |
| 0.4701 | 239.0 | 2868 | 2.4029 |
| 0.554 | 240.0 | 2880 | 3.2398 |
| 0.4836 | 241.0 | 2892 | 2.6803 |
| 0.4752 | 242.0 | 2904 | 2.5135 |
| 0.4507 | 243.0 | 2916 | 1.9342 |
| 0.316 | 244.0 | 2928 | 3.2635 |
| 0.4807 | 245.0 | 2940 | 2.6797 |
| 0.5369 | 246.0 | 2952 | 3.3722 |
| 0.4434 | 247.0 | 2964 | 2.9754 |
| 0.5113 | 248.0 | 2976 | 2.7636 |
| 0.4765 | 249.0 | 2988 | 2.5710 |
| 0.517 | 250.0 | 3000 | 2.6230 |
| 0.4156 | 251.0 | 3012 | 2.7318 |
| 0.4041 | 252.0 | 3024 | 2.9123 |
| 0.4076 | 253.0 | 3036 | 2.5130 |
| 0.4224 | 254.0 | 3048 | 2.4242 |
| 0.464 | 255.0 | 3060 | 2.4092 |
| 0.4631 | 256.0 | 3072 | 2.8105 |
| 0.3792 | 257.0 | 3084 | 2.4955 |
| 0.4282 | 258.0 | 3096 | 2.6907 |
| 0.5803 | 259.0 | 3108 | 2.8609 |
| 0.5043 | 260.0 | 3120 | 3.0090 |
| 0.4026 | 261.0 | 3132 | 3.1805 |
| 0.5926 | 262.0 | 3144 | 2.6541 |
| 0.4021 | 263.0 | 3156 | 2.2630 |
| 0.462 | 264.0 | 3168 | 3.3067 |
| 0.4701 | 265.0 | 3180 | 2.9675 |
| 0.4706 | 266.0 | 3192 | 3.2344 |
| 0.5196 | 267.0 | 3204 | 2.7747 |
| 0.491 | 268.0 | 3216 | 2.5085 |
| 0.4152 | 269.0 | 3228 | 2.5357 |
| 0.4402 | 270.0 | 3240 | 2.6906 |
| 0.4152 | 271.0 | 3252 | 3.1434 |
| 0.4487 | 272.0 | 3264 | 3.2802 |
| 0.3956 | 273.0 | 3276 | 3.3766 |
| 0.3623 | 274.0 | 3288 | 2.8253 |
| 0.3994 | 275.0 | 3300 | 2.2845 |
| 0.4035 | 276.0 | 3312 | 2.5307 |
| 0.3815 | 277.0 | 3324 | 3.3093 |
| 0.4519 | 278.0 | 3336 | 2.2202 |
| 0.3118 | 279.0 | 3348 | 2.7818 |
| 0.5191 | 280.0 | 3360 | 2.3814 |
| 0.3194 | 281.0 | 3372 | 2.3144 |
| 0.5671 | 282.0 | 3384 | 3.4033 |
| 0.4217 | 283.0 | 3396 | 1.9681 |
| 0.3587 | 284.0 | 3408 | 2.9843 |
| 0.3914 | 285.0 | 3420 | 3.1635 |
| 0.3667 | 286.0 | 3432 | 2.7571 |
| 0.3781 | 287.0 | 3444 | 2.5881 |
| 0.3868 | 288.0 | 3456 | 1.8389 |
| 0.4172 | 289.0 | 3468 | 2.6809 |
| 0.5089 | 290.0 | 3480 | 2.4618 |
| 0.3181 | 291.0 | 3492 | 2.1054 |
| 0.3276 | 292.0 | 3504 | 2.9944 |
| 0.4051 | 293.0 | 3516 | 2.8520 |
| 0.3435 | 294.0 | 3528 | 3.0985 |
| 0.3241 | 295.0 | 3540 | 2.6323 |
| 0.2532 | 296.0 | 3552 | 2.9059 |
| 0.2732 | 297.0 | 3564 | 2.5619 |
| 0.4181 | 298.0 | 3576 | 2.5687 |
| 0.3725 | 299.0 | 3588 | 3.3169 |
| 0.3949 | 300.0 | 3600 | 2.0620 |
| 0.4684 | 301.0 | 3612 | 2.3878 |
| 0.4122 | 302.0 | 3624 | 3.4867 |
| 0.3338 | 303.0 | 3636 | 3.0578 |
| 0.3546 | 304.0 | 3648 | 3.3269 |
| 0.3833 | 305.0 | 3660 | 2.2698 |
| 0.2897 | 306.0 | 3672 | 2.9015 |
| 0.3912 | 307.0 | 3684 | 3.4569 |
| 0.3951 | 308.0 | 3696 | 2.5743 |
| 0.3086 | 309.0 | 3708 | 2.2319 |
| 0.481 | 310.0 | 3720 | 1.7550 |
| 0.3579 | 311.0 | 3732 | 2.4885 |
| 0.4271 | 312.0 | 3744 | 3.2511 |
| 0.3864 | 313.0 | 3756 | 2.4219 |
| 0.3008 | 314.0 | 3768 | 3.2937 |
| 0.3279 | 315.0 | 3780 | 2.9278 |
| 0.3845 | 316.0 | 3792 | 3.7233 |
| 0.3158 | 317.0 | 3804 | 2.1792 |
| 0.3906 | 318.0 | 3816 | 2.3364 |
| 0.3159 | 319.0 | 3828 | 3.7451 |
| 0.2773 | 320.0 | 3840 | 2.6364 |
| 0.2867 | 321.0 | 3852 | 2.6699 |
| 0.3253 | 322.0 | 3864 | 2.7289 |
| 0.4208 | 323.0 | 3876 | 2.5447 |
| 0.4343 | 324.0 | 3888 | 3.1167 |
| 0.3126 | 325.0 | 3900 | 3.4110 |
| 0.2433 | 326.0 | 3912 | 2.1796 |
| 0.2964 | 327.0 | 3924 | 2.1766 |
| 0.4289 | 328.0 | 3936 | 3.5455 |
| 0.3391 | 329.0 | 3948 | 2.5795 |
| 0.3505 | 330.0 | 3960 | 2.3377 |
| 0.4084 | 331.0 | 3972 | 2.9658 |
| 0.4365 | 332.0 | 3984 | 2.5202 |
| 0.3573 | 333.0 | 3996 | 3.2768 |
| 0.2813 | 334.0 | 4008 | 2.7073 |
| 0.2531 | 335.0 | 4020 | 2.3548 |
| 0.2535 | 336.0 | 4032 | 2.8820 |
| 0.3038 | 337.0 | 4044 | 2.6777 |
| 0.2861 | 338.0 | 4056 | 2.8631 |
| 0.2717 | 339.0 | 4068 | 2.7445 |
| 0.3495 | 340.0 | 4080 | 2.9722 |
| 0.2775 | 341.0 | 4092 | 3.1350 |
| 0.3661 | 342.0 | 4104 | 2.7601 |
| 0.348 | 343.0 | 4116 | 2.6642 |
| 0.3556 | 344.0 | 4128 | 1.9807 |
| 0.3072 | 345.0 | 4140 | 2.6037 |
| 0.3114 | 346.0 | 4152 | 2.7645 |
| 0.3527 | 347.0 | 4164 | 2.8360 |
| 0.2903 | 348.0 | 4176 | 2.0667 |
| 0.2449 | 349.0 | 4188 | 2.3573 |
| 0.2089 | 350.0 | 4200 | 2.6189 |
| 0.3894 | 351.0 | 4212 | 2.5689 |
| 0.3061 | 352.0 | 4224 | 2.7638 |
| 0.3221 | 353.0 | 4236 | 2.4668 |
| 0.2434 | 354.0 | 4248 | 2.3994 |
| 0.1777 | 355.0 | 4260 | 2.6408 |
| 0.3809 | 356.0 | 4272 | 2.9841 |
| 0.3237 | 357.0 | 4284 | 2.7111 |
| 0.1947 | 358.0 | 4296 | 3.5881 |
| 0.3112 | 359.0 | 4308 | 3.6076 |
| 0.299 | 360.0 | 4320 | 2.5547 |
| 0.354 | 361.0 | 4332 | 1.9077 |
| 0.2733 | 362.0 | 4344 | 3.1406 |
| 0.4962 | 363.0 | 4356 | 2.3770 |
| 0.3272 | 364.0 | 4368 | 3.0437 |
| 0.2858 | 365.0 | 4380 | 2.7978 |
| 0.3685 | 366.0 | 4392 | 2.3725 |
| 0.2707 | 367.0 | 4404 | 2.4587 |
| 0.3137 | 368.0 | 4416 | 2.1862 |
| 0.2781 | 369.0 | 4428 | 1.8312 |
| 0.2658 | 370.0 | 4440 | 2.4720 |
| 0.3014 | 371.0 | 4452 | 2.3532 |
| 0.24 | 372.0 | 4464 | 3.4097 |
| 0.2413 | 373.0 | 4476 | 3.2338 |
| 0.3055 | 374.0 | 4488 | 3.4269 |
| 0.3781 | 375.0 | 4500 | 2.8758 |
| 0.2224 | 376.0 | 4512 | 2.2171 |
| 0.2463 | 377.0 | 4524 | 3.2768 |
| 0.4141 | 378.0 | 4536 | 2.9136 |
| 0.2102 | 379.0 | 4548 | 2.8798 |
| 0.2164 | 380.0 | 4560 | 2.5821 |
| 0.2742 | 381.0 | 4572 | 2.0458 |
| 0.2007 | 382.0 | 4584 | 3.8119 |
| 0.2494 | 383.0 | 4596 | 3.0835 |
| 0.2533 | 384.0 | 4608 | 2.5633 |
| 0.3137 | 385.0 | 4620 | 2.2415 |
| 0.2686 | 386.0 | 4632 | 2.2489 |
| 0.2425 | 387.0 | 4644 | 2.1750 |
| 0.2561 | 388.0 | 4656 | 2.8167 |
| 0.3485 | 389.0 | 4668 | 3.4358 |
| 0.2746 | 390.0 | 4680 | 2.3380 |
| 0.3538 | 391.0 | 4692 | 2.9940 |
| 0.3989 | 392.0 | 4704 | 2.7560 |
| 0.2414 | 393.0 | 4716 | 3.4802 |
| 0.2888 | 394.0 | 4728 | 2.5955 |
| 0.3162 | 395.0 | 4740 | 2.3060 |
| 0.2435 | 396.0 | 4752 | 3.8333 |
| 0.2796 | 397.0 | 4764 | 2.1767 |
| 0.2588 | 398.0 | 4776 | 2.6988 |
| 0.209 | 399.0 | 4788 | 2.4999 |
| 0.2602 | 400.0 | 4800 | 2.6636 |
| 0.2114 | 401.0 | 4812 | 3.2272 |
| 0.2226 | 402.0 | 4824 | 2.5983 |
| 0.1681 | 403.0 | 4836 | 2.3867 |
| 0.2025 | 404.0 | 4848 | 3.0062 |
| 0.2769 | 405.0 | 4860 | 2.9767 |
| 0.3267 | 406.0 | 4872 | 2.6960 |
| 0.252 | 407.0 | 4884 | 2.6078 |
| 0.257 | 408.0 | 4896 | 2.1594 |
| 0.306 | 409.0 | 4908 | 3.3544 |
| 0.2329 | 410.0 | 4920 | 2.6371 |
| 0.3732 | 411.0 | 4932 | 2.8729 |
| 0.3233 | 412.0 | 4944 | 3.6352 |
| 0.2822 | 413.0 | 4956 | 3.0374 |
| 0.2796 | 414.0 | 4968 | 2.8686 |
| 0.2606 | 415.0 | 4980 | 2.8761 |
| 0.2048 | 416.0 | 4992 | 2.5680 |
| 0.2088 | 417.0 | 5004 | 2.4540 |
| 0.2301 | 418.0 | 5016 | 2.4787 |
| 0.1594 | 419.0 | 5028 | 2.9355 |
| 0.3399 | 420.0 | 5040 | 2.8312 |
| 0.2322 | 421.0 | 5052 | 1.9368 |
| 0.2066 | 422.0 | 5064 | 3.2728 |
| 0.2254 | 423.0 | 5076 | 3.0105 |
| 0.1818 | 424.0 | 5088 | 2.8390 |
| 0.3191 | 425.0 | 5100 | 2.9756 |
| 0.1961 | 426.0 | 5112 | 3.4510 |
| 0.2014 | 427.0 | 5124 | 3.4363 |
| 0.184 | 428.0 | 5136 | 3.1381 |
| 0.2722 | 429.0 | 5148 | 3.4780 |
| 0.2607 | 430.0 | 5160 | 2.9650 |
| 0.3515 | 431.0 | 5172 | 2.8692 |
| 0.2011 | 432.0 | 5184 | 2.7564 |
| 0.2555 | 433.0 | 5196 | 3.5317 |
| 0.2802 | 434.0 | 5208 | 1.9900 |
| 0.227 | 435.0 | 5220 | 3.3691 |
| 0.2833 | 436.0 | 5232 | 3.0117 |
| 0.2368 | 437.0 | 5244 | 2.6631 |
| 0.2159 | 438.0 | 5256 | 2.3868 |
| 0.2139 | 439.0 | 5268 | 2.8382 |
| 0.2739 | 440.0 | 5280 | 2.9267 |
| 0.234 | 441.0 | 5292 | 2.9501 |
| 0.2315 | 442.0 | 5304 | 3.3317 |
| 0.2538 | 443.0 | 5316 | 3.1168 |
| 0.2535 | 444.0 | 5328 | 2.8070 |
| 0.2711 | 445.0 | 5340 | 2.0824 |
| 0.2963 | 446.0 | 5352 | 1.7310 |
| 0.2559 | 447.0 | 5364 | 3.3832 |
| 0.3184 | 448.0 | 5376 | 2.6107 |
| 0.2383 | 449.0 | 5388 | 2.3923 |
| 0.4352 | 450.0 | 5400 | 3.1145 |
| 0.1892 | 451.0 | 5412 | 3.0184 |
| 0.1899 | 452.0 | 5424 | 2.9772 |
| 0.3766 | 453.0 | 5436 | 3.3416 |
| 0.211 | 454.0 | 5448 | 2.9356 |
| 0.2387 | 455.0 | 5460 | 2.5284 |
| 0.2322 | 456.0 | 5472 | 2.8084 |
| 0.2003 | 457.0 | 5484 | 3.0678 |
| 0.2604 | 458.0 | 5496 | 2.4424 |
| 0.2614 | 459.0 | 5508 | 2.6966 |
| 0.2026 | 460.0 | 5520 | 2.7806 |
| 0.4175 | 461.0 | 5532 | 2.9597 |
| 0.1676 | 462.0 | 5544 | 2.8175 |
| 0.2646 | 463.0 | 5556 | 3.1038 |
| 0.2514 | 464.0 | 5568 | 2.2243 |
| 0.1483 | 465.0 | 5580 | 2.6416 |
| 0.233 | 466.0 | 5592 | 3.0405 |
| 0.2788 | 467.0 | 5604 | 2.1676 |
| 0.2339 | 468.0 | 5616 | 3.1575 |
| 0.2735 | 469.0 | 5628 | 1.7335 |
| 0.1639 | 470.0 | 5640 | 2.7019 |
| 0.24 | 471.0 | 5652 | 2.2920 |
| 0.2341 | 472.0 | 5664 | 2.8358 |
| 0.1978 | 473.0 | 5676 | 2.9339 |
| 0.2517 | 474.0 | 5688 | 2.4914 |
| 0.188 | 475.0 | 5700 | 2.2767 |
| 0.1138 | 476.0 | 5712 | 2.3833 |
| 0.1809 | 477.0 | 5724 | 2.6821 |
| 0.3134 | 478.0 | 5736 | 2.1710 |
| 0.1848 | 479.0 | 5748 | 3.3586 |
| 0.252 | 480.0 | 5760 | 2.7309 |
| 0.193 | 481.0 | 5772 | 2.8318 |
| 0.2284 | 482.0 | 5784 | 3.4643 |
| 0.2058 | 483.0 | 5796 | 4.2388 |
| 0.2319 | 484.0 | 5808 | 2.1872 |
| 0.1566 | 485.0 | 5820 | 2.3735 |
| 0.29 | 486.0 | 5832 | 3.4093 |
| 0.125 | 487.0 | 5844 | 3.3786 |
| 0.2628 | 488.0 | 5856 | 2.4406 |
| 0.2609 | 489.0 | 5868 | 3.3617 |
| 0.2055 | 490.0 | 5880 | 3.1843 |
| 0.1713 | 491.0 | 5892 | 2.1698 |
| 0.2562 | 492.0 | 5904 | 3.0665 |
| 0.3366 | 493.0 | 5916 | 3.2277 |
| 0.2359 | 494.0 | 5928 | 2.7013 |
| 0.191 | 495.0 | 5940 | 3.4616 |
| 0.175 | 496.0 | 5952 | 2.5117 |
| 0.1695 | 497.0 | 5964 | 2.3203 |
| 0.218 | 498.0 | 5976 | 2.4493 |
| 0.1953 | 499.0 | 5988 | 2.6769 |
| 0.2478 | 500.0 | 6000 | 3.1759 |
| 0.1548 | 501.0 | 6012 | 2.8604 |
| 0.123 | 502.0 | 6024 | 2.7744 |
| 0.2271 | 503.0 | 6036 | 2.9987 |
| 0.2384 | 504.0 | 6048 | 2.7653 |
| 0.2473 | 505.0 | 6060 | 3.1049 |
| 0.1937 | 506.0 | 6072 | 2.6676 |
| 0.138 | 507.0 | 6084 | 2.2486 |
| 0.2681 | 508.0 | 6096 | 3.1809 |
| 0.2182 | 509.0 | 6108 | 2.5258 |
| 0.1736 | 510.0 | 6120 | 2.2174 |
| 0.2238 | 511.0 | 6132 | 2.9662 |
| 0.189 | 512.0 | 6144 | 2.3124 |
| 0.175 | 513.0 | 6156 | 3.6426 |
| 0.2189 | 514.0 | 6168 | 2.4628 |
| 0.1918 | 515.0 | 6180 | 3.3473 |
| 0.1303 | 516.0 | 6192 | 2.9400 |
| 0.1624 | 517.0 | 6204 | 3.1941 |
| 0.134 | 518.0 | 6216 | 2.9962 |
| 0.2447 | 519.0 | 6228 | 3.0082 |
| 0.1872 | 520.0 | 6240 | 3.9689 |
| 0.1787 | 521.0 | 6252 | 3.1461 |
| 0.3039 | 522.0 | 6264 | 3.2696 |
| 0.1757 | 523.0 | 6276 | 3.0340 |
| 0.3539 | 524.0 | 6288 | 3.3542 |
| 0.2109 | 525.0 | 6300 | 2.7986 |
| 0.1743 | 526.0 | 6312 | 3.1874 |
| 0.1065 | 527.0 | 6324 | 2.9643 |
| 0.2941 | 528.0 | 6336 | 2.6260 |
| 0.2231 | 529.0 | 6348 | 2.8250 |
| 0.1307 | 530.0 | 6360 | 3.2949 |
| 0.1979 | 531.0 | 6372 | 1.8269 |
| 0.2293 | 532.0 | 6384 | 2.2357 |
| 0.2171 | 533.0 | 6396 | 2.5498 |
| 0.1975 | 534.0 | 6408 | 2.7011 |
| 0.1556 | 535.0 | 6420 | 3.5648 |
| 0.1234 | 536.0 | 6432 | 2.7632 |
| 0.2156 | 537.0 | 6444 | 2.3060 |
| 0.1402 | 538.0 | 6456 | 3.1421 |
| 0.1921 | 539.0 | 6468 | 2.3200 |
| 0.1237 | 540.0 | 6480 | 2.7612 |
| 0.1942 | 541.0 | 6492 | 2.5866 |
| 0.1648 | 542.0 | 6504 | 2.4930 |
| 0.1369 | 543.0 | 6516 | 2.9427 |
| 0.1811 | 544.0 | 6528 | 2.9692 |
| 0.2382 | 545.0 | 6540 | 3.4092 |
| 0.2001 | 546.0 | 6552 | 3.2784 |
| 0.2195 | 547.0 | 6564 | 2.8198 |
| 0.1785 | 548.0 | 6576 | 2.5721 |
| 0.2214 | 549.0 | 6588 | 3.1468 |
| 0.1685 | 550.0 | 6600 | 2.8141 |
| 0.1596 | 551.0 | 6612 | 3.1457 |
| 0.0945 | 552.0 | 6624 | 2.6508 |
| 0.1595 | 553.0 | 6636 | 2.8443 |
| 0.1805 | 554.0 | 6648 | 2.4984 |
| 0.1588 | 555.0 | 6660 | 2.9758 |
| 0.2026 | 556.0 | 6672 | 3.3614 |
| 0.1351 | 557.0 | 6684 | 2.5065 |
| 0.2395 | 558.0 | 6696 | 2.5261 |
| 0.2089 | 559.0 | 6708 | 3.3972 |
| 0.2265 | 560.0 | 6720 | 3.0095 |
| 0.2027 | 561.0 | 6732 | 3.2904 |
| 0.2691 | 562.0 | 6744 | 2.5727 |
| 0.1563 | 563.0 | 6756 | 2.0994 |
| 0.2537 | 564.0 | 6768 | 3.2397 |
| 0.1094 | 565.0 | 6780 | 2.9758 |
| 0.1523 | 566.0 | 6792 | 2.3577 |
| 0.2535 | 567.0 | 6804 | 2.6197 |
| 0.1444 | 568.0 | 6816 | 1.9130 |
| 0.1933 | 569.0 | 6828 | 2.3576 |
| 0.1368 | 570.0 | 6840 | 3.3412 |
| 0.1723 | 571.0 | 6852 | 3.5156 |
| 0.1384 | 572.0 | 6864 | 2.9785 |
| 0.1905 | 573.0 | 6876 | 3.2326 |
| 0.1495 | 574.0 | 6888 | 2.9111 |
| 0.1512 | 575.0 | 6900 | 2.1727 |
| 0.227 | 576.0 | 6912 | 2.5159 |
| 0.2271 | 577.0 | 6924 | 2.7866 |
| 0.2457 | 578.0 | 6936 | 3.2068 |
| 0.236 | 579.0 | 6948 | 2.8856 |
| 0.1579 | 580.0 | 6960 | 2.3365 |
| 0.1203 | 581.0 | 6972 | 2.3652 |
| 0.1422 | 582.0 | 6984 | 2.8213 |
| 0.1673 | 583.0 | 6996 | 2.5507 |
| 0.204 | 584.0 | 7008 | 4.0226 |
| 0.1796 | 585.0 | 7020 | 3.1953 |
| 0.163 | 586.0 | 7032 | 2.5787 |
| 0.2166 | 587.0 | 7044 | 3.8404 |
| 0.1299 | 588.0 | 7056 | 2.3668 |
| 0.2301 | 589.0 | 7068 | 2.7562 |
| 0.1506 | 590.0 | 7080 | 2.9342 |
| 0.1372 | 591.0 | 7092 | 2.8316 |
| 0.1959 | 592.0 | 7104 | 2.2761 |
| 0.1925 | 593.0 | 7116 | 2.9083 |
| 0.1885 | 594.0 | 7128 | 2.9052 |
| 0.2052 | 595.0 | 7140 | 2.9409 |
| 0.1368 | 596.0 | 7152 | 3.2571 |
| 0.1455 | 597.0 | 7164 | 2.8765 |
| 0.1398 | 598.0 | 7176 | 2.2425 |
| 0.1764 | 599.0 | 7188 | 2.6299 |
| 0.1791 | 600.0 | 7200 | 3.4030 |
| 0.1057 | 601.0 | 7212 | 3.2505 |
| 0.1947 | 602.0 | 7224 | 2.6440 |
| 0.1678 | 603.0 | 7236 | 3.3419 |
| 0.1629 | 604.0 | 7248 | 3.1957 |
| 0.1348 | 605.0 | 7260 | 3.1234 |
| 0.2332 | 606.0 | 7272 | 2.9425 |
| 0.1367 | 607.0 | 7284 | 3.8721 |
| 0.1434 | 608.0 | 7296 | 3.0653 |
| 0.2092 | 609.0 | 7308 | 3.1552 |
| 0.1765 | 610.0 | 7320 | 2.6715 |
| 0.1773 | 611.0 | 7332 | 2.8437 |
| 0.1427 | 612.0 | 7344 | 3.1257 |
| 0.2383 | 613.0 | 7356 | 3.5687 |
| 0.1376 | 614.0 | 7368 | 3.0010 |
| 0.1388 | 615.0 | 7380 | 2.7436 |
| 0.2484 | 616.0 | 7392 | 3.2465 |
| 0.146 | 617.0 | 7404 | 3.4019 |
| 0.1313 | 618.0 | 7416 | 2.5044 |
| 0.2028 | 619.0 | 7428 | 3.2449 |
| 0.1471 | 620.0 | 7440 | 3.1716 |
| 0.1755 | 621.0 | 7452 | 2.4465 |
| 0.16 | 622.0 | 7464 | 2.8572 |
| 0.108 | 623.0 | 7476 | 3.4424 |
| 0.0824 | 624.0 | 7488 | 2.6112 |
| 0.1133 | 625.0 | 7500 | 2.5730 |
| 0.1809 | 626.0 | 7512 | 1.9670 |
| 0.2606 | 627.0 | 7524 | 2.7736 |
| 0.2001 | 628.0 | 7536 | 3.1865 |
| 0.1912 | 629.0 | 7548 | 2.9717 |
| 0.1525 | 630.0 | 7560 | 2.8429 |
| 0.306 | 631.0 | 7572 | 2.6320 |
| 0.1322 | 632.0 | 7584 | 2.8373 |
| 0.1782 | 633.0 | 7596 | 2.7157 |
| 0.095 | 634.0 | 7608 | 3.2528 |
| 0.1463 | 635.0 | 7620 | 2.6568 |
| 0.184 | 636.0 | 7632 | 2.2466 |
| 0.2132 | 637.0 | 7644 | 3.4883 |
| 0.1007 | 638.0 | 7656 | 3.1021 |
| 0.1686 | 639.0 | 7668 | 2.4326 |
| 0.1359 | 640.0 | 7680 | 2.2554 |
| 0.1535 | 641.0 | 7692 | 2.8495 |
| 0.2158 | 642.0 | 7704 | 3.0866 |
| 0.1403 | 643.0 | 7716 | 2.8983 |
| 0.1092 | 644.0 | 7728 | 3.5183 |
| 0.2218 | 645.0 | 7740 | 2.9190 |
| 0.1468 | 646.0 | 7752 | 3.7689 |
| 0.2291 | 647.0 | 7764 | 3.4550 |
| 0.1616 | 648.0 | 7776 | 2.3301 |
| 0.2146 | 649.0 | 7788 | 4.2045 |
| 0.1113 | 650.0 | 7800 | 3.0168 |
| 0.1785 | 651.0 | 7812 | 2.9931 |
| 0.1535 | 652.0 | 7824 | 3.4046 |
| 0.149 | 653.0 | 7836 | 2.5526 |
| 0.1351 | 654.0 | 7848 | 2.1684 |
| 0.2564 | 655.0 | 7860 | 3.0749 |
| 0.0749 | 656.0 | 7872 | 2.8874 |
| 0.1719 | 657.0 | 7884 | 3.1585 |
| 0.1783 | 658.0 | 7896 | 4.2177 |
| 0.1632 | 659.0 | 7908 | 2.5370 |
| 0.1635 | 660.0 | 7920 | 2.7765 |
| 0.1414 | 661.0 | 7932 | 4.3148 |
| 0.2072 | 662.0 | 7944 | 3.1080 |
| 0.3758 | 663.0 | 7956 | 2.7835 |
| 0.1474 | 664.0 | 7968 | 2.7685 |
| 0.2225 | 665.0 | 7980 | 2.2965 |
| 0.2438 | 666.0 | 7992 | 2.8599 |
| 0.1872 | 667.0 | 8004 | 2.7234 |
| 0.2879 | 668.0 | 8016 | 3.1187 |
| 0.1117 | 669.0 | 8028 | 3.8094 |
| 0.0942 | 670.0 | 8040 | 4.4307 |
| 0.1219 | 671.0 | 8052 | 2.6304 |
| 0.1234 | 672.0 | 8064 | 3.0443 |
| 0.1221 | 673.0 | 8076 | 3.3849 |
| 0.1317 | 674.0 | 8088 | 2.5523 |
| 0.1091 | 675.0 | 8100 | 2.6704 |
| 0.1677 | 676.0 | 8112 | 3.3960 |
| 0.124 | 677.0 | 8124 | 2.1910 |
| 0.1508 | 678.0 | 8136 | 2.5585 |
| 0.1277 | 679.0 | 8148 | 3.2449 |
| 0.1208 | 680.0 | 8160 | 3.0315 |
| 0.1796 | 681.0 | 8172 | 2.3906 |
| 0.2055 | 682.0 | 8184 | 2.8063 |
| 0.1042 | 683.0 | 8196 | 2.7491 |
| 0.1897 | 684.0 | 8208 | 2.9381 |
| 0.138 | 685.0 | 8220 | 2.8710 |
| 0.1562 | 686.0 | 8232 | 1.9945 |
| 0.1091 | 687.0 | 8244 | 2.7079 |
| 0.1616 | 688.0 | 8256 | 3.3086 |
| 0.1699 | 689.0 | 8268 | 3.0746 |
| 0.2412 | 690.0 | 8280 | 2.2330 |
| 0.157 | 691.0 | 8292 | 3.0135 |
| 0.1263 | 692.0 | 8304 | 3.1212 |
| 0.1375 | 693.0 | 8316 | 1.8782 |
| 0.1204 | 694.0 | 8328 | 2.9291 |
| 0.1829 | 695.0 | 8340 | 2.5690 |
| 0.1539 | 696.0 | 8352 | 2.5749 |
| 0.1339 | 697.0 | 8364 | 3.0899 |
| 0.1463 | 698.0 | 8376 | 2.5024 |
| 0.1767 | 699.0 | 8388 | 2.5890 |
| 0.1392 | 700.0 | 8400 | 1.6672 |
| 0.1354 | 701.0 | 8412 | 3.1415 |
| 0.1467 | 702.0 | 8424 | 3.1370 |
| 0.2547 | 703.0 | 8436 | 2.5094 |
| 0.1116 | 704.0 | 8448 | 2.2467 |
| 0.0987 | 705.0 | 8460 | 3.2307 |
| 0.1811 | 706.0 | 8472 | 2.7363 |
| 0.1252 | 707.0 | 8484 | 2.4490 |
| 0.1613 | 708.0 | 8496 | 2.3867 |
| 0.2282 | 709.0 | 8508 | 3.0490 |
| 0.1651 | 710.0 | 8520 | 3.1520 |
| 0.1016 | 711.0 | 8532 | 2.7703 |
| 0.2515 | 712.0 | 8544 | 2.4811 |
| 0.1014 | 713.0 | 8556 | 3.7300 |
| 0.103 | 714.0 | 8568 | 2.8680 |
| 0.1714 | 715.0 | 8580 | 3.8285 |
| 0.1638 | 716.0 | 8592 | 2.5344 |
| 0.14 | 717.0 | 8604 | 3.8581 |
| 0.1202 | 718.0 | 8616 | 2.4095 |
| 0.0691 | 719.0 | 8628 | 2.9710 |
| 0.1176 | 720.0 | 8640 | 3.0506 |
| 0.2005 | 721.0 | 8652 | 2.7418 |
| 0.1719 | 722.0 | 8664 | 2.7388 |
| 0.1509 | 723.0 | 8676 | 2.5713 |
| 0.1113 | 724.0 | 8688 | 2.9053 |
| 0.2501 | 725.0 | 8700 | 2.7703 |
| 0.1192 | 726.0 | 8712 | 3.5875 |
| 0.1619 | 727.0 | 8724 | 3.0704 |
| 0.1421 | 728.0 | 8736 | 2.5629 |
| 0.164 | 729.0 | 8748 | 2.4980 |
| 0.1753 | 730.0 | 8760 | 2.7749 |
| 0.159 | 731.0 | 8772 | 3.8322 |
| 0.1929 | 732.0 | 8784 | 3.1355 |
| 0.088 | 733.0 | 8796 | 2.3649 |
| 0.1349 | 734.0 | 8808 | 2.2229 |
| 0.1093 | 735.0 | 8820 | 2.4979 |
| 0.1338 | 736.0 | 8832 | 3.2253 |
| 0.1794 | 737.0 | 8844 | 2.9326 |
| 0.0948 | 738.0 | 8856 | 2.9917 |
| 0.1341 | 739.0 | 8868 | 3.6675 |
| 0.1019 | 740.0 | 8880 | 3.4145 |
| 0.1265 | 741.0 | 8892 | 2.4996 |
| 0.1688 | 742.0 | 8904 | 2.9395 |
| 0.0829 | 743.0 | 8916 | 3.5850 |
| 0.0993 | 744.0 | 8928 | 3.2900 |
| 0.2241 | 745.0 | 8940 | 3.2025 |
| 0.1235 | 746.0 | 8952 | 2.2814 |
| 0.0937 | 747.0 | 8964 | 3.3185 |
| 0.0936 | 748.0 | 8976 | 3.4046 |
| 0.1633 | 749.0 | 8988 | 2.9694 |
| 0.1328 | 750.0 | 9000 | 3.2772 |
| 0.1168 | 751.0 | 9012 | 2.7732 |
| 0.2409 | 752.0 | 9024 | 3.3763 |
| 0.1145 | 753.0 | 9036 | 2.7232 |
| 0.1384 | 754.0 | 9048 | 3.5289 |
| 0.1326 | 755.0 | 9060 | 3.1250 |
| 0.1124 | 756.0 | 9072 | 3.2928 |
| 0.1197 | 757.0 | 9084 | 2.7365 |
| 0.1359 | 758.0 | 9096 | 2.3043 |
| 0.1031 | 759.0 | 9108 | 2.6293 |
| 0.1434 | 760.0 | 9120 | 2.7771 |
| 0.1009 | 761.0 | 9132 | 2.9574 |
| 0.1217 | 762.0 | 9144 | 3.5124 |
| 0.1017 | 763.0 | 9156 | 3.5922 |
| 0.1236 | 764.0 | 9168 | 2.2188 |
| 0.1174 | 765.0 | 9180 | 2.9054 |
| 0.1797 | 766.0 | 9192 | 2.5098 |
| 0.0971 | 767.0 | 9204 | 2.2203 |
| 0.1043 | 768.0 | 9216 | 2.8536 |
| 0.1464 | 769.0 | 9228 | 2.6191 |
| 0.195 | 770.0 | 9240 | 2.2198 |
| 0.1603 | 771.0 | 9252 | 2.8702 |
| 0.1514 | 772.0 | 9264 | 2.6832 |
| 0.1363 | 773.0 | 9276 | 3.0211 |
| 0.1263 | 774.0 | 9288 | 2.4905 |
| 0.1048 | 775.0 | 9300 | 3.0469 |
| 0.1175 | 776.0 | 9312 | 3.0265 |
| 0.1595 | 777.0 | 9324 | 2.1823 |
| 0.1243 | 778.0 | 9336 | 2.5649 |
| 0.1825 | 779.0 | 9348 | 2.8523 |
| 0.1697 | 780.0 | 9360 | 3.3646 |
| 0.1228 | 781.0 | 9372 | 2.2108 |
| 0.0893 | 782.0 | 9384 | 3.4784 |
| 0.1361 | 783.0 | 9396 | 3.4523 |
| 0.0953 | 784.0 | 9408 | 2.5469 |
| 0.1732 | 785.0 | 9420 | 3.2701 |
| 0.113 | 786.0 | 9432 | 3.4206 |
| 0.1303 | 787.0 | 9444 | 2.7898 |
| 0.2207 | 788.0 | 9456 | 3.4153 |
| 0.1762 | 789.0 | 9468 | 3.4267 |
| 0.1293 | 790.0 | 9480 | 3.6637 |
| 0.0805 | 791.0 | 9492 | 3.1007 |
| 0.2172 | 792.0 | 9504 | 2.6548 |
| 0.0886 | 793.0 | 9516 | 2.5632 |
| 0.2214 | 794.0 | 9528 | 2.8648 |
| 0.1454 | 795.0 | 9540 | 2.2529 |
| 0.1623 | 796.0 | 9552 | 2.5046 |
| 0.1443 | 797.0 | 9564 | 3.6918 |
| 0.0777 | 798.0 | 9576 | 2.4575 |
| 0.1109 | 799.0 | 9588 | 2.5164 |
| 0.1228 | 800.0 | 9600 | 3.0721 |
| 0.0774 | 801.0 | 9612 | 3.3021 |
| 0.1239 | 802.0 | 9624 | 2.8039 |
| 0.1633 | 803.0 | 9636 | 3.9218 |
| 0.1562 | 804.0 | 9648 | 2.2741 |
| 0.1398 | 805.0 | 9660 | 2.3857 |
| 0.0827 | 806.0 | 9672 | 3.8789 |
| 0.1041 | 807.0 | 9684 | 3.1660 |
| 0.1345 | 808.0 | 9696 | 2.6615 |
| 0.0964 | 809.0 | 9708 | 3.8610 |
| 0.0705 | 810.0 | 9720 | 2.6085 |
| 0.1286 | 811.0 | 9732 | 2.8976 |
| 0.1319 | 812.0 | 9744 | 3.0883 |
| 0.2169 | 813.0 | 9756 | 3.1248 |
| 0.1585 | 814.0 | 9768 | 3.5880 |
| 0.1412 | 815.0 | 9780 | 4.2307 |
| 0.1665 | 816.0 | 9792 | 2.5049 |
| 0.1138 | 817.0 | 9804 | 3.0581 |
| 0.1329 | 818.0 | 9816 | 2.6806 |
| 0.1029 | 819.0 | 9828 | 2.6299 |
| 0.0967 | 820.0 | 9840 | 3.4191 |
| 0.1269 | 821.0 | 9852 | 3.8664 |
| 0.1122 | 822.0 | 9864 | 2.9701 |
| 0.108 | 823.0 | 9876 | 3.2608 |
| 0.1038 | 824.0 | 9888 | 2.9620 |
| 0.1599 | 825.0 | 9900 | 2.8607 |
| 0.2117 | 826.0 | 9912 | 3.1970 |
| 0.1121 | 827.0 | 9924 | 3.7504 |
| 0.131 | 828.0 | 9936 | 3.8170 |
| 0.1627 | 829.0 | 9948 | 3.9556 |
| 0.1504 | 830.0 | 9960 | 3.0378 |
| 0.1334 | 831.0 | 9972 | 2.9688 |
| 0.148 | 832.0 | 9984 | 3.6264 |
| 0.0931 | 833.0 | 9996 | 3.1000 |
| 0.1124 | 834.0 | 10008 | 2.2768 |
| 0.0716 | 835.0 | 10020 | 2.5006 |
| 0.1948 | 836.0 | 10032 | 3.6966 |
| 0.1199 | 837.0 | 10044 | 2.8248 |
| 0.1664 | 838.0 | 10056 | 3.4134 |
| 0.1269 | 839.0 | 10068 | 2.6959 |
| 0.1033 | 840.0 | 10080 | 3.1595 |
| 0.1494 | 841.0 | 10092 | 3.2611 |
| 0.1642 | 842.0 | 10104 | 2.7121 |
| 0.145 | 843.0 | 10116 | 2.8543 |
| 0.0995 | 844.0 | 10128 | 3.2522 |
| 0.098 | 845.0 | 10140 | 2.1804 |
| 0.1257 | 846.0 | 10152 | 2.6450 |
| 0.0715 | 847.0 | 10164 | 2.6534 |
| 0.1559 | 848.0 | 10176 | 2.1307 |
| 0.1551 | 849.0 | 10188 | 2.5103 |
| 0.1052 | 850.0 | 10200 | 3.7062 |
| 0.0932 | 851.0 | 10212 | 3.3476 |
| 0.0832 | 852.0 | 10224 | 2.4707 |
| 0.1666 | 853.0 | 10236 | 3.2024 |
| 0.1273 | 854.0 | 10248 | 2.5906 |
| 0.163 | 855.0 | 10260 | 3.0574 |
| 0.1309 | 856.0 | 10272 | 2.5865 |
| 0.2476 | 857.0 | 10284 | 3.3188 |
| 0.1191 | 858.0 | 10296 | 2.5695 |
| 0.1548 | 859.0 | 10308 | 3.6313 |
| 0.1599 | 860.0 | 10320 | 2.8832 |
| 0.128 | 861.0 | 10332 | 2.4891 |
| 0.1391 | 862.0 | 10344 | 3.1289 |
| 0.138 | 863.0 | 10356 | 2.6089 |
| 0.0706 | 864.0 | 10368 | 3.0440 |
| 0.1128 | 865.0 | 10380 | 3.6210 |
| 0.2152 | 866.0 | 10392 | 3.2759 |
| 0.2337 | 867.0 | 10404 | 3.1451 |
| 0.1473 | 868.0 | 10416 | 3.5721 |
| 0.1346 | 869.0 | 10428 | 3.0452 |
| 0.1074 | 870.0 | 10440 | 2.7138 |
| 0.095 | 871.0 | 10452 | 2.6684 |
| 0.0699 | 872.0 | 10464 | 3.2899 |
| 0.1326 | 873.0 | 10476 | 3.5183 |
| 0.1523 | 874.0 | 10488 | 2.1549 |
| 0.1067 | 875.0 | 10500 | 2.3682 |
| 0.125 | 876.0 | 10512 | 2.7431 |
| 0.1797 | 877.0 | 10524 | 2.5871 |
| 0.1442 | 878.0 | 10536 | 3.8328 |
| 0.136 | 879.0 | 10548 | 2.3259 |
| 0.1459 | 880.0 | 10560 | 2.7320 |
| 0.0617 | 881.0 | 10572 | 3.1303 |
| 0.1419 | 882.0 | 10584 | 3.2222 |
| 0.0673 | 883.0 | 10596 | 2.7638 |
| 0.0978 | 884.0 | 10608 | 3.5383 |
| 0.0737 | 885.0 | 10620 | 3.8811 |
| 0.0948 | 886.0 | 10632 | 3.8811 |
| 0.1158 | 887.0 | 10644 | 3.2247 |
| 0.1497 | 888.0 | 10656 | 2.5282 |
| 0.1488 | 889.0 | 10668 | 3.2183 |
| 0.1361 | 890.0 | 10680 | 3.0011 |
| 0.1536 | 891.0 | 10692 | 2.8193 |
| 0.1509 | 892.0 | 10704 | 3.2418 |
| 0.0663 | 893.0 | 10716 | 2.6955 |
| 0.0954 | 894.0 | 10728 | 3.6407 |
| 0.1257 | 895.0 | 10740 | 3.0466 |
| 0.1293 | 896.0 | 10752 | 3.4879 |
| 0.1682 | 897.0 | 10764 | 3.0975 |
| 0.1427 | 898.0 | 10776 | 2.7423 |
| 0.1332 | 899.0 | 10788 | 3.3520 |
| 0.1368 | 900.0 | 10800 | 3.1909 |
| 0.1633 | 901.0 | 10812 | 3.5312 |
| 0.193 | 902.0 | 10824 | 2.9027 |
| 0.1169 | 903.0 | 10836 | 3.2119 |
| 0.0856 | 904.0 | 10848 | 2.6224 |
| 0.1507 | 905.0 | 10860 | 3.4485 |
| 0.1663 | 906.0 | 10872 | 3.7079 |
| 0.1162 | 907.0 | 10884 | 2.4238 |
| 0.1162 | 908.0 | 10896 | 2.7136 |
| 0.1181 | 909.0 | 10908 | 3.2237 |
| 0.1468 | 910.0 | 10920 | 2.9780 |
| 0.0959 | 911.0 | 10932 | 3.1877 |
| 0.1162 | 912.0 | 10944 | 2.1530 |
| 0.1245 | 913.0 | 10956 | 3.4275 |
| 0.1524 | 914.0 | 10968 | 2.9887 |
| 0.1487 | 915.0 | 10980 | 3.5492 |
| 0.1189 | 916.0 | 10992 | 3.7000 |
| 0.1104 | 917.0 | 11004 | 3.1991 |
| 0.1339 | 918.0 | 11016 | 3.3229 |
| 0.1239 | 919.0 | 11028 | 3.5813 |
| 0.1234 | 920.0 | 11040 | 2.6298 |
| 0.1115 | 921.0 | 11052 | 3.1678 |
| 0.097 | 922.0 | 11064 | 3.5488 |
| 0.1599 | 923.0 | 11076 | 2.1364 |
| 0.0864 | 924.0 | 11088 | 3.0174 |
| 0.2064 | 925.0 | 11100 | 3.3537 |
| 0.1389 | 926.0 | 11112 | 3.1944 |
| 0.1285 | 927.0 | 11124 | 2.5938 |
| 0.099 | 928.0 | 11136 | 2.9489 |
| 0.1544 | 929.0 | 11148 | 3.1323 |
| 0.0943 | 930.0 | 11160 | 3.0074 |
| 0.1343 | 931.0 | 11172 | 3.0724 |
| 0.0937 | 932.0 | 11184 | 2.5755 |
| 0.0631 | 933.0 | 11196 | 2.4738 |
| 0.1373 | 934.0 | 11208 | 2.8831 |
| 0.1043 | 935.0 | 11220 | 1.9059 |
| 0.0825 | 936.0 | 11232 | 2.8366 |
| 0.1619 | 937.0 | 11244 | 2.5491 |
| 0.0906 | 938.0 | 11256 | 2.5668 |
| 0.0479 | 939.0 | 11268 | 3.0457 |
| 0.1427 | 940.0 | 11280 | 4.0130 |
| 0.1058 | 941.0 | 11292 | 3.5801 |
| 0.1359 | 942.0 | 11304 | 2.2584 |
| 0.1117 | 943.0 | 11316 | 2.6767 |
| 0.1341 | 944.0 | 11328 | 3.2212 |
| 0.1866 | 945.0 | 11340 | 2.9726 |
| 0.1355 | 946.0 | 11352 | 3.1199 |
| 0.143 | 947.0 | 11364 | 2.7948 |
| 0.237 | 948.0 | 11376 | 3.2464 |
| 0.1206 | 949.0 | 11388 | 3.4582 |
| 0.2615 | 950.0 | 11400 | 2.1646 |
| 0.1631 | 951.0 | 11412 | 2.5108 |
| 0.158 | 952.0 | 11424 | 3.4831 |
| 0.1103 | 953.0 | 11436 | 2.3143 |
| 0.1942 | 954.0 | 11448 | 2.8638 |
| 0.1049 | 955.0 | 11460 | 3.3910 |
| 0.1635 | 956.0 | 11472 | 3.4069 |
| 0.0989 | 957.0 | 11484 | 2.7670 |
| 0.071 | 958.0 | 11496 | 3.6908 |
| 0.1326 | 959.0 | 11508 | 3.0617 |
| 0.1352 | 960.0 | 11520 | 2.4996 |
| 0.1155 | 961.0 | 11532 | 2.3456 |
| 0.1407 | 962.0 | 11544 | 3.1657 |
| 0.1622 | 963.0 | 11556 | 3.2390 |
| 0.0628 | 964.0 | 11568 | 2.4668 |
| 0.1201 | 965.0 | 11580 | 2.8448 |
| 0.1387 | 966.0 | 11592 | 2.9089 |
| 0.1103 | 967.0 | 11604 | 2.8493 |
| 0.0735 | 968.0 | 11616 | 2.5433 |
| 0.093 | 969.0 | 11628 | 3.0329 |
| 0.3551 | 970.0 | 11640 | 3.3447 |
| 0.1849 | 971.0 | 11652 | 4.2088 |
| 0.1257 | 972.0 | 11664 | 3.1439 |
| 0.0764 | 973.0 | 11676 | 3.4356 |
| 0.1678 | 974.0 | 11688 | 3.1160 |
| 0.1093 | 975.0 | 11700 | 2.7974 |
| 0.0811 | 976.0 | 11712 | 2.6031 |
| 0.0878 | 977.0 | 11724 | 2.6731 |
| 0.1478 | 978.0 | 11736 | 2.5262 |
| 0.0933 | 979.0 | 11748 | 2.9120 |
| 0.0846 | 980.0 | 11760 | 3.2794 |
| 0.1063 | 981.0 | 11772 | 2.9906 |
| 0.0907 | 982.0 | 11784 | 2.6891 |
| 0.1747 | 983.0 | 11796 | 3.6264 |
| 0.1611 | 984.0 | 11808 | 3.2517 |
| 0.1171 | 985.0 | 11820 | 2.6785 |
| 0.1323 | 986.0 | 11832 | 3.4850 |
| 0.0758 | 987.0 | 11844 | 3.6252 |
| 0.0713 | 988.0 | 11856 | 3.2538 |
| 0.0594 | 989.0 | 11868 | 2.5900 |
| 0.1958 | 990.0 | 11880 | 2.4104 |
| 0.1328 | 991.0 | 11892 | 3.8045 |
| 0.1006 | 992.0 | 11904 | 3.5627 |
| 0.0969 | 993.0 | 11916 | 2.5848 |
| 0.1363 | 994.0 | 11928 | 2.8333 |
| 0.1455 | 995.0 | 11940 | 2.3381 |
| 0.0774 | 996.0 | 11952 | 2.6104 |
| 0.1001 | 997.0 | 11964 | 3.5031 |
| 0.0956 | 998.0 | 11976 | 2.7140 |
| 0.1094 | 999.0 | 11988 | 3.1090 |
| 0.1129 | 1000.0 | 12000 | 2.6911 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RajkNakka/ppo-LunarLander-v2-unit-8 | RajkNakka | 2023-07-08T20:48:52Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:55:27Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 7.16 +/- 73.94
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
Huggingfly/ppo-PyramidsTraining | Huggingfly | 2023-07-08T20:45:21Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-08T20:45:16Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
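To resume training (or inspect the agent) locally, you first need the trained files from this repository. A minimal sketch, assuming the Hugging Face Hub integration for ML-Agents is installed, which provides the `mlagents-load-from-hf` helper (the download directory is illustrative):
```bash
# Download this repository's trained files into ./downloads (path is illustrative)
mlagents-load-from-hf --repo-id="Huggingfly/ppo-PyramidsTraining" --local-dir="./downloads"
```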
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Huggingfly/ppo-PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
skrl/IsaacGymEnvs-Ingenuity-PPO | skrl | 2023-07-08T20:24:38Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:44:57Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 7162.47 +/- 555.5
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Ingenuity
type: IsaacGymEnvs-Ingenuity
---
<!-- ---
torch: 7018.19 +/- 508.68
jax: 7041.64 +/- 297.51
numpy: 7162.47 +/- 555.5
--- -->
# IsaacGymEnvs-Ingenuity-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Ingenuity
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ingenuity-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ingenuity-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 4 # 16 * 4096 / 16384
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-3
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.016}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
mlabonne/gpt2-GPTQ-4bit | mlabonne | 2023-07-08T20:09:26Z | 18 | 0 | transformers | [
"transformers",
"gpt2",
"text-generation",
"AutoGPTQ",
"4bit",
"GPTQ",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-11T17:30:12Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- AutoGPTQ
- 4bit
- GPTQ
---
Model created using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) on a [GPT-2](https://huggingface.co/gpt2) model with 4-bit quantization.
You can load this model with the AutoGPTQ library, installed with the following command:
```
pip install auto-gptq
```
You can then download and load the model from the Hub using the following code:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name = "mlabonne/gpt2-GPTQ-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
quantize_config = BaseQuantizeConfig.from_pretrained(model_name)
model = AutoGPTQForCausalLM.from_quantized(model_name,
model_basename="gptq_model-4bit-128g",
device="cuda:0",
use_triton=True,
use_safetensors=True,
quantize_config=quantize_config)
```
This model works with the traditional [Text Generation pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextGenerationPipeline).
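A minimal sketch of such a pipeline, assuming the quantized model and tokenizer loaded above can be passed to it directly (the prompt and generation length are illustrative):

```python
from transformers import pipeline

# Wrap the quantized model and tokenizer in a standard text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample a short continuation of an illustrative prompt
print(generator("I have a dream", max_length=50, do_sample=True)[0]["generated_text"])
```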
Example of generation with the input text "I have a dream":
```
I have a dream. I want someone with my face, and what I have. I want to go home. I want to be alive. I want to see my children. I dream if I have the spirit, my body, my voice,
``` |
Word2vec/wikipedia2vec_enwiki_20180420_300d | Word2vec | 2023-07-08T20:03:03Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T15:26:11Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_300d", filename="enwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
fobt/speecht5_finetuned_voxpopuli_nl | fobt | 2023-07-08T19:59:00Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-08T17:41:08Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4585
## Model description
More information needed
## Intended uses & limitations
More information needed
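Even without further documentation, the checkpoint can be exercised like the base SpeechT5 TTS model. A minimal sketch, assuming the standard `transformers` SpeechT5 API and that the processor files were saved with this checkpoint (otherwise load the processor from `microsoft/speecht5_tts`); the all-zero speaker embedding is only a placeholder (a real 512-dimensional x-vector gives far more natural speech), and the Dutch sentence is illustrative:

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned acoustic model and the pretrained HiFi-GAN vocoder
processor = SpeechT5Processor.from_pretrained("fobt/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("fobt/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Tokenize an illustrative Dutch sentence
inputs = processor(text="Goedemorgen, dit is een test.", return_tensors="pt")

# Placeholder speaker embedding; replace with a real x-vector for natural-sounding speech
speaker_embeddings = torch.zeros(1, 512)

# Generate a 16 kHz waveform
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```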
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5237 | 4.3 | 1000 | 0.4782 |
| 0.4946 | 8.61 | 2000 | 0.4639 |
| 0.493 | 12.91 | 3000 | 0.4608 |
| 0.4903 | 17.21 | 4000 | 0.4585 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
snousias/bert-base-greek-uncased-v1-finetuned-polylex | snousias | 2023-07-08T19:50:38Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T19:48:32Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-greek-uncased-v1-finetuned-polylex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v1-finetuned-polylex
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1637 | 1.0 | 12 | 2.6649 |
| 3.0581 | 2.0 | 24 | 2.5475 |
| 2.648 | 3.0 | 36 | 2.1624 |
| 2.5983 | 4.0 | 48 | 2.3285 |
| 2.7524 | 5.0 | 60 | 2.5745 |
| 2.4923 | 6.0 | 72 | 2.8096 |
| 2.5336 | 7.0 | 84 | 2.9470 |
| 2.3271 | 8.0 | 96 | 2.5497 |
| 2.4018 | 9.0 | 108 | 2.3413 |
| 2.544 | 10.0 | 120 | 2.4170 |
| 1.9144 | 11.0 | 132 | 2.5254 |
| 2.0996 | 12.0 | 144 | 2.4147 |
| 1.8733 | 13.0 | 156 | 2.5462 |
| 1.8261 | 14.0 | 168 | 2.2045 |
| 2.0033 | 15.0 | 180 | 1.9549 |
| 1.9967 | 16.0 | 192 | 2.1614 |
| 1.8515 | 17.0 | 204 | 2.8167 |
| 1.8583 | 18.0 | 216 | 2.8441 |
| 1.7512 | 19.0 | 228 | 2.4536 |
| 1.5746 | 20.0 | 240 | 2.6204 |
| 1.5267 | 21.0 | 252 | 2.9290 |
| 1.7248 | 22.0 | 264 | 2.0433 |
| 1.5692 | 23.0 | 276 | 2.4710 |
| 1.6093 | 24.0 | 288 | 2.4340 |
| 1.619 | 25.0 | 300 | 2.2689 |
| 1.4406 | 26.0 | 312 | 3.6729 |
| 1.5452 | 27.0 | 324 | 3.2225 |
| 1.4575 | 28.0 | 336 | 1.8853 |
| 1.5534 | 29.0 | 348 | 2.2135 |
| 1.4872 | 30.0 | 360 | 2.7540 |
| 1.3923 | 31.0 | 372 | 2.2408 |
| 1.3682 | 32.0 | 384 | 2.5181 |
| 1.2623 | 33.0 | 396 | 2.1360 |
| 1.1888 | 34.0 | 408 | 2.3912 |
| 1.3427 | 35.0 | 420 | 2.4600 |
| 1.1969 | 36.0 | 432 | 2.6388 |
| 1.3367 | 37.0 | 444 | 2.5489 |
| 1.226 | 38.0 | 456 | 1.5805 |
| 1.1808 | 39.0 | 468 | 2.7466 |
| 1.1694 | 40.0 | 480 | 2.4887 |
| 1.2736 | 41.0 | 492 | 2.5735 |
| 1.2292 | 42.0 | 504 | 2.2357 |
| 1.2556 | 43.0 | 516 | 2.9244 |
| 1.0155 | 44.0 | 528 | 1.8348 |
| 1.2425 | 45.0 | 540 | 2.4494 |
| 1.2665 | 46.0 | 552 | 2.4866 |
| 1.3439 | 47.0 | 564 | 2.3430 |
| 1.4468 | 48.0 | 576 | 1.7801 |
| 1.1772 | 49.0 | 588 | 2.5785 |
| 1.0618 | 50.0 | 600 | 2.9959 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
camus-ng/dreambooth_lora_cory_v15_ten | camus-ng | 2023-07-08T19:43:42Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T16:25:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of <ntvc> man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/dreambooth_lora_cory_v15_ten
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of <ntvc> man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
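A minimal inference sketch, assuming a `diffusers` version whose `load_lora_weights` can attach both the UNet and text-encoder LoRA layers from this repository (prompt, step count, and output path are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("camus-ng/dreambooth_lora_cory_v15_ten")

# Generate an image with the instance prompt used during training
image = pipe("a photo of <ntvc> man", num_inference_steps=30).images[0]
image.save("ntvc_man.png")
```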
|
snousias/bert-base-greek-uncased-v1-finetuned-imdb | snousias | 2023-07-08T19:38:31Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T18:56:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-greek-uncased-v1-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v1-finetuned-imdb
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0877 | 1.0 | 45 | 2.9871 |
| 1.2665 | 2.0 | 90 | 2.9228 |
| 1.9122 | 3.0 | 135 | 3.1228 |
| 2.2564 | 4.0 | 180 | 1.6066 |
| 1.9132 | 5.0 | 225 | 2.6351 |
| 1.9952 | 6.0 | 270 | 2.2649 |
| 1.7895 | 7.0 | 315 | 2.3376 |
| 2.0415 | 8.0 | 360 | 1.9894 |
| 1.8113 | 9.0 | 405 | 2.2998 |
| 1.6944 | 10.0 | 450 | 2.1420 |
| 1.7862 | 11.0 | 495 | 2.7167 |
| 1.5657 | 12.0 | 540 | 2.5103 |
| 1.4576 | 13.0 | 585 | 2.0238 |
| 1.3369 | 14.0 | 630 | 2.5880 |
| 1.3598 | 15.0 | 675 | 1.8161 |
| 1.3407 | 16.0 | 720 | 2.4031 |
| 1.3805 | 17.0 | 765 | 2.2539 |
| 1.176 | 18.0 | 810 | 3.2901 |
| 1.1152 | 19.0 | 855 | 2.3024 |
| 1.0629 | 20.0 | 900 | 2.0823 |
| 1.1972 | 21.0 | 945 | 2.9957 |
| 1.1317 | 22.0 | 990 | 2.5360 |
| 1.0396 | 23.0 | 1035 | 1.6268 |
| 0.8686 | 24.0 | 1080 | 3.2657 |
| 1.0526 | 25.0 | 1125 | 3.0398 |
| 0.9023 | 26.0 | 1170 | 2.8197 |
| 0.9539 | 27.0 | 1215 | 3.1922 |
| 0.8699 | 28.0 | 1260 | 1.6943 |
| 0.8669 | 29.0 | 1305 | 2.7801 |
| 0.7893 | 30.0 | 1350 | 2.1385 |
| 0.7462 | 31.0 | 1395 | 2.2881 |
| 0.7627 | 32.0 | 1440 | 3.0789 |
| 0.7536 | 33.0 | 1485 | 2.9320 |
| 0.8317 | 34.0 | 1530 | 3.4081 |
| 0.6749 | 35.0 | 1575 | 2.7531 |
| 0.789 | 36.0 | 1620 | 2.9154 |
| 0.6609 | 37.0 | 1665 | 2.1821 |
| 0.6795 | 38.0 | 1710 | 2.5330 |
| 0.6408 | 39.0 | 1755 | 3.4374 |
| 0.6827 | 40.0 | 1800 | 2.3127 |
| 0.6188 | 41.0 | 1845 | 2.0818 |
| 0.6085 | 42.0 | 1890 | 2.2737 |
| 0.6978 | 43.0 | 1935 | 2.9629 |
| 0.6164 | 44.0 | 1980 | 2.5250 |
| 0.6273 | 45.0 | 2025 | 2.3866 |
| 0.7064 | 46.0 | 2070 | 2.0937 |
| 0.6561 | 47.0 | 2115 | 2.4984 |
| 0.7341 | 48.0 | 2160 | 3.1911 |
| 0.6271 | 49.0 | 2205 | 2.2692 |
| 0.6757 | 50.0 | 2250 | 2.2642 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_enwiki_20180420_nolg_500d | Word2vec | 2023-07-08T19:22:12Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T15:07:46Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_nolg_500d", filename="enwiki_20180420_nolg_500d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
ahmadalsharef994/bert-base-banking77-pt2 | ahmadalsharef994 | 2023-07-08T19:14:05Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-08T18:19:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9298273146197705
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3045
- F1: 0.9298
## Model description
More information needed
## Intended uses & limitations
More information needed
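The model can be used as a drop-in classifier for the 77 banking intents. A minimal sketch with the standard `transformers` text-classification pipeline (the customer query is illustrative; labels may appear as raw ids if no label mapping was saved with the model):

```python
from transformers import pipeline

# Load this checkpoint as a text-classification pipeline over the 77 banking intents
classifier = pipeline("text-classification", model="ahmadalsharef994/bert-base-banking77-pt2")

# Classify an illustrative customer query
print(classifier("I still have not received my new card, when will it arrive?"))
```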
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1191 | 1.0 | 626 | 0.7800 | 0.8702 |
| 0.3899 | 2.0 | 1252 | 0.3662 | 0.9204 |
| 0.1916 | 3.0 | 1878 | 0.3045 | 0.9298 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_enwiki_20180420_100d | Word2vec | 2023-07-08T19:12:30Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:00:06Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_100d", filename="enwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
visual-openllm/visual-openllm-chatglm-6b-rola | visual-openllm | 2023-07-08T19:07:58Z | 0 | 8 | null | [
"dataset:tatsu-lab/alpaca",
"dataset:shibing624/alpaca-zh",
"license:apache-2.0",
"region:us"
] | null | 2023-03-26T07:49:58Z | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
- shibing624/alpaca-zh
---
- Load LLM
```python
from modeling_chatglm import ChatGLMForConditionalGeneration
import torch
torch.set_default_tensor_type(torch.cuda.HalfTensor)
model = ChatGLMForConditionalGeneration.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto')
```
- Load LoRA
```python
from peft import PeftModel
model = PeftModel.from_pretrained(model, "visual-openllm/visual-openllm-chatglm-6b-rola")
torch.set_default_tensor_type(torch.cuda.FloatTensor)
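# --- Hedged generation sketch, not part of the original card ---
# Assumes the standard ChatGLM-6B tokenizer and its chat() helper; the PeftModel
# wrapper forwards the call to the underlying ChatGLM model.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)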
``` |
wizofavalon/bert-large-uncased-finetuned-wikitext2 | wizofavalon | 2023-07-08T19:07:01Z | 70 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-05T19:20:17Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: wizofavalon/bert-large-uncased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wizofavalon/bert-large-uncased-finetuned-wikitext2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7861
- Validation Loss: 1.5868
- Epoch: 0
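As a hedged usage sketch (not part of the original card), the published TensorFlow weights should be loadable through the standard fill-mask pipeline:
```python
from transformers import pipeline

# framework="tf" is requested explicitly because the repo hosts Keras/TensorFlow weights.
unmasker = pipeline("fill-mask",
                    model="wizofavalon/bert-large-uncased-finetuned-wikitext2",
                    framework="tf")
print(unmasker("The capital of France is [MASK]."))
```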
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.7861 | 1.5868 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_enwiki_20180420_nolg_300d | Word2vec | 2023-07-08T19:06:31Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T13:48:27Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_nolg_300d", filename="enwiki_20180420_nolg_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
tyavika/Distil-CNN512LSTM256NoBi | tyavika | 2023-07-08T18:48:27Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-02T11:03:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3388
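As a hedged illustration (the card gives no usage snippet, and the custom CNN/LSTM head may need its original loading code rather than the stock classes), a standard question-answering pipeline call would look like this:
```python
from transformers import pipeline

# Assumption: the checkpoint loads through the stock question-answering pipeline.
qa = pipeline("question-answering", model="tyavika/Distil-CNN512LSTM256NoBi")
print(qa(question="Who wrote the report?",
         context="The annual report was written by the finance team in March."))
```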
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6009 | 1.0 | 3290 | 1.2927 |
| 1.0288 | 2.0 | 6580 | 1.1467 |
| 0.7497 | 3.0 | 9870 | 1.1902 |
| 0.5288 | 4.0 | 13160 | 1.3388 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cagarraz/Reinforce-1234 | cagarraz | 2023-07-08T18:41:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-28T16:38:24Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1234
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 34.70 +/- 15.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
spitfire4794/photo | spitfire4794 | 2023-07-08T18:40:04Z | 287 | 8 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"photorealistic",
"photoreal",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-04T18:28:38Z | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- photorealistic
- photoreal
- diffusers
inference: true
pipeline_tag: text-to-image
library_name: diffusers
---
# the original but with inference api enabled because why not
# Dreamlike Photoreal 2.0 is a photorealistic model based on Stable Diffusion 1.5, made by [dreamlike.art](https://dreamlike.art/).
# If you want to use dreamlike models on your website/app/etc., check the license at the bottom first!
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW.
You can add **photo** to your prompt to make your gens look more photorealistic.
Non-square aspect ratios work better for some prompts. If you want a portrait photo, try using a vertical aspect ratio. If you want a landscape photo, try using a horizontal aspect ratio.
This model was trained on 768x768px images, so use 768x768px, 640x896px, 896x640px, etc. It also works pretty well at higher resolutions such as 768x1024px or 1024x768px.
### Examples
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview1.jpg" style="max-width: 800px;" width="100%"/>
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview2.jpg" style="max-width: 800px;" width="100%"/>
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview3.jpg" style="max-width: 800px;" width="100%"/>
### dreamlike.art
You can use this model for free on [dreamlike.art](https://dreamlike.art/)!
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike.jpg" style="max-width: 1000px;" width="100%"/>
### CKPT
[Download dreamlike-photoreal-2.0.ckpt (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.ckpt)
### Safetensors
[Download dreamlike-photoreal-2.0.safetensors (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "dreamlike-art/dreamlike-photoreal-2.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "photo, a church in the middle of a field of crops, bright cinematic lighting, gopro, fisheye lens"
image = pipe(prompt).images[0]
image.save("./result.jpg")
```
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/church.jpg" style="max-width: 640px;" width="100%"/>
# License
This model is licensed under a **modified** CreativeML OpenRAIL-M license.
- **You are not allowed to host, finetune, or do inference with the model or its derivatives on websites/apps/etc. If you want to, please email us at [email protected]**
- **You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Photoreal 2.0) and include the license as well as a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0)**
- **You are free to use the outputs (images) of the model for commercial purposes in teams of 10 or less**
- You can't use the model to deliberately produce or share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the **modified** CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md |
NERO500/q-FrozenLake-v1-4x4-noSlippery | NERO500 | 2023-07-08T18:39:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:39:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="NERO500/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
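A hedged rollout sketch (not part of the original card): the `"qtable"` key follows the Deep RL course's push convention, and the Gymnasium step API is assumed.
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Greedy action from the loaded Q-table.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```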
|
Word2vec/wikipedia2vec_arwiki_20180420_300d | Word2vec | 2023-07-08T18:34:15Z | 0 | 0 | null | [
"word2vec",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:33:09Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ar
---
## Information
Pretrained Word2vec in Arabic. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_arwiki_20180420_300d", filename="arwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |