modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-06-02 12:28:20
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 462
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-06-02 12:26:48
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
trevorj/dqn-SpaceInvadersNoFrameskip-v4 | trevorj | 2023-07-10T22:41:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T22:41:13Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 523.00 +/- 142.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga trevorj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga trevorj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga trevorj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.9 | jordyvl | 2023-07-10T22:40:13Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T22:00:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5373
- Accuracy: 0.85
- Brier Loss: 0.2432
- Nll: 1.1157
- F1 Micro: 0.85
- F1 Macro: 0.8450
- Ece: 0.1621
- Aurc: 0.0427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.1036 | 0.215 | 0.8753 | 5.3195 | 0.2150 | 0.1264 | 0.2571 | 0.6923 |
| No log | 2.0 | 14 | 1.6952 | 0.405 | 0.7407 | 3.4929 | 0.405 | 0.2416 | 0.2907 | 0.4040 |
| No log | 3.0 | 21 | 1.1843 | 0.62 | 0.5633 | 2.0113 | 0.62 | 0.5725 | 0.2740 | 0.2014 |
| No log | 4.0 | 28 | 0.8797 | 0.71 | 0.4080 | 1.7043 | 0.7100 | 0.6683 | 0.2024 | 0.1125 |
| No log | 5.0 | 35 | 0.8570 | 0.715 | 0.3837 | 1.6476 | 0.715 | 0.7280 | 0.2189 | 0.1079 |
| No log | 6.0 | 42 | 0.7484 | 0.775 | 0.3285 | 1.5962 | 0.775 | 0.7668 | 0.1873 | 0.0816 |
| No log | 7.0 | 49 | 0.7337 | 0.79 | 0.3131 | 1.5377 | 0.79 | 0.7779 | 0.1904 | 0.0771 |
| No log | 8.0 | 56 | 0.6709 | 0.795 | 0.3012 | 1.2156 | 0.795 | 0.7776 | 0.1939 | 0.0761 |
| No log | 9.0 | 63 | 0.6901 | 0.795 | 0.3069 | 1.4725 | 0.795 | 0.7916 | 0.1882 | 0.0769 |
| No log | 10.0 | 70 | 0.7960 | 0.75 | 0.3586 | 1.4426 | 0.75 | 0.7406 | 0.1868 | 0.0976 |
| No log | 11.0 | 77 | 0.7489 | 0.77 | 0.3296 | 1.6202 | 0.7700 | 0.7794 | 0.2020 | 0.0878 |
| No log | 12.0 | 84 | 0.7068 | 0.785 | 0.3270 | 1.4127 | 0.785 | 0.7812 | 0.1922 | 0.0759 |
| No log | 13.0 | 91 | 0.6687 | 0.79 | 0.3050 | 1.3820 | 0.79 | 0.7945 | 0.1818 | 0.0625 |
| No log | 14.0 | 98 | 0.6052 | 0.79 | 0.2854 | 1.0602 | 0.79 | 0.7716 | 0.1702 | 0.0590 |
| No log | 15.0 | 105 | 0.6369 | 0.795 | 0.2959 | 1.0580 | 0.795 | 0.7953 | 0.1709 | 0.0603 |
| No log | 16.0 | 112 | 0.6204 | 0.81 | 0.2816 | 1.1886 | 0.81 | 0.8050 | 0.1657 | 0.0702 |
| No log | 17.0 | 119 | 0.5648 | 0.83 | 0.2475 | 1.2506 | 0.83 | 0.8241 | 0.1347 | 0.0612 |
| No log | 18.0 | 126 | 0.5849 | 0.83 | 0.2672 | 1.2245 | 0.83 | 0.8155 | 0.1646 | 0.0601 |
| No log | 19.0 | 133 | 0.5536 | 0.835 | 0.2475 | 1.0514 | 0.835 | 0.8254 | 0.1683 | 0.0531 |
| No log | 20.0 | 140 | 0.5689 | 0.835 | 0.2513 | 1.2369 | 0.835 | 0.8437 | 0.1722 | 0.0489 |
| No log | 21.0 | 147 | 0.5540 | 0.83 | 0.2485 | 1.2139 | 0.83 | 0.8165 | 0.1641 | 0.0608 |
| No log | 22.0 | 154 | 0.5352 | 0.835 | 0.2402 | 1.0108 | 0.835 | 0.8295 | 0.1408 | 0.0430 |
| No log | 23.0 | 161 | 0.5380 | 0.84 | 0.2403 | 1.2280 | 0.8400 | 0.8347 | 0.1405 | 0.0436 |
| No log | 24.0 | 168 | 0.5422 | 0.835 | 0.2471 | 1.0204 | 0.835 | 0.8324 | 0.1606 | 0.0445 |
| No log | 25.0 | 175 | 0.5342 | 0.85 | 0.2404 | 1.0767 | 0.85 | 0.8487 | 0.1469 | 0.0432 |
| No log | 26.0 | 182 | 0.5374 | 0.84 | 0.2429 | 1.0774 | 0.8400 | 0.8334 | 0.1420 | 0.0462 |
| No log | 27.0 | 189 | 0.5311 | 0.85 | 0.2395 | 1.0748 | 0.85 | 0.8487 | 0.1439 | 0.0446 |
| No log | 28.0 | 196 | 0.5298 | 0.85 | 0.2384 | 1.1337 | 0.85 | 0.8487 | 0.1570 | 0.0437 |
| No log | 29.0 | 203 | 0.5387 | 0.845 | 0.2435 | 1.1319 | 0.845 | 0.8424 | 0.1539 | 0.0458 |
| No log | 30.0 | 210 | 0.5361 | 0.85 | 0.2430 | 1.0648 | 0.85 | 0.8450 | 0.1679 | 0.0431 |
| No log | 31.0 | 217 | 0.5339 | 0.85 | 0.2413 | 1.0676 | 0.85 | 0.8487 | 0.1646 | 0.0428 |
| No log | 32.0 | 224 | 0.5345 | 0.85 | 0.2421 | 1.0709 | 0.85 | 0.8487 | 0.1476 | 0.0440 |
| No log | 33.0 | 231 | 0.5343 | 0.85 | 0.2421 | 1.1236 | 0.85 | 0.8450 | 0.1621 | 0.0431 |
| No log | 34.0 | 238 | 0.5353 | 0.845 | 0.2426 | 1.1244 | 0.845 | 0.8424 | 0.1710 | 0.0428 |
| No log | 35.0 | 245 | 0.5346 | 0.85 | 0.2423 | 1.0649 | 0.85 | 0.8487 | 0.1520 | 0.0440 |
| No log | 36.0 | 252 | 0.5356 | 0.855 | 0.2422 | 1.1241 | 0.855 | 0.8517 | 0.1814 | 0.0429 |
| No log | 37.0 | 259 | 0.5357 | 0.85 | 0.2426 | 1.1237 | 0.85 | 0.8450 | 0.1670 | 0.0425 |
| No log | 38.0 | 266 | 0.5356 | 0.845 | 0.2426 | 1.1226 | 0.845 | 0.8419 | 0.1607 | 0.0435 |
| No log | 39.0 | 273 | 0.5347 | 0.855 | 0.2420 | 1.0739 | 0.855 | 0.8517 | 0.1597 | 0.0427 |
| No log | 40.0 | 280 | 0.5356 | 0.855 | 0.2423 | 1.1203 | 0.855 | 0.8517 | 0.1676 | 0.0435 |
| No log | 41.0 | 287 | 0.5365 | 0.85 | 0.2431 | 1.1199 | 0.85 | 0.8450 | 0.1780 | 0.0429 |
| No log | 42.0 | 294 | 0.5356 | 0.85 | 0.2426 | 1.1173 | 0.85 | 0.8450 | 0.1653 | 0.0430 |
| No log | 43.0 | 301 | 0.5363 | 0.85 | 0.2428 | 1.1189 | 0.85 | 0.8450 | 0.1550 | 0.0435 |
| No log | 44.0 | 308 | 0.5345 | 0.85 | 0.2418 | 1.1193 | 0.85 | 0.8450 | 0.1590 | 0.0428 |
| No log | 45.0 | 315 | 0.5374 | 0.85 | 0.2435 | 1.1202 | 0.85 | 0.8450 | 0.1633 | 0.0435 |
| No log | 46.0 | 322 | 0.5355 | 0.85 | 0.2423 | 1.1183 | 0.85 | 0.8450 | 0.1564 | 0.0428 |
| No log | 47.0 | 329 | 0.5354 | 0.85 | 0.2425 | 1.1176 | 0.85 | 0.8450 | 0.1509 | 0.0429 |
| No log | 48.0 | 336 | 0.5369 | 0.85 | 0.2433 | 1.1177 | 0.85 | 0.8450 | 0.1517 | 0.0432 |
| No log | 49.0 | 343 | 0.5361 | 0.85 | 0.2428 | 1.1182 | 0.85 | 0.8450 | 0.1490 | 0.0428 |
| No log | 50.0 | 350 | 0.5364 | 0.85 | 0.2431 | 1.1179 | 0.85 | 0.8450 | 0.1654 | 0.0430 |
| No log | 51.0 | 357 | 0.5365 | 0.85 | 0.2428 | 1.1185 | 0.85 | 0.8450 | 0.1729 | 0.0432 |
| No log | 52.0 | 364 | 0.5364 | 0.85 | 0.2430 | 1.1165 | 0.85 | 0.8450 | 0.1614 | 0.0429 |
| No log | 53.0 | 371 | 0.5362 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1694 | 0.0430 |
| No log | 54.0 | 378 | 0.5369 | 0.85 | 0.2432 | 1.1170 | 0.85 | 0.8450 | 0.1597 | 0.0432 |
| No log | 55.0 | 385 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1670 | 0.0429 |
| No log | 56.0 | 392 | 0.5367 | 0.85 | 0.2430 | 1.1180 | 0.85 | 0.8450 | 0.1619 | 0.0430 |
| No log | 57.0 | 399 | 0.5364 | 0.85 | 0.2429 | 1.1163 | 0.85 | 0.8450 | 0.1649 | 0.0429 |
| No log | 58.0 | 406 | 0.5364 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1611 | 0.0429 |
| No log | 59.0 | 413 | 0.5365 | 0.85 | 0.2428 | 1.1163 | 0.85 | 0.8450 | 0.1591 | 0.0429 |
| No log | 60.0 | 420 | 0.5364 | 0.85 | 0.2429 | 1.1155 | 0.85 | 0.8450 | 0.1588 | 0.0429 |
| No log | 61.0 | 427 | 0.5370 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1772 | 0.0432 |
| No log | 62.0 | 434 | 0.5367 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1622 | 0.0429 |
| No log | 63.0 | 441 | 0.5362 | 0.85 | 0.2428 | 1.1162 | 0.85 | 0.8450 | 0.1503 | 0.0428 |
| No log | 64.0 | 448 | 0.5372 | 0.85 | 0.2433 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0432 |
| No log | 65.0 | 455 | 0.5371 | 0.85 | 0.2431 | 1.1162 | 0.85 | 0.8450 | 0.1499 | 0.0429 |
| No log | 66.0 | 462 | 0.5367 | 0.85 | 0.2430 | 1.1160 | 0.85 | 0.8450 | 0.1591 | 0.0427 |
| No log | 67.0 | 469 | 0.5367 | 0.85 | 0.2430 | 1.1164 | 0.85 | 0.8450 | 0.1562 | 0.0428 |
| No log | 68.0 | 476 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1556 | 0.0427 |
| No log | 69.0 | 483 | 0.5368 | 0.85 | 0.2431 | 1.1158 | 0.85 | 0.8450 | 0.1593 | 0.0428 |
| No log | 70.0 | 490 | 0.5372 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1628 | 0.0428 |
| No log | 71.0 | 497 | 0.5371 | 0.85 | 0.2432 | 1.1163 | 0.85 | 0.8450 | 0.1599 | 0.0429 |
| 0.1708 | 72.0 | 504 | 0.5370 | 0.85 | 0.2430 | 1.1161 | 0.85 | 0.8450 | 0.1559 | 0.0430 |
| 0.1708 | 73.0 | 511 | 0.5372 | 0.85 | 0.2433 | 1.1154 | 0.85 | 0.8450 | 0.1556 | 0.0428 |
| 0.1708 | 74.0 | 518 | 0.5370 | 0.85 | 0.2429 | 1.1165 | 0.85 | 0.8450 | 0.1540 | 0.0428 |
| 0.1708 | 75.0 | 525 | 0.5371 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0427 |
| 0.1708 | 76.0 | 532 | 0.5369 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1619 | 0.0427 |
| 0.1708 | 77.0 | 539 | 0.5369 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1623 | 0.0429 |
| 0.1708 | 78.0 | 546 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1619 | 0.0427 |
| 0.1708 | 79.0 | 553 | 0.5375 | 0.85 | 0.2433 | 1.1162 | 0.85 | 0.8450 | 0.1688 | 0.0429 |
| 0.1708 | 80.0 | 560 | 0.5372 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1623 | 0.0429 |
| 0.1708 | 81.0 | 567 | 0.5373 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1620 | 0.0428 |
| 0.1708 | 82.0 | 574 | 0.5374 | 0.85 | 0.2433 | 1.1160 | 0.85 | 0.8450 | 0.1622 | 0.0428 |
| 0.1708 | 83.0 | 581 | 0.5372 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1622 | 0.0428 |
| 0.1708 | 84.0 | 588 | 0.5371 | 0.85 | 0.2431 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 85.0 | 595 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1687 | 0.0426 |
| 0.1708 | 86.0 | 602 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1619 | 0.0426 |
| 0.1708 | 87.0 | 609 | 0.5374 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1687 | 0.0428 |
| 0.1708 | 88.0 | 616 | 0.5373 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 89.0 | 623 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 90.0 | 630 | 0.5373 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 91.0 | 637 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 92.0 | 644 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 93.0 | 651 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 94.0 | 658 | 0.5373 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 95.0 | 665 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 96.0 | 672 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 97.0 | 679 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 98.0 | 686 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 99.0 | 693 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 100.0 | 700 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
digiplay/BlankCanvas_v1 | digiplay | 2023-07-10T22:36:13Z | 323 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-10T09:59:13Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/106523/blank-canvas
Original Author's DEMO images :
 |
Raizel123/Mbyonglora | Raizel123 | 2023-07-10T22:31:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T22:27:47Z | ---
license: creativeml-openrail-m
---
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_MSE | jordyvl | 2023-07-10T22:30:03Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T21:13:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4673
- Accuracy: 0.6425
- Brier Loss: 0.4763
- Nll: 3.0680
- F1 Micro: 0.6425
- F1 Macro: 0.6485
- Ece: 0.1946
- Aurc: 0.1381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 4.4851 | 0.06 | 0.9565 | 13.8276 | 0.06 | 0.0556 | 0.1688 | 0.9385 |
| No log | 2.0 | 50 | 3.5619 | 0.3775 | 0.7827 | 6.2649 | 0.3775 | 0.3611 | 0.2331 | 0.3882 |
| No log | 3.0 | 75 | 2.8990 | 0.5025 | 0.6453 | 4.7376 | 0.5025 | 0.4858 | 0.1689 | 0.2658 |
| No log | 4.0 | 100 | 2.5972 | 0.515 | 0.5980 | 4.4210 | 0.515 | 0.4895 | 0.1605 | 0.2249 |
| No log | 5.0 | 125 | 2.4353 | 0.56 | 0.5762 | 3.4885 | 0.56 | 0.5566 | 0.1548 | 0.2100 |
| No log | 6.0 | 150 | 2.4157 | 0.5475 | 0.5864 | 3.8261 | 0.5475 | 0.5323 | 0.1837 | 0.2167 |
| No log | 7.0 | 175 | 2.1786 | 0.6075 | 0.5203 | 3.4565 | 0.6075 | 0.6103 | 0.1403 | 0.1670 |
| No log | 8.0 | 200 | 2.1082 | 0.63 | 0.5040 | 3.3570 | 0.63 | 0.6246 | 0.1580 | 0.1530 |
| No log | 9.0 | 225 | 2.0472 | 0.625 | 0.5042 | 3.8572 | 0.625 | 0.6184 | 0.1552 | 0.1530 |
| No log | 10.0 | 250 | 2.0589 | 0.6025 | 0.5468 | 3.5723 | 0.6025 | 0.5982 | 0.1781 | 0.1785 |
| No log | 11.0 | 275 | 1.8965 | 0.65 | 0.4755 | 3.4466 | 0.65 | 0.6497 | 0.1605 | 0.1475 |
| No log | 12.0 | 300 | 1.9014 | 0.6325 | 0.5066 | 3.0881 | 0.6325 | 0.6359 | 0.1658 | 0.1591 |
| No log | 13.0 | 325 | 1.7904 | 0.6175 | 0.5162 | 3.4673 | 0.6175 | 0.6141 | 0.1525 | 0.1598 |
| No log | 14.0 | 350 | 1.8624 | 0.625 | 0.5173 | 3.6824 | 0.625 | 0.6179 | 0.1567 | 0.1624 |
| No log | 15.0 | 375 | 1.7083 | 0.6625 | 0.4817 | 3.1296 | 0.6625 | 0.6686 | 0.1651 | 0.1405 |
| No log | 16.0 | 400 | 1.8848 | 0.59 | 0.5478 | 4.3761 | 0.59 | 0.5913 | 0.2083 | 0.1696 |
| No log | 17.0 | 425 | 1.7238 | 0.6125 | 0.5229 | 3.1232 | 0.6125 | 0.6052 | 0.1833 | 0.1553 |
| No log | 18.0 | 450 | 1.7126 | 0.625 | 0.5152 | 2.9267 | 0.625 | 0.6284 | 0.1747 | 0.1565 |
| No log | 19.0 | 475 | 1.6459 | 0.6275 | 0.5024 | 2.9078 | 0.6275 | 0.6219 | 0.1766 | 0.1527 |
| 1.0542 | 20.0 | 500 | 1.6029 | 0.6275 | 0.4855 | 3.0931 | 0.6275 | 0.6316 | 0.1720 | 0.1414 |
| 1.0542 | 21.0 | 525 | 1.6566 | 0.6525 | 0.4847 | 3.0998 | 0.6525 | 0.6479 | 0.1558 | 0.1438 |
| 1.0542 | 22.0 | 550 | 1.6169 | 0.645 | 0.4894 | 3.0081 | 0.645 | 0.6471 | 0.1687 | 0.1400 |
| 1.0542 | 23.0 | 575 | 1.5322 | 0.6525 | 0.4557 | 3.3587 | 0.6525 | 0.6520 | 0.1428 | 0.1247 |
| 1.0542 | 24.0 | 600 | 1.5991 | 0.6475 | 0.4787 | 2.9349 | 0.6475 | 0.6444 | 0.1580 | 0.1450 |
| 1.0542 | 25.0 | 625 | 1.5625 | 0.6375 | 0.4926 | 3.0245 | 0.6375 | 0.6378 | 0.1641 | 0.1433 |
| 1.0542 | 26.0 | 650 | 1.5366 | 0.64 | 0.4884 | 3.3388 | 0.64 | 0.6461 | 0.1595 | 0.1453 |
| 1.0542 | 27.0 | 675 | 1.5686 | 0.65 | 0.4765 | 3.5120 | 0.65 | 0.6504 | 0.1625 | 0.1359 |
| 1.0542 | 28.0 | 700 | 1.5562 | 0.6475 | 0.4817 | 3.0348 | 0.6475 | 0.6488 | 0.1459 | 0.1388 |
| 1.0542 | 29.0 | 725 | 1.5213 | 0.6475 | 0.4719 | 3.2628 | 0.6475 | 0.6475 | 0.1634 | 0.1326 |
| 1.0542 | 30.0 | 750 | 1.5492 | 0.6675 | 0.4730 | 3.1693 | 0.6675 | 0.6679 | 0.1469 | 0.1415 |
| 1.0542 | 31.0 | 775 | 1.5311 | 0.65 | 0.4896 | 3.0881 | 0.65 | 0.6504 | 0.1815 | 0.1380 |
| 1.0542 | 32.0 | 800 | 1.5556 | 0.6475 | 0.4821 | 3.1829 | 0.6475 | 0.6491 | 0.1640 | 0.1405 |
| 1.0542 | 33.0 | 825 | 1.5471 | 0.6375 | 0.4846 | 3.4190 | 0.6375 | 0.6407 | 0.1628 | 0.1415 |
| 1.0542 | 34.0 | 850 | 1.4809 | 0.6575 | 0.4714 | 2.9136 | 0.6575 | 0.6612 | 0.1729 | 0.1338 |
| 1.0542 | 35.0 | 875 | 1.5256 | 0.66 | 0.4773 | 3.2303 | 0.66 | 0.6650 | 0.1746 | 0.1368 |
| 1.0542 | 36.0 | 900 | 1.4929 | 0.6675 | 0.4671 | 3.2360 | 0.6675 | 0.6698 | 0.1698 | 0.1309 |
| 1.0542 | 37.0 | 925 | 1.4923 | 0.645 | 0.4880 | 3.0567 | 0.645 | 0.6564 | 0.1764 | 0.1395 |
| 1.0542 | 38.0 | 950 | 1.5038 | 0.665 | 0.4672 | 3.2116 | 0.665 | 0.6661 | 0.1588 | 0.1343 |
| 1.0542 | 39.0 | 975 | 1.4708 | 0.6625 | 0.4669 | 3.1420 | 0.6625 | 0.6675 | 0.1683 | 0.1301 |
| 0.0522 | 40.0 | 1000 | 1.5153 | 0.6475 | 0.4865 | 3.1796 | 0.6475 | 0.6447 | 0.1639 | 0.1400 |
| 0.0522 | 41.0 | 1025 | 1.4705 | 0.6575 | 0.4642 | 3.2196 | 0.6575 | 0.6626 | 0.1440 | 0.1308 |
| 0.0522 | 42.0 | 1050 | 1.4844 | 0.6575 | 0.4722 | 3.2445 | 0.6575 | 0.6595 | 0.1746 | 0.1328 |
| 0.0522 | 43.0 | 1075 | 1.4957 | 0.6425 | 0.4828 | 3.1456 | 0.6425 | 0.6468 | 0.1499 | 0.1417 |
| 0.0522 | 44.0 | 1100 | 1.5179 | 0.645 | 0.4910 | 3.3921 | 0.645 | 0.6470 | 0.1861 | 0.1433 |
| 0.0522 | 45.0 | 1125 | 1.4878 | 0.6425 | 0.4839 | 3.2139 | 0.6425 | 0.6478 | 0.1720 | 0.1403 |
| 0.0522 | 46.0 | 1150 | 1.4666 | 0.655 | 0.4741 | 2.9333 | 0.655 | 0.6601 | 0.1813 | 0.1347 |
| 0.0522 | 47.0 | 1175 | 1.4954 | 0.6575 | 0.4776 | 3.2102 | 0.6575 | 0.6604 | 0.1842 | 0.1390 |
| 0.0522 | 48.0 | 1200 | 1.4976 | 0.645 | 0.4856 | 3.1539 | 0.645 | 0.6493 | 0.1549 | 0.1407 |
| 0.0522 | 49.0 | 1225 | 1.4772 | 0.64 | 0.4780 | 2.9845 | 0.64 | 0.6445 | 0.1826 | 0.1388 |
| 0.0522 | 50.0 | 1250 | 1.4584 | 0.65 | 0.4703 | 3.0776 | 0.65 | 0.6533 | 0.1685 | 0.1352 |
| 0.0522 | 51.0 | 1275 | 1.4828 | 0.6325 | 0.4844 | 3.1425 | 0.6325 | 0.6377 | 0.1641 | 0.1409 |
| 0.0522 | 52.0 | 1300 | 1.4676 | 0.6525 | 0.4737 | 3.1483 | 0.6525 | 0.6565 | 0.1773 | 0.1358 |
| 0.0522 | 53.0 | 1325 | 1.4675 | 0.6475 | 0.4791 | 3.1411 | 0.6475 | 0.6515 | 0.1820 | 0.1388 |
| 0.0522 | 54.0 | 1350 | 1.4724 | 0.645 | 0.4764 | 3.0744 | 0.645 | 0.6499 | 0.1847 | 0.1382 |
| 0.0522 | 55.0 | 1375 | 1.4689 | 0.6425 | 0.4769 | 3.2256 | 0.6425 | 0.6476 | 0.1839 | 0.1376 |
| 0.0522 | 56.0 | 1400 | 1.4660 | 0.6425 | 0.4760 | 2.9907 | 0.6425 | 0.6479 | 0.1906 | 0.1378 |
| 0.0522 | 57.0 | 1425 | 1.4663 | 0.645 | 0.4757 | 3.0722 | 0.645 | 0.6514 | 0.1705 | 0.1367 |
| 0.0522 | 58.0 | 1450 | 1.4678 | 0.65 | 0.4770 | 3.0710 | 0.65 | 0.6546 | 0.1794 | 0.1371 |
| 0.0522 | 59.0 | 1475 | 1.4717 | 0.64 | 0.4786 | 3.0737 | 0.64 | 0.6455 | 0.1889 | 0.1392 |
| 0.0064 | 60.0 | 1500 | 1.4691 | 0.645 | 0.4768 | 3.0688 | 0.645 | 0.6499 | 0.1815 | 0.1378 |
| 0.0064 | 61.0 | 1525 | 1.4689 | 0.64 | 0.4767 | 3.0688 | 0.64 | 0.6452 | 0.1846 | 0.1382 |
| 0.0064 | 62.0 | 1550 | 1.4689 | 0.64 | 0.4770 | 3.0674 | 0.64 | 0.6455 | 0.1937 | 0.1383 |
| 0.0064 | 63.0 | 1575 | 1.4687 | 0.6425 | 0.4767 | 3.0700 | 0.6425 | 0.6485 | 0.1897 | 0.1381 |
| 0.0064 | 64.0 | 1600 | 1.4674 | 0.6425 | 0.4764 | 3.0675 | 0.6425 | 0.6472 | 0.1855 | 0.1375 |
| 0.0064 | 65.0 | 1625 | 1.4681 | 0.6425 | 0.4766 | 3.0694 | 0.6425 | 0.6485 | 0.1917 | 0.1381 |
| 0.0064 | 66.0 | 1650 | 1.4681 | 0.6425 | 0.4766 | 3.0687 | 0.6425 | 0.6472 | 0.1905 | 0.1378 |
| 0.0064 | 67.0 | 1675 | 1.4667 | 0.645 | 0.4757 | 3.0681 | 0.645 | 0.6505 | 0.1899 | 0.1375 |
| 0.0064 | 68.0 | 1700 | 1.4683 | 0.6425 | 0.4771 | 3.0686 | 0.6425 | 0.6474 | 0.1871 | 0.1379 |
| 0.0064 | 69.0 | 1725 | 1.4672 | 0.64 | 0.4760 | 3.0679 | 0.64 | 0.6455 | 0.1932 | 0.1380 |
| 0.0064 | 70.0 | 1750 | 1.4673 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1955 | 0.1376 |
| 0.0064 | 71.0 | 1775 | 1.4676 | 0.645 | 0.4763 | 3.0680 | 0.645 | 0.6505 | 0.1921 | 0.1376 |
| 0.0064 | 72.0 | 1800 | 1.4674 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1946 | 0.1376 |
| 0.0064 | 73.0 | 1825 | 1.4675 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6474 | 0.1946 | 0.1377 |
| 0.0064 | 74.0 | 1850 | 1.4674 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6485 | 0.1945 | 0.1380 |
| 0.0064 | 75.0 | 1875 | 1.4674 | 0.64 | 0.4763 | 3.0680 | 0.64 | 0.6455 | 0.1960 | 0.1380 |
| 0.0064 | 76.0 | 1900 | 1.4675 | 0.64 | 0.4764 | 3.0682 | 0.64 | 0.6455 | 0.1972 | 0.1381 |
| 0.0064 | 77.0 | 1925 | 1.4675 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1947 | 0.1380 |
| 0.0064 | 78.0 | 1950 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0064 | 79.0 | 1975 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6474 | 0.1935 | 0.1376 |
| 0.0 | 80.0 | 2000 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1380 |
| 0.0 | 81.0 | 2025 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1946 | 0.1380 |
| 0.0 | 82.0 | 2050 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1935 | 0.1380 |
| 0.0 | 83.0 | 2075 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 84.0 | 2100 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0 | 85.0 | 2125 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 86.0 | 2150 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 87.0 | 2175 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0 | 88.0 | 2200 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 89.0 | 2225 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 90.0 | 2250 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 91.0 | 2275 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 92.0 | 2300 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 93.0 | 2325 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 94.0 | 2350 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1909 | 0.1381 |
| 0.0 | 95.0 | 2375 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 96.0 | 2400 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 97.0 | 2425 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 98.0 | 2450 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 99.0 | 2475 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 100.0 | 2500 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
jorgeortizfuentes/spanish-bert-base-spanish-wwm-cased | jorgeortizfuentes | 2023-07-10T22:27:15Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:jorgeortizfuentes/universal_spanish_chilean_corpus",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-10T19:32:52Z | ---
tags:
- generated_from_trainer
datasets:
- jorgeortizfuentes/universal_spanish_chilean_corpus
model-index:
- name: spanish-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the jorgeortizfuentes/universal_spanish_chilean_corpus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Raizel123/Novalora | Raizel123 | 2023-07-10T22:26:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T22:23:44Z | ---
license: creativeml-openrail-m
---
|
Renan1986-12/distilbert-base-uncased-finetuned-cola | Renan1986-12 | 2023-07-10T22:24:22Z | 62 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-10T22:13:06Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Renan1986-12/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Renan1986-12/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1680
- Validation Loss: 0.5433
- Train Matthews Correlation: 0.5570
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.4525 | 0.4571 | 0.4871 | 0 |
| 0.2752 | 0.4495 | 0.5614 | 1 |
| 0.1680 | 0.5433 | 0.5570 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-dp-mod-datasets-txt-processing-rarity-all | NasimB | 2023-07-10T22:14:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T19:52:59Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-datasets-txt-processing-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-datasets-txt-processing-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7606 | 0.29 | 500 | 5.6933 |
| 5.4375 | 0.59 | 1000 | 5.2559 |
| 5.0937 | 0.88 | 1500 | 5.0171 |
| 4.8204 | 1.18 | 2000 | 4.8701 |
| 4.6728 | 1.47 | 2500 | 4.7593 |
| 4.574 | 1.77 | 3000 | 4.6587 |
| 4.4456 | 2.06 | 3500 | 4.5885 |
| 4.258 | 2.36 | 4000 | 4.5468 |
| 4.2423 | 2.65 | 4500 | 4.4860 |
| 4.2036 | 2.94 | 5000 | 4.4302 |
| 3.9737 | 3.24 | 5500 | 4.4364 |
| 3.9439 | 3.53 | 6000 | 4.4019 |
| 3.9271 | 3.83 | 6500 | 4.3632 |
| 3.7901 | 4.12 | 7000 | 4.3689 |
| 3.6474 | 4.42 | 7500 | 4.3662 |
| 3.6414 | 4.71 | 8000 | 4.3472 |
| 3.6338 | 5.01 | 8500 | 4.3344 |
| 3.3764 | 5.3 | 9000 | 4.3618 |
| 3.3821 | 5.59 | 9500 | 4.3568 |
| 3.3777 | 5.89 | 10000 | 4.3513 |
| 3.2752 | 6.18 | 10500 | 4.3602 |
| 3.2228 | 6.48 | 11000 | 4.3652 |
| 3.2172 | 6.77 | 11500 | 4.3656 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
carova/ppo-LunarLander-v2 | carova | 2023-07-10T22:11:10Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T17:56:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.36 +/- 68.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
heea/image-to-music | heea | 2023-07-10T22:09:33Z | 0 | 2 | null | [
"region:us"
] | null | 2023-07-10T22:05:47Z | import matplotlib.pyplot as plt
import numpy as np
from scipy.io.wavfile import write
# Load the image
image_path = 'hugging_face.jpg'
image = plt.imread(image_path)
# Convert the image to grayscale
grayscale_image = np.mean(image, axis=2)
# Define the musical parameters
sample_rate = 44100 # Audio sample rate (Hz)
duration = 0.1 # Duration of each note (seconds)
# Define the mapping from pixel values to musical notes
min_note = 40 # MIDI note number for the lowest pixel value
max_note = 80 # MIDI note number for the highest pixel value
# Rescale the pixel values to the range [min_note, max_note]
scaled_image = (grayscale_image - np.min(grayscale_image))
scaled_image *= (max_note - min_note) / np.max(scaled_image)
scaled_image += min_note
# Generate the audio signal
total_duration = int(duration * sample_rate * grayscale_image.shape[1])
t = np.linspace(0, total_duration / sample_rate, total_duration, endpoint=False)
audio_signal = np.zeros(total_duration)
for i, column in enumerate(scaled_image.T):
start = int(i * duration * sample_rate)
end = int((i + 1) * duration * sample_rate)
audio_signal[start:end] = np.sin(2 * np.pi * column * t[start:end])
# Normalize the audio signal
audio_signal /= np.max(np.abs(audio_signal))
audio_signal *= 32767 # Scale the signal to the range of a 16-bit integer
# Convert the audio signal to 16-bit integer format
audio_signal = audio_signal.astype(np.int16)
# Save the audio signal to a WAV file
output_file = 'hugging_face.wav'
write(output_file, sample_rate, audio_signal)
print(f"Audio file '{output_file}' generated successfully!")
|
TheBloke/airochronos-33B-GGML | TheBloke | 2023-07-10T22:07:18Z | 0 | 18 | null | [
"license:other",
"region:us"
] | null | 2023-07-10T21:14:18Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Henk717's Airochronos 33B GGML
These files are GGML format model files for [Henk717's Airochronos 33B](https://huggingface.co/Henk717/airochronos-33B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/airochronos-33B)
## Prompt template: Alpaca
```Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airochronos-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airochronos-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airochronos-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airochronos-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airochronos-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airochronos-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airochronos-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| airochronos-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airochronos-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airochronos-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airochronos-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airochronos-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airochronos-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| airochronos-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airochronos-33b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Henk717's Airochronos 33B
After the initial experiment with chronoboros-33B it was evident that the merge was to unpredictable to be useful, testing the individual models it became clear that the bias should be weighted towards Chronos.
This is the new release of the merge with 75% chronos 33B, and 25% airoboros-1.4 33B.
Model has been tested with the Alpaca prompting format combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing.
It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.7 | jordyvl | 2023-07-10T21:59:33Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T21:19:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4918
- Accuracy: 0.85
- Brier Loss: 0.2583
- Nll: 1.0894
- F1 Micro: 0.85
- F1 Macro: 0.8374
- Ece: 0.1917
- Aurc: 0.0470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.8329 | 0.225 | 0.8761 | 5.2731 | 0.225 | 0.1384 | 0.2607 | 0.6977 |
| No log | 2.0 | 14 | 1.4785 | 0.405 | 0.7460 | 3.4067 | 0.405 | 0.2289 | 0.3097 | 0.4085 |
| No log | 3.0 | 21 | 1.0406 | 0.6 | 0.5725 | 1.8722 | 0.6 | 0.5345 | 0.3050 | 0.2010 |
| No log | 4.0 | 28 | 0.8087 | 0.725 | 0.4192 | 1.6096 | 0.7250 | 0.6767 | 0.2345 | 0.1149 |
| No log | 5.0 | 35 | 0.7666 | 0.735 | 0.3731 | 1.6189 | 0.735 | 0.7350 | 0.2377 | 0.1011 |
| No log | 6.0 | 42 | 0.6960 | 0.78 | 0.3413 | 1.5230 | 0.78 | 0.7592 | 0.2295 | 0.0868 |
| No log | 7.0 | 49 | 0.6490 | 0.805 | 0.3110 | 1.4861 | 0.805 | 0.7864 | 0.2138 | 0.0785 |
| No log | 8.0 | 56 | 0.6238 | 0.795 | 0.3069 | 1.2098 | 0.795 | 0.7816 | 0.2065 | 0.0698 |
| No log | 9.0 | 63 | 0.5755 | 0.83 | 0.2866 | 1.1943 | 0.83 | 0.8117 | 0.1937 | 0.0694 |
| No log | 10.0 | 70 | 0.6360 | 0.77 | 0.3164 | 1.2608 | 0.7700 | 0.7550 | 0.1785 | 0.0677 |
| No log | 11.0 | 77 | 0.6548 | 0.785 | 0.3335 | 1.4895 | 0.785 | 0.7707 | 0.2281 | 0.0885 |
| No log | 12.0 | 84 | 0.5847 | 0.805 | 0.3002 | 1.4317 | 0.805 | 0.7807 | 0.2264 | 0.0756 |
| No log | 13.0 | 91 | 0.5956 | 0.81 | 0.3040 | 1.2590 | 0.81 | 0.7928 | 0.2241 | 0.0556 |
| No log | 14.0 | 98 | 0.5692 | 0.81 | 0.3025 | 1.2119 | 0.81 | 0.8043 | 0.2235 | 0.0665 |
| No log | 15.0 | 105 | 0.5223 | 0.83 | 0.2762 | 1.1162 | 0.83 | 0.8221 | 0.1798 | 0.0552 |
| No log | 16.0 | 112 | 0.4981 | 0.84 | 0.2523 | 1.0864 | 0.8400 | 0.8372 | 0.1868 | 0.0396 |
| No log | 17.0 | 119 | 0.5207 | 0.805 | 0.2741 | 1.0416 | 0.805 | 0.7897 | 0.1960 | 0.0551 |
| No log | 18.0 | 126 | 0.5165 | 0.84 | 0.2723 | 1.1596 | 0.8400 | 0.8325 | 0.1942 | 0.0506 |
| No log | 19.0 | 133 | 0.4979 | 0.845 | 0.2573 | 1.2329 | 0.845 | 0.8297 | 0.1825 | 0.0444 |
| No log | 20.0 | 140 | 0.4953 | 0.855 | 0.2565 | 1.1213 | 0.855 | 0.8442 | 0.1844 | 0.0474 |
| No log | 21.0 | 147 | 0.5296 | 0.82 | 0.2792 | 1.0000 | 0.82 | 0.8218 | 0.1768 | 0.0523 |
| No log | 22.0 | 154 | 0.5027 | 0.835 | 0.2625 | 0.9926 | 0.835 | 0.8238 | 0.2035 | 0.0481 |
| No log | 23.0 | 161 | 0.5027 | 0.84 | 0.2642 | 1.0500 | 0.8400 | 0.8299 | 0.1616 | 0.0482 |
| No log | 24.0 | 168 | 0.5017 | 0.84 | 0.2616 | 1.0560 | 0.8400 | 0.8314 | 0.1819 | 0.0497 |
| No log | 25.0 | 175 | 0.4942 | 0.85 | 0.2594 | 1.1003 | 0.85 | 0.8407 | 0.1793 | 0.0483 |
| No log | 26.0 | 182 | 0.4943 | 0.83 | 0.2586 | 1.0436 | 0.83 | 0.8140 | 0.1869 | 0.0518 |
| No log | 27.0 | 189 | 0.4950 | 0.835 | 0.2613 | 1.0817 | 0.835 | 0.8224 | 0.2039 | 0.0504 |
| No log | 28.0 | 196 | 0.4957 | 0.85 | 0.2599 | 1.1109 | 0.85 | 0.8309 | 0.2058 | 0.0485 |
| No log | 29.0 | 203 | 0.4956 | 0.845 | 0.2599 | 1.0914 | 0.845 | 0.8304 | 0.1916 | 0.0492 |
| No log | 30.0 | 210 | 0.4893 | 0.84 | 0.2561 | 1.0890 | 0.8400 | 0.8214 | 0.2071 | 0.0482 |
| No log | 31.0 | 217 | 0.4920 | 0.835 | 0.2587 | 1.0907 | 0.835 | 0.8270 | 0.2031 | 0.0482 |
| No log | 32.0 | 224 | 0.4927 | 0.83 | 0.2601 | 1.0879 | 0.83 | 0.8157 | 0.2093 | 0.0500 |
| No log | 33.0 | 231 | 0.4925 | 0.835 | 0.2593 | 1.0886 | 0.835 | 0.8270 | 0.1810 | 0.0484 |
| No log | 34.0 | 238 | 0.4909 | 0.845 | 0.2578 | 1.0871 | 0.845 | 0.8304 | 0.1916 | 0.0478 |
| No log | 35.0 | 245 | 0.4927 | 0.845 | 0.2591 | 1.0866 | 0.845 | 0.8378 | 0.1943 | 0.0473 |
| No log | 36.0 | 252 | 0.4919 | 0.85 | 0.2581 | 1.0891 | 0.85 | 0.8342 | 0.2193 | 0.0475 |
| No log | 37.0 | 259 | 0.4908 | 0.845 | 0.2579 | 1.0867 | 0.845 | 0.8346 | 0.2215 | 0.0474 |
| No log | 38.0 | 266 | 0.4929 | 0.85 | 0.2590 | 1.0873 | 0.85 | 0.8407 | 0.1884 | 0.0471 |
| No log | 39.0 | 273 | 0.4913 | 0.85 | 0.2584 | 1.0861 | 0.85 | 0.8374 | 0.1944 | 0.0474 |
| No log | 40.0 | 280 | 0.4933 | 0.835 | 0.2595 | 1.0871 | 0.835 | 0.8248 | 0.1893 | 0.0491 |
| No log | 41.0 | 287 | 0.4936 | 0.84 | 0.2599 | 1.0863 | 0.8400 | 0.8276 | 0.1860 | 0.0486 |
| No log | 42.0 | 294 | 0.4911 | 0.85 | 0.2580 | 1.0861 | 0.85 | 0.8374 | 0.2186 | 0.0474 |
| No log | 43.0 | 301 | 0.4915 | 0.85 | 0.2581 | 1.0860 | 0.85 | 0.8374 | 0.2023 | 0.0475 |
| No log | 44.0 | 308 | 0.4921 | 0.85 | 0.2586 | 1.0874 | 0.85 | 0.8374 | 0.2013 | 0.0477 |
| No log | 45.0 | 315 | 0.4915 | 0.85 | 0.2583 | 1.0862 | 0.85 | 0.8374 | 0.1941 | 0.0475 |
| No log | 46.0 | 322 | 0.4918 | 0.85 | 0.2584 | 1.0878 | 0.85 | 0.8374 | 0.1852 | 0.0473 |
| No log | 47.0 | 329 | 0.4916 | 0.85 | 0.2583 | 1.0873 | 0.85 | 0.8374 | 0.2089 | 0.0473 |
| No log | 48.0 | 336 | 0.4921 | 0.85 | 0.2586 | 1.0879 | 0.85 | 0.8374 | 0.2026 | 0.0477 |
| No log | 49.0 | 343 | 0.4918 | 0.845 | 0.2584 | 1.0884 | 0.845 | 0.8282 | 0.1963 | 0.0478 |
| No log | 50.0 | 350 | 0.4922 | 0.85 | 0.2587 | 1.0871 | 0.85 | 0.8374 | 0.2102 | 0.0474 |
| No log | 51.0 | 357 | 0.4920 | 0.85 | 0.2585 | 1.0879 | 0.85 | 0.8374 | 0.2095 | 0.0474 |
| No log | 52.0 | 364 | 0.4926 | 0.85 | 0.2589 | 1.0878 | 0.85 | 0.8374 | 0.2022 | 0.0477 |
| No log | 53.0 | 371 | 0.4920 | 0.85 | 0.2586 | 1.0888 | 0.85 | 0.8374 | 0.2027 | 0.0475 |
| No log | 54.0 | 378 | 0.4921 | 0.85 | 0.2586 | 1.0886 | 0.85 | 0.8374 | 0.2020 | 0.0474 |
| No log | 55.0 | 385 | 0.4921 | 0.85 | 0.2587 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0471 |
| No log | 56.0 | 392 | 0.4925 | 0.85 | 0.2589 | 1.0881 | 0.85 | 0.8374 | 0.1946 | 0.0473 |
| No log | 57.0 | 399 | 0.4917 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1932 | 0.0472 |
| No log | 58.0 | 406 | 0.4921 | 0.85 | 0.2586 | 1.0877 | 0.85 | 0.8374 | 0.1948 | 0.0476 |
| No log | 59.0 | 413 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1931 | 0.0472 |
| No log | 60.0 | 420 | 0.4918 | 0.85 | 0.2583 | 1.0882 | 0.85 | 0.8374 | 0.1945 | 0.0475 |
| No log | 61.0 | 427 | 0.4916 | 0.85 | 0.2582 | 1.0883 | 0.85 | 0.8374 | 0.1936 | 0.0472 |
| No log | 62.0 | 434 | 0.4920 | 0.85 | 0.2586 | 1.0882 | 0.85 | 0.8374 | 0.1942 | 0.0473 |
| No log | 63.0 | 441 | 0.4922 | 0.85 | 0.2587 | 1.0889 | 0.85 | 0.8374 | 0.1935 | 0.0473 |
| No log | 64.0 | 448 | 0.4921 | 0.85 | 0.2586 | 1.0885 | 0.85 | 0.8374 | 0.1848 | 0.0473 |
| No log | 65.0 | 455 | 0.4916 | 0.85 | 0.2582 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0474 |
| No log | 66.0 | 462 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1849 | 0.0472 |
| No log | 67.0 | 469 | 0.4917 | 0.85 | 0.2584 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| No log | 68.0 | 476 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.2011 | 0.0471 |
| No log | 69.0 | 483 | 0.4918 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2007 | 0.0471 |
| No log | 70.0 | 490 | 0.4919 | 0.85 | 0.2584 | 1.0886 | 0.85 | 0.8374 | 0.1848 | 0.0474 |
| No log | 71.0 | 497 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.1940 | 0.0474 |
| 0.1824 | 72.0 | 504 | 0.4919 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2011 | 0.0471 |
| 0.1824 | 73.0 | 511 | 0.4917 | 0.85 | 0.2583 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| 0.1824 | 74.0 | 518 | 0.4920 | 0.85 | 0.2585 | 1.0890 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| 0.1824 | 75.0 | 525 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1846 | 0.0472 |
| 0.1824 | 76.0 | 532 | 0.4918 | 0.85 | 0.2583 | 1.0889 | 0.85 | 0.8374 | 0.1930 | 0.0472 |
| 0.1824 | 77.0 | 539 | 0.4917 | 0.85 | 0.2582 | 1.0891 | 0.85 | 0.8374 | 0.2005 | 0.0472 |
| 0.1824 | 78.0 | 546 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1928 | 0.0472 |
| 0.1824 | 79.0 | 553 | 0.4920 | 0.85 | 0.2585 | 1.0893 | 0.85 | 0.8374 | 0.1845 | 0.0473 |
| 0.1824 | 80.0 | 560 | 0.4919 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0473 |
| 0.1824 | 81.0 | 567 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1925 | 0.0471 |
| 0.1824 | 82.0 | 574 | 0.4920 | 0.85 | 0.2585 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 83.0 | 581 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 84.0 | 588 | 0.4918 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1926 | 0.0471 |
| 0.1824 | 85.0 | 595 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 86.0 | 602 | 0.4918 | 0.85 | 0.2584 | 1.0893 | 0.85 | 0.8374 | 0.1927 | 0.0472 |
| 0.1824 | 87.0 | 609 | 0.4918 | 0.85 | 0.2584 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 88.0 | 616 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 89.0 | 623 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0471 |
| 0.1824 | 90.0 | 630 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1998 | 0.0471 |
| 0.1824 | 91.0 | 637 | 0.4919 | 0.85 | 0.2584 | 1.0894 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 92.0 | 644 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 93.0 | 651 | 0.4918 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1917 | 0.0471 |
| 0.1824 | 94.0 | 658 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 95.0 | 665 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 96.0 | 672 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 97.0 | 679 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 98.0 | 686 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 99.0 | 693 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 100.0 | 700 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
umanlp/babelbert-ft-xlm-r | umanlp | 2023-07-10T21:57:04Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-07T21:22:09Z | This model is one of the artifacts of the paper [Massively Multilingual Lexical Specialization of Multilingual Transformers](https://aclanthology.org/2023.acl-long.426/).
It was obtained by fine-tuning the representations of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the dataset [babelbert-dataset](https://huggingface.co/datasets/umanlp/babelbert-dataset).
|
NasimB/gpt2-concat-mod-datasets-txt-processing-rarity-all | NasimB | 2023-07-10T21:52:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T19:34:36Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-mod-datasets-txt-processing-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-mod-datasets-txt-processing-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6762 | 0.3 | 500 | 5.6569 |
| 5.3312 | 0.6 | 1000 | 5.2263 |
| 4.9972 | 0.91 | 1500 | 4.9819 |
| 4.7192 | 1.21 | 2000 | 4.8383 |
| 4.5828 | 1.51 | 2500 | 4.7225 |
| 4.481 | 1.81 | 3000 | 4.6261 |
| 4.327 | 2.12 | 3500 | 4.5756 |
| 4.1937 | 2.42 | 4000 | 4.5233 |
| 4.1572 | 2.72 | 4500 | 4.4631 |
| 4.108 | 3.02 | 5000 | 4.4211 |
| 3.8775 | 3.33 | 5500 | 4.4180 |
| 3.8911 | 3.63 | 6000 | 4.3805 |
| 3.8636 | 3.93 | 6500 | 4.3392 |
| 3.6522 | 4.23 | 7000 | 4.3680 |
| 3.6068 | 4.54 | 7500 | 4.3577 |
| 3.6059 | 4.84 | 8000 | 4.3344 |
| 3.4722 | 5.14 | 8500 | 4.3582 |
| 3.3577 | 5.44 | 9000 | 4.3591 |
| 3.3543 | 5.75 | 9500 | 4.3534 |
| 3.3219 | 6.05 | 10000 | 4.3616 |
| 3.1978 | 6.35 | 10500 | 4.3687 |
| 3.2005 | 6.65 | 11000 | 4.3698 |
| 3.1998 | 6.96 | 11500 | 4.3697 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
luisagaino/distilbert-base-uncased-finetuned-cola | luisagaino | 2023-07-10T21:45:18Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-10T15:12:48Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: luisagaino/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# luisagaino/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1957
- Validation Loss: 0.5520
- Train Matthews Correlation: 0.5070
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5226 | 0.4778 | 0.4379 | 0 |
| 0.3316 | 0.4880 | 0.4994 | 1 |
| 0.1957 | 0.5520 | 0.5070 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bhenrym14/airoboros-7b-gpt4-1.4.1-lxctx-PI-16384-GPTQ | bhenrym14 | 2023-07-10T21:44:47Z | 9 | 2 | transformers | [
"transformers",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"arxiv:2306.15595",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T02:16:21Z | ---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
Mostly untested!
# RoPE Scaled QLoRA Fine-tune of Llama-7b on airoboros-gpt4-1.4.1 (GPTQ)
## Overview
This is [Jon Durbin's Airoboros 7B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4) (GPTQ Quantization) with several key modifications:
- Context length extended to 16384 by RoPE Scaled Embeddings.
- The Llama-7b base model is pretrained for additional 150 steps on 16384 length sequences for the pile dataset.
- During the fine-tune, training sequences beyond 2048 have the target truncated to equal 2048.
- Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
- **This is a QLoRA fine-tune**
Pretraining and fine-tuning each took ~8 hours on 1x RTX 6000 Ada.
## How to Use
For context beyond 8192 tokens, do NOT use exllama. AutoGPTQ appears to work.
REQUIRED: you'll need to patch in the appropriate RoPE scaling module. see: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch-16k.py). You will need to call `replace_llama_rope_with_scaled_rope` in ooba somewhere. Calling this at the top of the training module after the imports works for me.
Hopefully there is a quick fix to exllama that can make >8k work soon.
Otherwise for context <8k. Use exllama. Set `max_seq_len` to 16384, and `compress_pos_emb` to 8.
## Motivation
Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [meta AI)](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. Finetuning has shown to be necessary to properly leverage the longer context. Here I attempt to take a smaller model and extend the context to 16k tokens. This, however, proved problematic as stability suffered in the 8-10k+ range. The Meta paper demonstrated that decreasing perplexities can still be acheived at these context lengths; however, their approach involved tuning all variables on the maximum sequence length after incorporating the RoPE scaling adjustment.
Here I saught to stabilize the performance at longer contexts by fine-tuning in two stages:
1. Fine-tune on 16k token sequences from the Pile dataset. This is very compute intensive.
2. Fine-tune on the airoboros dataset as I have done in prior models.
The results follow.
## Relative Performance (perplexity)
| Context (tokens) | bhenrym14/airo-7b-lxctx-PI-16384-fp16 | No Pretrain | airoboros-7b-gpt4-1.4 |
| ---| ------- | ------ | ---|
| 64 | 29.39 |32.28 | **25.90** |
| 128 | 18.80 |20.43 | **15.70** |
| 256 | 13.67 |14.60 | **11.33** |
| 512 | 10.60 |11.20 | **8.60** |
| 1024 | 8.54 | 8.945 | **7.07** |
| 2048 | 7.34 | 7.71 | **6.2** |
| 4096 | **6.81** | 7.25 | 48.65 |
| 8192 | **6.49** | 7.07 | 1018 |
| 16384 | **6.31** | 8.70 | very big |
- The pretraining successfuly ameliorates the rise in perplexity between 8192 and 16284. Not only that, it outperforms it everywhere.
- For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature. The gap shrinks with context length, with the original becoming incoherent beyond this point.
- This comparison isn't perfect. I did use the 1.4.1 dataset and the finetuning method is different (QLoRA vs full). In short, there are other potentially influential variables responsible for these performance differences.
## Quantization
Quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True).
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 7B GPT4 1.4
__mostly untested, use if you want, or wait for some validation__
## Overview
This is a __full__ (not qlora) fine-tune 7b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially. |
1aurent/dqn-SpaceInvadersNoFrameskip-v4 | 1aurent | 2023-07-10T21:36:15Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T21:35:41Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 511.50 +/- 177.34
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 1aurent -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga 1aurent -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga 1aurent
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.5 | jordyvl | 2023-07-10T21:19:10Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T20:39:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Accuracy: 0.85
- Brier Loss: 0.2593
- Nll: 0.9223
- F1 Micro: 0.85
- F1 Macro: 0.8392
- Ece: 0.1994
- Aurc: 0.0457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.5608 | 0.225 | 0.8774 | 5.2159 | 0.225 | 0.1397 | 0.2725 | 0.7038 |
| No log | 2.0 | 14 | 1.2539 | 0.415 | 0.7531 | 3.2673 | 0.415 | 0.2434 | 0.3070 | 0.4078 |
| No log | 3.0 | 21 | 0.9055 | 0.585 | 0.5971 | 1.9093 | 0.585 | 0.5086 | 0.3232 | 0.2172 |
| No log | 4.0 | 28 | 0.7122 | 0.72 | 0.4403 | 1.7693 | 0.72 | 0.6805 | 0.3073 | 0.1228 |
| No log | 5.0 | 35 | 0.6584 | 0.74 | 0.3938 | 1.5810 | 0.74 | 0.7214 | 0.2661 | 0.1075 |
| No log | 6.0 | 42 | 0.5711 | 0.8 | 0.3462 | 1.4146 | 0.8000 | 0.7524 | 0.2347 | 0.0843 |
| No log | 7.0 | 49 | 0.5521 | 0.8 | 0.3199 | 1.2631 | 0.8000 | 0.7867 | 0.2542 | 0.0634 |
| No log | 8.0 | 56 | 0.5603 | 0.77 | 0.3381 | 1.1808 | 0.7700 | 0.7680 | 0.2316 | 0.0858 |
| No log | 9.0 | 63 | 0.5209 | 0.82 | 0.3062 | 1.2891 | 0.82 | 0.7972 | 0.2405 | 0.0792 |
| No log | 10.0 | 70 | 0.5705 | 0.78 | 0.3343 | 1.5183 | 0.78 | 0.7743 | 0.2264 | 0.0874 |
| No log | 11.0 | 77 | 0.5137 | 0.82 | 0.3047 | 1.2987 | 0.82 | 0.8096 | 0.2420 | 0.0592 |
| No log | 12.0 | 84 | 0.4664 | 0.835 | 0.2929 | 1.1529 | 0.835 | 0.8101 | 0.2291 | 0.0753 |
| No log | 13.0 | 91 | 0.4772 | 0.82 | 0.2915 | 1.2078 | 0.82 | 0.8029 | 0.2131 | 0.0620 |
| No log | 14.0 | 98 | 0.4553 | 0.825 | 0.2843 | 1.1312 | 0.825 | 0.8112 | 0.2196 | 0.0453 |
| No log | 15.0 | 105 | 0.4574 | 0.825 | 0.2821 | 1.1234 | 0.825 | 0.8163 | 0.2241 | 0.0554 |
| No log | 16.0 | 112 | 0.4873 | 0.8 | 0.3111 | 1.2248 | 0.8000 | 0.8007 | 0.1992 | 0.0657 |
| No log | 17.0 | 119 | 0.4224 | 0.855 | 0.2620 | 1.1871 | 0.855 | 0.8218 | 0.2337 | 0.0479 |
| No log | 18.0 | 126 | 0.4414 | 0.825 | 0.2857 | 1.0723 | 0.825 | 0.8227 | 0.2500 | 0.0517 |
| No log | 19.0 | 133 | 0.4232 | 0.845 | 0.2737 | 0.9360 | 0.845 | 0.8219 | 0.2053 | 0.0543 |
| No log | 20.0 | 140 | 0.4114 | 0.845 | 0.2637 | 1.0046 | 0.845 | 0.8233 | 0.2144 | 0.0460 |
| No log | 21.0 | 147 | 0.4110 | 0.835 | 0.2640 | 0.9853 | 0.835 | 0.8160 | 0.2278 | 0.0466 |
| No log | 22.0 | 154 | 0.4163 | 0.845 | 0.2678 | 1.1494 | 0.845 | 0.8291 | 0.2156 | 0.0458 |
| No log | 23.0 | 161 | 0.4243 | 0.835 | 0.2779 | 0.9475 | 0.835 | 0.8269 | 0.2420 | 0.0554 |
| No log | 24.0 | 168 | 0.4079 | 0.835 | 0.2683 | 0.9249 | 0.835 | 0.8044 | 0.2091 | 0.0532 |
| No log | 25.0 | 175 | 0.4027 | 0.85 | 0.2621 | 0.9433 | 0.85 | 0.8361 | 0.2138 | 0.0530 |
| No log | 26.0 | 182 | 0.3975 | 0.855 | 0.2590 | 0.9310 | 0.855 | 0.8457 | 0.1932 | 0.0487 |
| No log | 27.0 | 189 | 0.4032 | 0.85 | 0.2650 | 0.9823 | 0.85 | 0.8425 | 0.2088 | 0.0528 |
| No log | 28.0 | 196 | 0.4037 | 0.845 | 0.2650 | 1.0692 | 0.845 | 0.8361 | 0.2157 | 0.0496 |
| No log | 29.0 | 203 | 0.4027 | 0.845 | 0.2652 | 1.0423 | 0.845 | 0.8295 | 0.1917 | 0.0502 |
| No log | 30.0 | 210 | 0.3989 | 0.85 | 0.2610 | 1.0633 | 0.85 | 0.8392 | 0.2214 | 0.0482 |
| No log | 31.0 | 217 | 0.3985 | 0.855 | 0.2609 | 1.0374 | 0.855 | 0.8424 | 0.2074 | 0.0472 |
| No log | 32.0 | 224 | 0.3986 | 0.85 | 0.2596 | 1.0403 | 0.85 | 0.8392 | 0.2184 | 0.0462 |
| No log | 33.0 | 231 | 0.3990 | 0.85 | 0.2603 | 1.0369 | 0.85 | 0.8392 | 0.2079 | 0.0470 |
| No log | 34.0 | 238 | 0.3982 | 0.85 | 0.2600 | 0.9765 | 0.85 | 0.8392 | 0.2160 | 0.0467 |
| No log | 35.0 | 245 | 0.3977 | 0.85 | 0.2601 | 0.9762 | 0.85 | 0.8392 | 0.2108 | 0.0465 |
| No log | 36.0 | 252 | 0.3977 | 0.85 | 0.2600 | 1.0372 | 0.85 | 0.8392 | 0.2075 | 0.0466 |
| No log | 37.0 | 259 | 0.3972 | 0.85 | 0.2597 | 1.0383 | 0.85 | 0.8392 | 0.2091 | 0.0465 |
| No log | 38.0 | 266 | 0.3967 | 0.85 | 0.2590 | 0.9796 | 0.85 | 0.8392 | 0.1987 | 0.0461 |
| No log | 39.0 | 273 | 0.3979 | 0.85 | 0.2601 | 1.0390 | 0.85 | 0.8392 | 0.1991 | 0.0467 |
| No log | 40.0 | 280 | 0.3976 | 0.85 | 0.2601 | 0.9775 | 0.85 | 0.8392 | 0.2175 | 0.0465 |
| No log | 41.0 | 287 | 0.3979 | 0.85 | 0.2603 | 0.9796 | 0.85 | 0.8392 | 0.1930 | 0.0467 |
| No log | 42.0 | 294 | 0.3973 | 0.85 | 0.2598 | 0.9746 | 0.85 | 0.8392 | 0.2175 | 0.0468 |
| No log | 43.0 | 301 | 0.3972 | 0.85 | 0.2598 | 0.9798 | 0.85 | 0.8392 | 0.1931 | 0.0466 |
| No log | 44.0 | 308 | 0.3969 | 0.85 | 0.2594 | 0.9784 | 0.85 | 0.8392 | 0.2094 | 0.0465 |
| No log | 45.0 | 315 | 0.3971 | 0.85 | 0.2596 | 0.9847 | 0.85 | 0.8392 | 0.2033 | 0.0464 |
| No log | 46.0 | 322 | 0.3969 | 0.85 | 0.2597 | 0.9768 | 0.85 | 0.8392 | 0.2100 | 0.0465 |
| No log | 47.0 | 329 | 0.3974 | 0.85 | 0.2599 | 0.9788 | 0.85 | 0.8392 | 0.2090 | 0.0467 |
| No log | 48.0 | 336 | 0.3971 | 0.85 | 0.2596 | 0.9797 | 0.85 | 0.8392 | 0.1977 | 0.0463 |
| No log | 49.0 | 343 | 0.3972 | 0.85 | 0.2597 | 0.9391 | 0.85 | 0.8392 | 0.1903 | 0.0465 |
| No log | 50.0 | 350 | 0.3969 | 0.85 | 0.2596 | 0.9802 | 0.85 | 0.8392 | 0.1985 | 0.0464 |
| No log | 51.0 | 357 | 0.3970 | 0.85 | 0.2596 | 0.9795 | 0.85 | 0.8392 | 0.2161 | 0.0463 |
| No log | 52.0 | 364 | 0.3973 | 0.85 | 0.2597 | 0.9333 | 0.85 | 0.8392 | 0.1983 | 0.0462 |
| No log | 53.0 | 371 | 0.3971 | 0.85 | 0.2597 | 0.9408 | 0.85 | 0.8392 | 0.2022 | 0.0467 |
| No log | 54.0 | 378 | 0.3970 | 0.85 | 0.2595 | 0.9371 | 0.85 | 0.8392 | 0.1992 | 0.0460 |
| No log | 55.0 | 385 | 0.3970 | 0.85 | 0.2596 | 0.9262 | 0.85 | 0.8392 | 0.1917 | 0.0464 |
| No log | 56.0 | 392 | 0.3971 | 0.85 | 0.2595 | 0.9195 | 0.85 | 0.8392 | 0.1927 | 0.0461 |
| No log | 57.0 | 399 | 0.3970 | 0.85 | 0.2596 | 0.9789 | 0.85 | 0.8392 | 0.1992 | 0.0462 |
| No log | 58.0 | 406 | 0.3968 | 0.85 | 0.2594 | 0.9255 | 0.85 | 0.8392 | 0.1929 | 0.0462 |
| No log | 59.0 | 413 | 0.3967 | 0.85 | 0.2593 | 0.9795 | 0.85 | 0.8392 | 0.1996 | 0.0459 |
| No log | 60.0 | 420 | 0.3970 | 0.85 | 0.2596 | 0.9787 | 0.85 | 0.8392 | 0.1994 | 0.0461 |
| No log | 61.0 | 427 | 0.3967 | 0.85 | 0.2594 | 0.9803 | 0.85 | 0.8392 | 0.2073 | 0.0461 |
| No log | 62.0 | 434 | 0.3968 | 0.85 | 0.2594 | 0.9325 | 0.85 | 0.8392 | 0.1996 | 0.0460 |
| No log | 63.0 | 441 | 0.3968 | 0.85 | 0.2595 | 0.9276 | 0.85 | 0.8392 | 0.2063 | 0.0459 |
| No log | 64.0 | 448 | 0.3968 | 0.85 | 0.2595 | 0.9247 | 0.85 | 0.8392 | 0.1991 | 0.0461 |
| No log | 65.0 | 455 | 0.3968 | 0.85 | 0.2595 | 0.9301 | 0.85 | 0.8392 | 0.1989 | 0.0459 |
| No log | 66.0 | 462 | 0.3968 | 0.85 | 0.2595 | 0.9310 | 0.85 | 0.8392 | 0.1922 | 0.0459 |
| No log | 67.0 | 469 | 0.3968 | 0.85 | 0.2595 | 0.9250 | 0.85 | 0.8392 | 0.2061 | 0.0459 |
| No log | 68.0 | 476 | 0.3968 | 0.85 | 0.2594 | 0.9234 | 0.85 | 0.8392 | 0.1994 | 0.0461 |
| No log | 69.0 | 483 | 0.3967 | 0.85 | 0.2594 | 0.9257 | 0.85 | 0.8392 | 0.2065 | 0.0459 |
| No log | 70.0 | 490 | 0.3967 | 0.85 | 0.2594 | 0.9205 | 0.85 | 0.8392 | 0.1840 | 0.0459 |
| No log | 71.0 | 497 | 0.3967 | 0.85 | 0.2594 | 0.9258 | 0.85 | 0.8392 | 0.2017 | 0.0458 |
| 0.1666 | 72.0 | 504 | 0.3969 | 0.85 | 0.2594 | 0.9297 | 0.85 | 0.8392 | 0.2017 | 0.0458 |
| 0.1666 | 73.0 | 511 | 0.3966 | 0.85 | 0.2593 | 0.9223 | 0.85 | 0.8392 | 0.1920 | 0.0457 |
| 0.1666 | 74.0 | 518 | 0.3967 | 0.85 | 0.2594 | 0.9228 | 0.85 | 0.8392 | 0.1920 | 0.0459 |
| 0.1666 | 75.0 | 525 | 0.3967 | 0.85 | 0.2594 | 0.9257 | 0.85 | 0.8392 | 0.1919 | 0.0459 |
| 0.1666 | 76.0 | 532 | 0.3966 | 0.85 | 0.2593 | 0.9232 | 0.85 | 0.8392 | 0.1994 | 0.0458 |
| 0.1666 | 77.0 | 539 | 0.3968 | 0.85 | 0.2594 | 0.9224 | 0.85 | 0.8392 | 0.1920 | 0.0459 |
| 0.1666 | 78.0 | 546 | 0.3966 | 0.85 | 0.2593 | 0.9242 | 0.85 | 0.8392 | 0.1918 | 0.0458 |
| 0.1666 | 79.0 | 553 | 0.3967 | 0.85 | 0.2594 | 0.9233 | 0.85 | 0.8392 | 0.1920 | 0.0459 |
| 0.1666 | 80.0 | 560 | 0.3968 | 0.85 | 0.2594 | 0.9241 | 0.85 | 0.8392 | 0.1919 | 0.0458 |
| 0.1666 | 81.0 | 567 | 0.3967 | 0.85 | 0.2594 | 0.9225 | 0.85 | 0.8392 | 0.1918 | 0.0459 |
| 0.1666 | 82.0 | 574 | 0.3967 | 0.85 | 0.2594 | 0.9233 | 0.85 | 0.8392 | 0.1919 | 0.0459 |
| 0.1666 | 83.0 | 581 | 0.3967 | 0.85 | 0.2593 | 0.9246 | 0.85 | 0.8392 | 0.1919 | 0.0458 |
| 0.1666 | 84.0 | 588 | 0.3966 | 0.85 | 0.2593 | 0.9229 | 0.85 | 0.8392 | 0.2017 | 0.0458 |
| 0.1666 | 85.0 | 595 | 0.3966 | 0.85 | 0.2593 | 0.9232 | 0.85 | 0.8392 | 0.2017 | 0.0458 |
| 0.1666 | 86.0 | 602 | 0.3967 | 0.85 | 0.2593 | 0.9225 | 0.85 | 0.8392 | 0.1920 | 0.0458 |
| 0.1666 | 87.0 | 609 | 0.3966 | 0.85 | 0.2593 | 0.9214 | 0.85 | 0.8392 | 0.1999 | 0.0458 |
| 0.1666 | 88.0 | 616 | 0.3967 | 0.85 | 0.2593 | 0.9214 | 0.85 | 0.8392 | 0.1920 | 0.0458 |
| 0.1666 | 89.0 | 623 | 0.3966 | 0.85 | 0.2593 | 0.9227 | 0.85 | 0.8392 | 0.2097 | 0.0458 |
| 0.1666 | 90.0 | 630 | 0.3967 | 0.85 | 0.2594 | 0.9219 | 0.85 | 0.8392 | 0.1919 | 0.0458 |
| 0.1666 | 91.0 | 637 | 0.3966 | 0.85 | 0.2593 | 0.9212 | 0.85 | 0.8392 | 0.1994 | 0.0458 |
| 0.1666 | 92.0 | 644 | 0.3966 | 0.85 | 0.2593 | 0.9227 | 0.85 | 0.8392 | 0.1919 | 0.0458 |
| 0.1666 | 93.0 | 651 | 0.3966 | 0.85 | 0.2593 | 0.9231 | 0.85 | 0.8392 | 0.2017 | 0.0458 |
| 0.1666 | 94.0 | 658 | 0.3967 | 0.85 | 0.2593 | 0.9220 | 0.85 | 0.8392 | 0.1919 | 0.0458 |
| 0.1666 | 95.0 | 665 | 0.3966 | 0.85 | 0.2593 | 0.9217 | 0.85 | 0.8392 | 0.1920 | 0.0457 |
| 0.1666 | 96.0 | 672 | 0.3966 | 0.85 | 0.2593 | 0.9218 | 0.85 | 0.8392 | 0.1920 | 0.0458 |
| 0.1666 | 97.0 | 679 | 0.3966 | 0.85 | 0.2593 | 0.9221 | 0.85 | 0.8392 | 0.1920 | 0.0458 |
| 0.1666 | 98.0 | 686 | 0.3966 | 0.85 | 0.2593 | 0.9224 | 0.85 | 0.8392 | 0.1920 | 0.0457 |
| 0.1666 | 99.0 | 693 | 0.3966 | 0.85 | 0.2593 | 0.9224 | 0.85 | 0.8392 | 0.1994 | 0.0457 |
| 0.1666 | 100.0 | 700 | 0.3966 | 0.85 | 0.2593 | 0.9223 | 0.85 | 0.8392 | 0.1994 | 0.0457 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-cased | ALazcanoG | 2023-07-10T21:09:53Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:ALazcanoG/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-07T23:46:21Z | ---
language:
- es
tags:
- generated_from_trainer
datasets:
- ALazcanoG/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the ALazcanoG/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3362
- Body Part Precision: 0.6830
- Body Part Recall: 0.7409
- Body Part F1: 0.7108
- Body Part Number: 413
- Disease Precision: 0.7439
- Disease Recall: 0.7446
- Disease F1: 0.7442
- Disease Number: 975
- Family Member Precision: 0.7941
- Family Member Recall: 0.9
- Family Member F1: 0.8438
- Family Member Number: 30
- Medication Precision: 0.8734
- Medication Recall: 0.7419
- Medication F1: 0.8023
- Medication Number: 93
- Procedure Precision: 0.6190
- Procedure Recall: 0.6270
- Procedure F1: 0.6230
- Procedure Number: 311
- Overall Precision: 0.7144
- Overall Recall: 0.7261
- Overall F1: 0.7202
- Overall Accuracy: 0.9175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4335 | 1.0 | 1004 | 0.3011 | 0.5944 | 0.7167 | 0.6498 | 413 | 0.7014 | 0.7036 | 0.7025 | 975 | 0.8 | 0.8 | 0.8000 | 30 | 0.7875 | 0.6774 | 0.7283 | 93 | 0.6007 | 0.5177 | 0.5561 | 311 | 0.6634 | 0.6751 | 0.6692 | 0.9063 |
| 0.2379 | 2.0 | 2008 | 0.2920 | 0.6995 | 0.7215 | 0.7104 | 413 | 0.7655 | 0.7097 | 0.7366 | 975 | 0.75 | 0.8 | 0.7742 | 30 | 0.7667 | 0.7419 | 0.7541 | 93 | 0.6094 | 0.6270 | 0.6181 | 311 | 0.7212 | 0.7014 | 0.7112 | 0.9140 |
| 0.1629 | 3.0 | 3012 | 0.3022 | 0.6674 | 0.7530 | 0.7076 | 413 | 0.7286 | 0.7241 | 0.7263 | 975 | 0.8571 | 0.8 | 0.8276 | 30 | 0.8519 | 0.7419 | 0.7931 | 93 | 0.5994 | 0.6495 | 0.6235 | 311 | 0.6975 | 0.7201 | 0.7086 | 0.9170 |
| 0.1143 | 4.0 | 4016 | 0.3362 | 0.6830 | 0.7409 | 0.7108 | 413 | 0.7439 | 0.7446 | 0.7442 | 975 | 0.7941 | 0.9 | 0.8438 | 30 | 0.8734 | 0.7419 | 0.8023 | 93 | 0.6190 | 0.6270 | 0.6230 | 311 | 0.7144 | 0.7261 | 0.7202 | 0.9175 |
| 0.0861 | 5.0 | 5020 | 0.3643 | 0.6806 | 0.7482 | 0.7128 | 413 | 0.7428 | 0.7436 | 0.7432 | 975 | 0.8182 | 0.9 | 0.8571 | 30 | 0.8831 | 0.7312 | 0.8000 | 93 | 0.5928 | 0.6367 | 0.6140 | 311 | 0.7081 | 0.7283 | 0.7181 | 0.9163 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ruggedmug/ppo-Huggy | ruggedmug | 2023-07-10T21:08:52Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-10T21:08:43Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ruggedmug/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO | skrl | 2023-07-10T21:06:55Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T19:47:47Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -8.89 +/- 10.3
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltScrew
type: IsaacGymEnvs-FactoryTaskNutBoltScrew
---
<!-- ---
torch: -21.51 +/- 14.99
jax: -35.77 +/- 0.39
numpy: -8.89 +/- 10.3
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltScrew
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 128 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 32 # 128 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
WALIDALI/bekinorrev | WALIDALI | 2023-07-10T21:00:29Z | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-10T20:57:08Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekinorrev Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.9 | jordyvl | 2023-07-10T20:38:28Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T19:59:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t2.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t2.5_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5446
- Accuracy: 0.85
- Brier Loss: 0.2446
- Nll: 1.0816
- F1 Micro: 0.85
- F1 Macro: 0.8348
- Ece: 0.1474
- Aurc: 0.0436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.1216 | 0.215 | 0.8751 | 5.2864 | 0.2150 | 0.1264 | 0.2697 | 0.6907 |
| No log | 2.0 | 14 | 1.7056 | 0.405 | 0.7400 | 3.5721 | 0.405 | 0.2275 | 0.2995 | 0.4011 |
| No log | 3.0 | 21 | 1.1857 | 0.62 | 0.5612 | 2.0143 | 0.62 | 0.5712 | 0.2994 | 0.2024 |
| No log | 4.0 | 28 | 0.8767 | 0.705 | 0.4085 | 1.6918 | 0.705 | 0.6436 | 0.2231 | 0.1152 |
| No log | 5.0 | 35 | 0.8620 | 0.72 | 0.3878 | 1.7931 | 0.72 | 0.7294 | 0.2233 | 0.1076 |
| No log | 6.0 | 42 | 0.7517 | 0.775 | 0.3252 | 1.5573 | 0.775 | 0.7600 | 0.1970 | 0.0790 |
| No log | 7.0 | 49 | 0.7280 | 0.79 | 0.3175 | 1.5140 | 0.79 | 0.7742 | 0.1903 | 0.0826 |
| No log | 8.0 | 56 | 0.6848 | 0.8 | 0.2942 | 1.4438 | 0.8000 | 0.7902 | 0.1828 | 0.0866 |
| No log | 9.0 | 63 | 0.6744 | 0.81 | 0.2889 | 1.4703 | 0.81 | 0.7969 | 0.1989 | 0.0692 |
| No log | 10.0 | 70 | 0.8432 | 0.74 | 0.3859 | 1.3134 | 0.74 | 0.7206 | 0.1959 | 0.1051 |
| No log | 11.0 | 77 | 0.7424 | 0.765 | 0.3294 | 1.5162 | 0.765 | 0.7792 | 0.2005 | 0.1048 |
| No log | 12.0 | 84 | 0.6953 | 0.79 | 0.3194 | 1.2233 | 0.79 | 0.7850 | 0.1800 | 0.0922 |
| No log | 13.0 | 91 | 0.5703 | 0.845 | 0.2538 | 1.2355 | 0.845 | 0.8372 | 0.1739 | 0.0447 |
| No log | 14.0 | 98 | 0.6439 | 0.795 | 0.2924 | 1.2777 | 0.795 | 0.7743 | 0.1771 | 0.0534 |
| No log | 15.0 | 105 | 0.5895 | 0.825 | 0.2650 | 1.2086 | 0.825 | 0.8071 | 0.1665 | 0.0566 |
| No log | 16.0 | 112 | 0.5973 | 0.81 | 0.2753 | 1.0959 | 0.81 | 0.8013 | 0.1839 | 0.0534 |
| No log | 17.0 | 119 | 0.5825 | 0.795 | 0.2722 | 1.1565 | 0.795 | 0.7886 | 0.1855 | 0.0534 |
| No log | 18.0 | 126 | 0.5854 | 0.845 | 0.2661 | 1.1223 | 0.845 | 0.8424 | 0.1981 | 0.0549 |
| No log | 19.0 | 133 | 0.5514 | 0.82 | 0.2553 | 0.9585 | 0.82 | 0.8150 | 0.1600 | 0.0481 |
| No log | 20.0 | 140 | 0.5600 | 0.835 | 0.2443 | 1.2692 | 0.835 | 0.8232 | 0.1657 | 0.0469 |
| No log | 21.0 | 147 | 0.5592 | 0.845 | 0.2473 | 1.1658 | 0.845 | 0.8331 | 0.1683 | 0.0493 |
| No log | 22.0 | 154 | 0.5507 | 0.845 | 0.2411 | 1.1403 | 0.845 | 0.8311 | 0.1797 | 0.0450 |
| No log | 23.0 | 161 | 0.5305 | 0.84 | 0.2361 | 1.1509 | 0.8400 | 0.8287 | 0.1650 | 0.0409 |
| No log | 24.0 | 168 | 0.5352 | 0.835 | 0.2378 | 1.2208 | 0.835 | 0.8201 | 0.1515 | 0.0420 |
| No log | 25.0 | 175 | 0.5425 | 0.845 | 0.2420 | 1.1208 | 0.845 | 0.8321 | 0.1776 | 0.0430 |
| No log | 26.0 | 182 | 0.5396 | 0.84 | 0.2409 | 1.1230 | 0.8400 | 0.8286 | 0.1647 | 0.0446 |
| No log | 27.0 | 189 | 0.5436 | 0.85 | 0.2401 | 1.1179 | 0.85 | 0.8387 | 0.1568 | 0.0427 |
| No log | 28.0 | 196 | 0.5373 | 0.835 | 0.2415 | 1.1092 | 0.835 | 0.8141 | 0.1641 | 0.0427 |
| No log | 29.0 | 203 | 0.5420 | 0.845 | 0.2436 | 1.0988 | 0.845 | 0.8326 | 0.1551 | 0.0444 |
| No log | 30.0 | 210 | 0.5413 | 0.845 | 0.2420 | 1.1064 | 0.845 | 0.8312 | 0.1486 | 0.0440 |
| No log | 31.0 | 217 | 0.5411 | 0.84 | 0.2418 | 1.1024 | 0.8400 | 0.8286 | 0.1565 | 0.0435 |
| No log | 32.0 | 224 | 0.5426 | 0.845 | 0.2429 | 1.0993 | 0.845 | 0.8322 | 0.1631 | 0.0433 |
| No log | 33.0 | 231 | 0.5424 | 0.85 | 0.2426 | 1.0989 | 0.85 | 0.8348 | 0.1615 | 0.0436 |
| No log | 34.0 | 238 | 0.5406 | 0.84 | 0.2419 | 1.0979 | 0.8400 | 0.8251 | 0.1640 | 0.0440 |
| No log | 35.0 | 245 | 0.5438 | 0.85 | 0.2436 | 1.0953 | 0.85 | 0.8348 | 0.1595 | 0.0438 |
| No log | 36.0 | 252 | 0.5429 | 0.85 | 0.2429 | 1.0970 | 0.85 | 0.8348 | 0.1495 | 0.0433 |
| No log | 37.0 | 259 | 0.5431 | 0.85 | 0.2427 | 1.0951 | 0.85 | 0.8348 | 0.1617 | 0.0435 |
| No log | 38.0 | 266 | 0.5424 | 0.85 | 0.2426 | 1.0959 | 0.85 | 0.8348 | 0.1587 | 0.0434 |
| No log | 39.0 | 273 | 0.5428 | 0.85 | 0.2432 | 1.0924 | 0.85 | 0.8348 | 0.1512 | 0.0433 |
| No log | 40.0 | 280 | 0.5437 | 0.85 | 0.2438 | 1.0911 | 0.85 | 0.8348 | 0.1726 | 0.0438 |
| No log | 41.0 | 287 | 0.5438 | 0.85 | 0.2434 | 1.0925 | 0.85 | 0.8348 | 0.1704 | 0.0433 |
| No log | 42.0 | 294 | 0.5428 | 0.85 | 0.2432 | 1.0927 | 0.85 | 0.8348 | 0.1585 | 0.0436 |
| No log | 43.0 | 301 | 0.5455 | 0.85 | 0.2443 | 1.0907 | 0.85 | 0.8348 | 0.1756 | 0.0437 |
| No log | 44.0 | 308 | 0.5427 | 0.85 | 0.2433 | 1.0908 | 0.85 | 0.8348 | 0.1616 | 0.0433 |
| No log | 45.0 | 315 | 0.5456 | 0.85 | 0.2446 | 1.0878 | 0.85 | 0.8348 | 0.1767 | 0.0437 |
| No log | 46.0 | 322 | 0.5439 | 0.85 | 0.2438 | 1.0895 | 0.85 | 0.8348 | 0.1503 | 0.0435 |
| No log | 47.0 | 329 | 0.5448 | 0.85 | 0.2443 | 1.0891 | 0.85 | 0.8348 | 0.1674 | 0.0439 |
| No log | 48.0 | 336 | 0.5440 | 0.85 | 0.2437 | 1.0898 | 0.85 | 0.8348 | 0.1768 | 0.0437 |
| No log | 49.0 | 343 | 0.5443 | 0.85 | 0.2441 | 1.0883 | 0.85 | 0.8348 | 0.1433 | 0.0432 |
| No log | 50.0 | 350 | 0.5449 | 0.85 | 0.2444 | 1.0877 | 0.85 | 0.8348 | 0.1722 | 0.0436 |
| No log | 51.0 | 357 | 0.5443 | 0.85 | 0.2442 | 1.0871 | 0.85 | 0.8348 | 0.1606 | 0.0434 |
| No log | 52.0 | 364 | 0.5453 | 0.85 | 0.2444 | 1.0865 | 0.85 | 0.8348 | 0.1729 | 0.0436 |
| No log | 53.0 | 371 | 0.5433 | 0.845 | 0.2438 | 1.0873 | 0.845 | 0.8287 | 0.1570 | 0.0434 |
| No log | 54.0 | 378 | 0.5453 | 0.85 | 0.2447 | 1.0854 | 0.85 | 0.8348 | 0.1606 | 0.0435 |
| No log | 55.0 | 385 | 0.5438 | 0.85 | 0.2439 | 1.0868 | 0.85 | 0.8348 | 0.1721 | 0.0434 |
| No log | 56.0 | 392 | 0.5455 | 0.85 | 0.2447 | 1.0853 | 0.85 | 0.8348 | 0.1710 | 0.0437 |
| No log | 57.0 | 399 | 0.5435 | 0.85 | 0.2439 | 1.0864 | 0.85 | 0.8348 | 0.1540 | 0.0434 |
| No log | 58.0 | 406 | 0.5451 | 0.85 | 0.2447 | 1.0844 | 0.85 | 0.8348 | 0.1636 | 0.0436 |
| No log | 59.0 | 413 | 0.5442 | 0.85 | 0.2441 | 1.0858 | 0.85 | 0.8348 | 0.1556 | 0.0435 |
| No log | 60.0 | 420 | 0.5453 | 0.85 | 0.2447 | 1.0843 | 0.85 | 0.8348 | 0.1717 | 0.0437 |
| No log | 61.0 | 427 | 0.5439 | 0.85 | 0.2442 | 1.0847 | 0.85 | 0.8348 | 0.1541 | 0.0432 |
| No log | 62.0 | 434 | 0.5455 | 0.85 | 0.2449 | 1.0839 | 0.85 | 0.8348 | 0.1550 | 0.0435 |
| No log | 63.0 | 441 | 0.5446 | 0.85 | 0.2445 | 1.0843 | 0.85 | 0.8348 | 0.1553 | 0.0435 |
| No log | 64.0 | 448 | 0.5448 | 0.85 | 0.2446 | 1.0833 | 0.85 | 0.8348 | 0.1634 | 0.0435 |
| No log | 65.0 | 455 | 0.5443 | 0.85 | 0.2443 | 1.0847 | 0.85 | 0.8348 | 0.1554 | 0.0435 |
| No log | 66.0 | 462 | 0.5448 | 0.85 | 0.2447 | 1.0831 | 0.85 | 0.8348 | 0.1547 | 0.0436 |
| No log | 67.0 | 469 | 0.5452 | 0.85 | 0.2448 | 1.0828 | 0.85 | 0.8348 | 0.1563 | 0.0436 |
| No log | 68.0 | 476 | 0.5443 | 0.85 | 0.2444 | 1.0834 | 0.85 | 0.8348 | 0.1472 | 0.0434 |
| No log | 69.0 | 483 | 0.5447 | 0.85 | 0.2445 | 1.0832 | 0.85 | 0.8348 | 0.1632 | 0.0434 |
| No log | 70.0 | 490 | 0.5447 | 0.85 | 0.2446 | 1.0831 | 0.85 | 0.8348 | 0.1559 | 0.0435 |
| No log | 71.0 | 497 | 0.5447 | 0.85 | 0.2446 | 1.0829 | 0.85 | 0.8348 | 0.1473 | 0.0435 |
| 0.1823 | 72.0 | 504 | 0.5443 | 0.85 | 0.2444 | 1.0828 | 0.85 | 0.8348 | 0.1559 | 0.0434 |
| 0.1823 | 73.0 | 511 | 0.5447 | 0.85 | 0.2447 | 1.0825 | 0.85 | 0.8348 | 0.1472 | 0.0434 |
| 0.1823 | 74.0 | 518 | 0.5444 | 0.85 | 0.2444 | 1.0829 | 0.85 | 0.8348 | 0.1559 | 0.0436 |
| 0.1823 | 75.0 | 525 | 0.5446 | 0.85 | 0.2445 | 1.0829 | 0.85 | 0.8348 | 0.1557 | 0.0435 |
| 0.1823 | 76.0 | 532 | 0.5448 | 0.85 | 0.2445 | 1.0825 | 0.85 | 0.8348 | 0.1559 | 0.0435 |
| 0.1823 | 77.0 | 539 | 0.5443 | 0.85 | 0.2444 | 1.0827 | 0.85 | 0.8348 | 0.1558 | 0.0435 |
| 0.1823 | 78.0 | 546 | 0.5446 | 0.85 | 0.2446 | 1.0824 | 0.85 | 0.8348 | 0.1560 | 0.0436 |
| 0.1823 | 79.0 | 553 | 0.5450 | 0.85 | 0.2448 | 1.0821 | 0.85 | 0.8348 | 0.1637 | 0.0436 |
| 0.1823 | 80.0 | 560 | 0.5447 | 0.85 | 0.2446 | 1.0823 | 0.85 | 0.8348 | 0.1638 | 0.0436 |
| 0.1823 | 81.0 | 567 | 0.5446 | 0.85 | 0.2446 | 1.0820 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 82.0 | 574 | 0.5447 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1561 | 0.0435 |
| 0.1823 | 83.0 | 581 | 0.5448 | 0.85 | 0.2446 | 1.0822 | 0.85 | 0.8348 | 0.1550 | 0.0436 |
| 0.1823 | 84.0 | 588 | 0.5445 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1551 | 0.0435 |
| 0.1823 | 85.0 | 595 | 0.5446 | 0.85 | 0.2446 | 1.0818 | 0.85 | 0.8348 | 0.1560 | 0.0436 |
| 0.1823 | 86.0 | 602 | 0.5446 | 0.85 | 0.2446 | 1.0818 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 87.0 | 609 | 0.5448 | 0.85 | 0.2447 | 1.0820 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 88.0 | 616 | 0.5447 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1551 | 0.0435 |
| 0.1823 | 89.0 | 623 | 0.5446 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 90.0 | 630 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1638 | 0.0436 |
| 0.1823 | 91.0 | 637 | 0.5446 | 0.85 | 0.2445 | 1.0817 | 0.85 | 0.8348 | 0.1474 | 0.0435 |
| 0.1823 | 92.0 | 644 | 0.5445 | 0.85 | 0.2445 | 1.0818 | 0.85 | 0.8348 | 0.1551 | 0.0436 |
| 0.1823 | 93.0 | 651 | 0.5447 | 0.85 | 0.2446 | 1.0818 | 0.85 | 0.8348 | 0.1560 | 0.0436 |
| 0.1823 | 94.0 | 658 | 0.5447 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1561 | 0.0436 |
| 0.1823 | 95.0 | 665 | 0.5447 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1550 | 0.0435 |
| 0.1823 | 96.0 | 672 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
| 0.1823 | 97.0 | 679 | 0.5446 | 0.85 | 0.2446 | 1.0817 | 0.85 | 0.8348 | 0.1551 | 0.0436 |
| 0.1823 | 98.0 | 686 | 0.5446 | 0.85 | 0.2446 | 1.0817 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
| 0.1823 | 99.0 | 693 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
| 0.1823 | 100.0 | 700 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO | skrl | 2023-07-10T20:15:49Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T19:47:18Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -38.54 +/- 17.49
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltPlace
type: IsaacGymEnvs-FactoryTaskNutBoltPlace
---
<!-- ---
torch: -38.54 +/- 17.49
jax: -60.9 +/- 0.84
numpy: -58.9 +/- 1.8
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltPlace
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 120 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 30 # 120 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
jliu596/Reinforce-cartpole | jliu596 | 2023-07-10T20:13:38Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T17:26:46Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
blzncz/segformer-finetuned-4ss1st3r_s3gs3m-10k-steps | blzncz | 2023-07-10T20:04:14Z | 188 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2023-07-10T10:49:12Z | ---
license: other
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-4ss1st3r_s3gs3m-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-4ss1st3r_s3gs3m-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the blzncz/4ss1st3r_s3gs3m dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Mean Iou: 0.5967
- Mean Accuracy: 0.8460
- Overall Accuracy: 0.9344
- Accuracy Bg: nan
- Accuracy Fallo cohesivo: 0.9510
- Accuracy Fallo malla: 0.8524
- Accuracy Fallo adhesivo: 0.9362
- Accuracy Fallo burbuja: 0.6444
- Iou Bg: 0.0
- Iou Fallo cohesivo: 0.9239
- Iou Fallo malla: 0.7125
- Iou Fallo adhesivo: 0.8335
- Iou Fallo burbuja: 0.5139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bg | Accuracy Fallo cohesivo | Accuracy Fallo malla | Accuracy Fallo adhesivo | Accuracy Fallo burbuja | Iou Bg | Iou Fallo cohesivo | Iou Fallo malla | Iou Fallo adhesivo | Iou Fallo burbuja |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:-----------------------:|:--------------------:|:-----------------------:|:----------------------:|:------:|:------------------:|:---------------:|:------------------:|:-----------------:|
| 0.4796 | 1.0 | 133 | 0.4190 | 0.4518 | 0.6689 | 0.9049 | nan | 0.9277 | 0.8091 | 0.9381 | 0.0008 | 0.0 | 0.8866 | 0.6536 | 0.7179 | 0.0008 |
| 0.2665 | 2.0 | 266 | 0.3667 | 0.5096 | 0.7283 | 0.9001 | nan | 0.9111 | 0.8964 | 0.8731 | 0.2324 | 0.0 | 0.8802 | 0.6013 | 0.8467 | 0.2197 |
| 0.2158 | 3.0 | 399 | 0.3210 | 0.5505 | 0.7807 | 0.9142 | nan | 0.9250 | 0.8685 | 0.9414 | 0.3878 | 0.0 | 0.8952 | 0.6239 | 0.8901 | 0.3432 |
| 0.1737 | 4.0 | 532 | 0.3572 | 0.5370 | 0.7851 | 0.8905 | nan | 0.8905 | 0.9102 | 0.9121 | 0.4277 | 0.0 | 0.8671 | 0.5637 | 0.8777 | 0.3764 |
| 0.1602 | 5.0 | 665 | 0.6273 | 0.4086 | 0.7632 | 0.7743 | nan | 0.7333 | 0.9343 | 0.9685 | 0.4168 | 0.0 | 0.7198 | 0.4460 | 0.5324 | 0.3449 |
| 0.1707 | 6.0 | 798 | 0.3534 | 0.5442 | 0.7953 | 0.9025 | nan | 0.9056 | 0.9031 | 0.9234 | 0.4492 | 0.0 | 0.8812 | 0.5985 | 0.8629 | 0.3783 |
| 0.1376 | 7.0 | 931 | 0.3266 | 0.5513 | 0.7634 | 0.9262 | nan | 0.9434 | 0.8621 | 0.9288 | 0.3195 | 0.0 | 0.9109 | 0.6623 | 0.8866 | 0.2968 |
| 0.1346 | 8.0 | 1064 | 0.4976 | 0.4916 | 0.7900 | 0.8396 | nan | 0.8190 | 0.9133 | 0.9713 | 0.4565 | 0.0 | 0.8041 | 0.4662 | 0.7906 | 0.3970 |
| 0.1319 | 9.0 | 1197 | 0.3650 | 0.5652 | 0.8404 | 0.9043 | nan | 0.9053 | 0.8856 | 0.9593 | 0.6113 | 0.0 | 0.8829 | 0.5992 | 0.8734 | 0.4706 |
| 0.1229 | 10.0 | 1330 | 0.3201 | 0.5666 | 0.7963 | 0.9299 | nan | 0.9435 | 0.8764 | 0.9389 | 0.4265 | 0.0 | 0.9171 | 0.6896 | 0.8499 | 0.3763 |
| 0.1142 | 11.0 | 1463 | 0.3824 | 0.5576 | 0.8204 | 0.9020 | nan | 0.8988 | 0.9231 | 0.9456 | 0.5142 | 0.0 | 0.8795 | 0.6001 | 0.8711 | 0.4374 |
| 0.0983 | 12.0 | 1596 | 0.3133 | 0.5812 | 0.8297 | 0.9293 | nan | 0.9354 | 0.9046 | 0.9558 | 0.5229 | 0.0 | 0.9136 | 0.6969 | 0.8618 | 0.4335 |
| 0.1058 | 13.0 | 1729 | 0.2965 | 0.5860 | 0.8250 | 0.9364 | nan | 0.9528 | 0.8496 | 0.9598 | 0.5378 | 0.0 | 0.9253 | 0.7162 | 0.8502 | 0.4383 |
| 0.1052 | 14.0 | 1862 | 0.2839 | 0.6064 | 0.8275 | 0.9460 | nan | 0.9674 | 0.8517 | 0.9290 | 0.5621 | 0.0 | 0.9355 | 0.7492 | 0.8930 | 0.4540 |
| 0.0911 | 15.0 | 1995 | 0.3245 | 0.5853 | 0.8116 | 0.9368 | nan | 0.9565 | 0.8504 | 0.9298 | 0.5099 | 0.0 | 0.9243 | 0.7171 | 0.8534 | 0.4318 |
| 0.0889 | 16.0 | 2128 | 0.3094 | 0.5969 | 0.8225 | 0.9422 | nan | 0.9615 | 0.8559 | 0.9376 | 0.5351 | 0.0 | 0.9313 | 0.7353 | 0.8726 | 0.4451 |
| 0.0827 | 17.0 | 2261 | 0.4776 | 0.5187 | 0.8195 | 0.8547 | nan | 0.8390 | 0.9163 | 0.9440 | 0.5786 | 0.0 | 0.8207 | 0.4920 | 0.8216 | 0.4590 |
| 0.0939 | 18.0 | 2394 | 0.3923 | 0.5364 | 0.8375 | 0.8948 | nan | 0.8950 | 0.8831 | 0.9437 | 0.6282 | 0.0 | 0.8746 | 0.6268 | 0.7090 | 0.4717 |
| 0.0799 | 19.0 | 2527 | 0.3560 | 0.5776 | 0.8252 | 0.9254 | nan | 0.9337 | 0.8933 | 0.9409 | 0.5331 | 0.0 | 0.9096 | 0.6846 | 0.8519 | 0.4422 |
| 0.075 | 20.0 | 2660 | 0.3803 | 0.5796 | 0.8338 | 0.9194 | nan | 0.9249 | 0.9078 | 0.9238 | 0.5788 | 0.0 | 0.9032 | 0.6459 | 0.8821 | 0.4670 |
| 0.0844 | 21.0 | 2793 | 0.2885 | 0.6170 | 0.8334 | 0.9507 | nan | 0.9757 | 0.8296 | 0.9390 | 0.5892 | 0.0 | 0.9412 | 0.7654 | 0.8933 | 0.4852 |
| 0.0746 | 22.0 | 2926 | 0.3222 | 0.5831 | 0.8160 | 0.9331 | nan | 0.9481 | 0.8685 | 0.9370 | 0.5105 | 0.0 | 0.9193 | 0.7032 | 0.8716 | 0.4215 |
| 0.072 | 23.0 | 3059 | 0.3481 | 0.5878 | 0.8336 | 0.9266 | nan | 0.9357 | 0.8952 | 0.9271 | 0.5764 | 0.0 | 0.9123 | 0.6824 | 0.8720 | 0.4725 |
| 0.0735 | 24.0 | 3192 | 0.3196 | 0.5974 | 0.8403 | 0.9353 | nan | 0.9496 | 0.8666 | 0.9430 | 0.6018 | 0.0 | 0.9225 | 0.7165 | 0.8649 | 0.4832 |
| 0.0674 | 25.0 | 3325 | 0.3407 | 0.5927 | 0.8435 | 0.9282 | nan | 0.9401 | 0.8786 | 0.9246 | 0.6304 | 0.0 | 0.9141 | 0.6844 | 0.8696 | 0.4953 |
| 0.0712 | 26.0 | 3458 | 0.3356 | 0.5906 | 0.8420 | 0.9301 | nan | 0.9405 | 0.8895 | 0.9299 | 0.6080 | 0.0 | 0.9160 | 0.6905 | 0.8743 | 0.4722 |
| 0.072 | 27.0 | 3591 | 0.3491 | 0.5833 | 0.8372 | 0.9286 | nan | 0.9415 | 0.8636 | 0.9425 | 0.6012 | 0.0 | 0.9161 | 0.6966 | 0.8246 | 0.4790 |
| 0.0641 | 28.0 | 3724 | 0.3130 | 0.6087 | 0.8422 | 0.9473 | nan | 0.9697 | 0.8357 | 0.9427 | 0.6208 | 0.0 | 0.9386 | 0.7613 | 0.8599 | 0.4837 |
| 0.0597 | 29.0 | 3857 | 0.3828 | 0.5666 | 0.8394 | 0.9107 | nan | 0.9141 | 0.8934 | 0.9411 | 0.6092 | 0.0 | 0.8924 | 0.6327 | 0.8343 | 0.4735 |
| 0.0648 | 30.0 | 3990 | 0.3435 | 0.6001 | 0.8372 | 0.9403 | nan | 0.9569 | 0.8708 | 0.9276 | 0.5935 | 0.0 | 0.9292 | 0.7312 | 0.8779 | 0.4623 |
| 0.0618 | 31.0 | 4123 | 0.3531 | 0.5963 | 0.8521 | 0.9303 | nan | 0.9450 | 0.8621 | 0.9240 | 0.6773 | 0.0 | 0.9179 | 0.6842 | 0.8730 | 0.5063 |
| 0.0556 | 32.0 | 4256 | 0.3307 | 0.6037 | 0.8417 | 0.9401 | nan | 0.9576 | 0.8637 | 0.9271 | 0.6183 | 0.0 | 0.9298 | 0.7274 | 0.8637 | 0.4974 |
| 0.0616 | 33.0 | 4389 | 0.3510 | 0.5911 | 0.8347 | 0.9298 | nan | 0.9424 | 0.8714 | 0.9388 | 0.5863 | 0.0 | 0.9158 | 0.6914 | 0.8745 | 0.4740 |
| 0.0603 | 34.0 | 4522 | 0.3467 | 0.6022 | 0.8544 | 0.9334 | nan | 0.9487 | 0.8610 | 0.9274 | 0.6807 | 0.0 | 0.9211 | 0.7029 | 0.8738 | 0.5130 |
| 0.0587 | 35.0 | 4655 | 0.3574 | 0.6017 | 0.8407 | 0.9379 | nan | 0.9555 | 0.8541 | 0.9346 | 0.6187 | 0.0 | 0.9269 | 0.7228 | 0.8627 | 0.4962 |
| 0.0557 | 36.0 | 4788 | 0.3871 | 0.5720 | 0.8334 | 0.9178 | nan | 0.9317 | 0.8416 | 0.9374 | 0.6228 | 0.0 | 0.9051 | 0.6479 | 0.8160 | 0.4911 |
| 0.0567 | 37.0 | 4921 | 0.4425 | 0.5656 | 0.8282 | 0.9070 | nan | 0.9114 | 0.8922 | 0.9244 | 0.5848 | 0.0 | 0.8889 | 0.6100 | 0.8575 | 0.4718 |
| 0.0537 | 38.0 | 5054 | 0.3512 | 0.5946 | 0.8392 | 0.9317 | nan | 0.9463 | 0.8649 | 0.9314 | 0.6142 | 0.0 | 0.9187 | 0.6984 | 0.8637 | 0.4921 |
| 0.0559 | 39.0 | 5187 | 0.3676 | 0.5931 | 0.8437 | 0.9273 | nan | 0.9381 | 0.8798 | 0.9323 | 0.6247 | 0.0 | 0.9129 | 0.6779 | 0.8786 | 0.4959 |
| 0.0502 | 40.0 | 5320 | 0.4149 | 0.5518 | 0.8381 | 0.8984 | nan | 0.9011 | 0.8773 | 0.9368 | 0.6370 | 0.0 | 0.8793 | 0.6069 | 0.7741 | 0.4989 |
| 0.0559 | 41.0 | 5453 | 0.4042 | 0.5694 | 0.8342 | 0.9130 | nan | 0.9206 | 0.8721 | 0.9400 | 0.6041 | 0.0 | 0.8971 | 0.6319 | 0.8286 | 0.4896 |
| 0.0523 | 42.0 | 5586 | 0.3669 | 0.5903 | 0.8462 | 0.9286 | nan | 0.9414 | 0.8676 | 0.9337 | 0.6421 | 0.0 | 0.9162 | 0.6883 | 0.8370 | 0.5102 |
| 0.0525 | 43.0 | 5719 | 0.4140 | 0.5704 | 0.8531 | 0.9081 | nan | 0.9110 | 0.8867 | 0.9417 | 0.6729 | 0.0 | 0.8898 | 0.6220 | 0.8366 | 0.5035 |
| 0.0508 | 44.0 | 5852 | 0.3965 | 0.5714 | 0.8396 | 0.9141 | nan | 0.9227 | 0.8800 | 0.9147 | 0.6409 | 0.0 | 0.8989 | 0.6513 | 0.8007 | 0.5060 |
| 0.0507 | 45.0 | 5985 | 0.3793 | 0.5817 | 0.8392 | 0.9196 | nan | 0.9272 | 0.8932 | 0.9214 | 0.6148 | 0.0 | 0.9042 | 0.6627 | 0.8407 | 0.5011 |
| 0.0494 | 46.0 | 6118 | 0.3500 | 0.6020 | 0.8426 | 0.9363 | nan | 0.9524 | 0.8619 | 0.9322 | 0.6240 | 0.0 | 0.9247 | 0.7142 | 0.8653 | 0.5058 |
| 0.0462 | 47.0 | 6251 | 0.3524 | 0.6031 | 0.8435 | 0.9388 | nan | 0.9545 | 0.8668 | 0.9364 | 0.6163 | 0.0 | 0.9274 | 0.7269 | 0.8703 | 0.4909 |
| 0.0486 | 48.0 | 6384 | 0.3876 | 0.5902 | 0.8397 | 0.9308 | nan | 0.9479 | 0.8557 | 0.9161 | 0.6392 | 0.0 | 0.9203 | 0.6928 | 0.8334 | 0.5046 |
| 0.0461 | 49.0 | 6517 | 0.3674 | 0.5933 | 0.8409 | 0.9326 | nan | 0.9482 | 0.8622 | 0.9258 | 0.6274 | 0.0 | 0.9214 | 0.7053 | 0.8367 | 0.5030 |
| 0.0497 | 50.0 | 6650 | 0.4018 | 0.5838 | 0.8374 | 0.9246 | nan | 0.9390 | 0.8519 | 0.9341 | 0.6244 | 0.0 | 0.9102 | 0.6733 | 0.8361 | 0.4992 |
| 0.0491 | 51.0 | 6783 | 0.4036 | 0.5824 | 0.8513 | 0.9198 | nan | 0.9272 | 0.8805 | 0.9403 | 0.6573 | 0.0 | 0.9037 | 0.6712 | 0.8169 | 0.5203 |
| 0.046 | 52.0 | 6916 | 0.3913 | 0.5820 | 0.8395 | 0.9243 | nan | 0.9347 | 0.8771 | 0.9336 | 0.6126 | 0.0 | 0.9105 | 0.6792 | 0.8244 | 0.4960 |
| 0.0488 | 53.0 | 7049 | 0.3441 | 0.6010 | 0.8504 | 0.9362 | nan | 0.9523 | 0.8521 | 0.9457 | 0.6517 | 0.0 | 0.9250 | 0.7225 | 0.8496 | 0.5081 |
| 0.0458 | 54.0 | 7182 | 0.3784 | 0.5977 | 0.8382 | 0.9378 | nan | 0.9603 | 0.8212 | 0.9375 | 0.6337 | 0.0 | 0.9286 | 0.7157 | 0.8387 | 0.5053 |
| 0.0449 | 55.0 | 7315 | 0.3506 | 0.6068 | 0.8493 | 0.9404 | nan | 0.9579 | 0.8554 | 0.9385 | 0.6456 | 0.0 | 0.9300 | 0.7357 | 0.8549 | 0.5132 |
| 0.0482 | 56.0 | 7448 | 0.4005 | 0.5819 | 0.8414 | 0.9249 | nan | 0.9374 | 0.8642 | 0.9337 | 0.6303 | 0.0 | 0.9119 | 0.6831 | 0.8139 | 0.5006 |
| 0.0434 | 57.0 | 7581 | 0.3749 | 0.5914 | 0.8465 | 0.9294 | nan | 0.9423 | 0.8675 | 0.9339 | 0.6421 | 0.0 | 0.9171 | 0.6999 | 0.8265 | 0.5134 |
| 0.0435 | 58.0 | 7714 | 0.4195 | 0.5722 | 0.8400 | 0.9172 | nan | 0.9274 | 0.8700 | 0.9234 | 0.6392 | 0.0 | 0.9025 | 0.6588 | 0.7954 | 0.5044 |
| 0.0442 | 59.0 | 7847 | 0.3975 | 0.5828 | 0.8407 | 0.9257 | nan | 0.9398 | 0.8563 | 0.9312 | 0.6356 | 0.0 | 0.9134 | 0.6866 | 0.8103 | 0.5037 |
| 0.0442 | 60.0 | 7980 | 0.3845 | 0.5929 | 0.8457 | 0.9315 | nan | 0.9459 | 0.8603 | 0.9363 | 0.6404 | 0.0 | 0.9193 | 0.7041 | 0.8308 | 0.5103 |
| 0.0422 | 61.0 | 8113 | 0.3875 | 0.5963 | 0.8465 | 0.9338 | nan | 0.9489 | 0.8616 | 0.9340 | 0.6413 | 0.0 | 0.9226 | 0.7135 | 0.8381 | 0.5072 |
| 0.0436 | 62.0 | 8246 | 0.3859 | 0.6022 | 0.8497 | 0.9385 | nan | 0.9566 | 0.8477 | 0.9382 | 0.6562 | 0.0 | 0.9289 | 0.7300 | 0.8376 | 0.5147 |
| 0.0429 | 63.0 | 8379 | 0.3857 | 0.5956 | 0.8425 | 0.9357 | nan | 0.9534 | 0.8481 | 0.9357 | 0.6327 | 0.0 | 0.9249 | 0.7233 | 0.8283 | 0.5016 |
| 0.0446 | 64.0 | 8512 | 0.3778 | 0.5976 | 0.8495 | 0.9343 | nan | 0.9492 | 0.8602 | 0.9399 | 0.6489 | 0.0 | 0.9232 | 0.7191 | 0.8305 | 0.5153 |
| 0.0429 | 65.0 | 8645 | 0.3889 | 0.5948 | 0.8478 | 0.9330 | nan | 0.9490 | 0.8548 | 0.9325 | 0.6549 | 0.0 | 0.9225 | 0.7075 | 0.8271 | 0.5167 |
| 0.0454 | 66.0 | 8778 | 0.3915 | 0.5941 | 0.8470 | 0.9329 | nan | 0.9490 | 0.8571 | 0.9271 | 0.6547 | 0.0 | 0.9221 | 0.7087 | 0.8278 | 0.5117 |
| 0.0427 | 67.0 | 8911 | 0.3924 | 0.5967 | 0.8455 | 0.9349 | nan | 0.9518 | 0.8520 | 0.9350 | 0.6433 | 0.0 | 0.9247 | 0.7167 | 0.8290 | 0.5133 |
| 0.0425 | 68.0 | 9044 | 0.3990 | 0.5992 | 0.8491 | 0.9358 | nan | 0.9524 | 0.8545 | 0.9355 | 0.6541 | 0.0 | 0.9250 | 0.7187 | 0.8387 | 0.5136 |
| 0.0429 | 69.0 | 9177 | 0.3911 | 0.5909 | 0.8499 | 0.9303 | nan | 0.9451 | 0.8532 | 0.9394 | 0.6619 | 0.0 | 0.9192 | 0.7029 | 0.8178 | 0.5146 |
| 0.0465 | 70.0 | 9310 | 0.3840 | 0.5977 | 0.8481 | 0.9332 | nan | 0.9473 | 0.8700 | 0.9278 | 0.6473 | 0.0 | 0.9215 | 0.7079 | 0.8480 | 0.5110 |
| 0.0436 | 71.0 | 9443 | 0.3862 | 0.5974 | 0.8456 | 0.9351 | nan | 0.9518 | 0.8534 | 0.9359 | 0.6413 | 0.0 | 0.9248 | 0.7162 | 0.8338 | 0.5124 |
| 0.0435 | 72.0 | 9576 | 0.3926 | 0.5952 | 0.8448 | 0.9328 | nan | 0.9484 | 0.8585 | 0.9318 | 0.6405 | 0.0 | 0.9217 | 0.7073 | 0.8386 | 0.5084 |
| 0.0421 | 73.0 | 9709 | 0.3961 | 0.5984 | 0.8467 | 0.9348 | nan | 0.9513 | 0.8564 | 0.9309 | 0.6482 | 0.0 | 0.9243 | 0.7119 | 0.8414 | 0.5143 |
| 0.0409 | 74.0 | 9842 | 0.3973 | 0.5982 | 0.8494 | 0.9341 | nan | 0.9498 | 0.8596 | 0.9306 | 0.6578 | 0.0 | 0.9233 | 0.7094 | 0.8401 | 0.5181 |
| 0.041 | 75.0 | 9975 | 0.3898 | 0.5963 | 0.8476 | 0.9335 | nan | 0.9493 | 0.8561 | 0.9354 | 0.6498 | 0.0 | 0.9227 | 0.7108 | 0.8329 | 0.5153 |
| 0.0436 | 75.19 | 10000 | 0.3966 | 0.5967 | 0.8460 | 0.9344 | nan | 0.9510 | 0.8524 | 0.9362 | 0.6444 | 0.0 | 0.9239 | 0.7125 | 0.8335 | 0.5139 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-cocnat-aochildes-mod-sub-length-10k | NasimB | 2023-07-10T19:27:45Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T17:32:01Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-aochildes-mod-sub-length-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-aochildes-mod-sub-length-10k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6933 | 0.29 | 500 | 5.6341 |
| 5.3469 | 0.59 | 1000 | 5.1996 |
| 4.9864 | 0.88 | 1500 | 4.9580 |
| 4.7189 | 1.18 | 2000 | 4.8083 |
| 4.5609 | 1.47 | 2500 | 4.6850 |
| 4.4523 | 1.77 | 3000 | 4.5821 |
| 4.317 | 2.06 | 3500 | 4.5146 |
| 4.1329 | 2.35 | 4000 | 4.4652 |
| 4.1086 | 2.65 | 4500 | 4.4071 |
| 4.0635 | 2.94 | 5000 | 4.3601 |
| 3.8482 | 3.24 | 5500 | 4.3553 |
| 3.8055 | 3.53 | 6000 | 4.3282 |
| 3.7859 | 3.83 | 6500 | 4.2926 |
| 3.6619 | 4.12 | 7000 | 4.2970 |
| 3.5196 | 4.41 | 7500 | 4.2933 |
| 3.5139 | 4.71 | 8000 | 4.2857 |
| 3.4905 | 5.0 | 8500 | 4.2710 |
| 3.3203 | 5.3 | 9000 | 4.2871 |
| 3.322 | 5.59 | 9500 | 4.2867 |
| 3.3172 | 5.89 | 10000 | 4.2863 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
simonestradasch/COMPner2-bert-base-spanish-wwm-cased | simonestradasch | 2023-07-10T19:24:22Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:simonestradasch/NERcomp2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-10T18:55:43Z | ---
language:
- es
tags:
- generated_from_trainer
datasets:
- simonestradasch/NERcomp2
model-index:
- name: COMPner2-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COMPner2-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the simonestradasch/NERcomp2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2843
- Body Part Precision: 0.6644
- Body Part Recall: 0.7143
- Body Part F1: 0.6884
- Body Part Number: 413
- Disease Precision: 0.7251
- Disease Recall: 0.7303
- Disease F1: 0.7276
- Disease Number: 975
- Family Member Precision: 0.8065
- Family Member Recall: 0.8333
- Family Member F1: 0.8197
- Family Member Number: 30
- Medication Precision: 0.7778
- Medication Recall: 0.6774
- Medication F1: 0.7241
- Medication Number: 93
- Procedure Precision: 0.5763
- Procedure Recall: 0.5949
- Procedure F1: 0.5854
- Procedure Number: 311
- Overall Precision: 0.6885
- Overall Recall: 0.7025
- Overall F1: 0.6955
- Overall Accuracy: 0.9146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4243 | 1.0 | 1004 | 0.2935 | 0.5910 | 0.6998 | 0.6408 | 413 | 0.6784 | 0.6944 | 0.6863 | 975 | 0.8 | 0.8 | 0.8000 | 30 | 0.6882 | 0.6882 | 0.6882 | 93 | 0.6050 | 0.5466 | 0.5743 | 311 | 0.6473 | 0.6718 | 0.6593 | 0.9052 |
| 0.2348 | 2.0 | 2008 | 0.2843 | 0.6644 | 0.7143 | 0.6884 | 413 | 0.7251 | 0.7303 | 0.7276 | 975 | 0.8065 | 0.8333 | 0.8197 | 30 | 0.7778 | 0.6774 | 0.7241 | 93 | 0.5763 | 0.5949 | 0.5854 | 311 | 0.6885 | 0.7025 | 0.6955 | 0.9146 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/afriberta-small-finetuned-hausa | grace-pro | 2023-07-10T19:21:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-10T18:52:05Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-small-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-small-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Precision: 0.6873
- Recall: 0.4713
- F1: 0.5592
- Accuracy: 0.9618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1493 | 1.0 | 2624 | 0.1382 | 0.6423 | 0.3968 | 0.4905 | 0.9572 |
| 0.1259 | 2.0 | 5248 | 0.1319 | 0.6734 | 0.4415 | 0.5333 | 0.9603 |
| 0.106 | 3.0 | 7872 | 0.1385 | 0.6908 | 0.4502 | 0.5452 | 0.9611 |
| 0.0949 | 4.0 | 10496 | 0.1377 | 0.6752 | 0.4759 | 0.5583 | 0.9613 |
| 0.086 | 5.0 | 13120 | 0.1444 | 0.6873 | 0.4713 | 0.5592 | 0.9618 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.5 | jordyvl | 2023-07-10T19:16:42Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T18:37:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4300
- Accuracy: 0.83
- Brier Loss: 0.2807
- Nll: 1.0350
- F1 Micro: 0.83
- F1 Macro: 0.8295
- Ece: 0.2287
- Aurc: 0.0560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.6525 | 0.225 | 0.8757 | 5.3231 | 0.225 | 0.1387 | 0.2689 | 0.6977 |
| No log | 2.0 | 14 | 1.3106 | 0.405 | 0.7470 | 3.3487 | 0.405 | 0.2195 | 0.2936 | 0.4032 |
| No log | 3.0 | 21 | 0.9127 | 0.585 | 0.5785 | 1.8686 | 0.585 | 0.5142 | 0.2974 | 0.2067 |
| No log | 4.0 | 28 | 0.7280 | 0.715 | 0.4339 | 1.6780 | 0.715 | 0.6761 | 0.2672 | 0.1204 |
| No log | 5.0 | 35 | 0.6523 | 0.775 | 0.3676 | 1.6537 | 0.775 | 0.7619 | 0.2554 | 0.0929 |
| No log | 6.0 | 42 | 0.5888 | 0.785 | 0.3502 | 1.3926 | 0.785 | 0.7538 | 0.2277 | 0.0908 |
| No log | 7.0 | 49 | 0.6113 | 0.805 | 0.3326 | 1.7118 | 0.805 | 0.7903 | 0.2428 | 0.0803 |
| No log | 8.0 | 56 | 0.5404 | 0.785 | 0.3178 | 1.1557 | 0.785 | 0.7671 | 0.2183 | 0.0716 |
| No log | 9.0 | 63 | 0.5380 | 0.82 | 0.3051 | 1.3231 | 0.82 | 0.8072 | 0.2168 | 0.0773 |
| No log | 10.0 | 70 | 0.6035 | 0.775 | 0.3508 | 1.3888 | 0.775 | 0.7682 | 0.2191 | 0.0812 |
| No log | 11.0 | 77 | 0.5473 | 0.795 | 0.3202 | 1.2622 | 0.795 | 0.7740 | 0.2303 | 0.0626 |
| No log | 12.0 | 84 | 0.4860 | 0.825 | 0.2937 | 1.3575 | 0.825 | 0.8053 | 0.2392 | 0.0727 |
| No log | 13.0 | 91 | 0.5046 | 0.81 | 0.3032 | 1.1857 | 0.81 | 0.8086 | 0.2248 | 0.0564 |
| No log | 14.0 | 98 | 0.4745 | 0.825 | 0.2870 | 1.2338 | 0.825 | 0.8089 | 0.2441 | 0.0518 |
| No log | 15.0 | 105 | 0.4764 | 0.81 | 0.2943 | 1.0325 | 0.81 | 0.8110 | 0.1935 | 0.0556 |
| No log | 16.0 | 112 | 0.4918 | 0.81 | 0.3062 | 1.0551 | 0.81 | 0.8015 | 0.2198 | 0.0587 |
| No log | 17.0 | 119 | 0.4757 | 0.815 | 0.2970 | 1.4203 | 0.815 | 0.7965 | 0.2263 | 0.0850 |
| No log | 18.0 | 126 | 0.4586 | 0.825 | 0.2926 | 1.0361 | 0.825 | 0.8268 | 0.2279 | 0.0583 |
| No log | 19.0 | 133 | 0.4503 | 0.835 | 0.2855 | 1.1476 | 0.835 | 0.8301 | 0.2392 | 0.0589 |
| No log | 20.0 | 140 | 0.4780 | 0.805 | 0.3105 | 0.9928 | 0.805 | 0.7902 | 0.1988 | 0.0775 |
| No log | 21.0 | 147 | 0.4965 | 0.8 | 0.3205 | 1.1887 | 0.8000 | 0.8029 | 0.2410 | 0.0702 |
| No log | 22.0 | 154 | 0.4753 | 0.815 | 0.3016 | 0.9609 | 0.815 | 0.8169 | 0.2163 | 0.0580 |
| No log | 23.0 | 161 | 0.4733 | 0.8 | 0.3074 | 1.2566 | 0.8000 | 0.8001 | 0.2162 | 0.0704 |
| No log | 24.0 | 168 | 0.4472 | 0.815 | 0.2888 | 1.0352 | 0.815 | 0.8187 | 0.2317 | 0.0590 |
| No log | 25.0 | 175 | 0.4434 | 0.815 | 0.2854 | 0.9874 | 0.815 | 0.8186 | 0.2149 | 0.0554 |
| No log | 26.0 | 182 | 0.4316 | 0.82 | 0.2754 | 1.0477 | 0.82 | 0.8267 | 0.2195 | 0.0508 |
| No log | 27.0 | 189 | 0.4276 | 0.83 | 0.2751 | 1.1016 | 0.83 | 0.8336 | 0.2050 | 0.0525 |
| No log | 28.0 | 196 | 0.4329 | 0.82 | 0.2795 | 1.0537 | 0.82 | 0.8220 | 0.2158 | 0.0611 |
| No log | 29.0 | 203 | 0.4327 | 0.82 | 0.2827 | 1.1766 | 0.82 | 0.8237 | 0.2024 | 0.0603 |
| No log | 30.0 | 210 | 0.4317 | 0.82 | 0.2820 | 1.0331 | 0.82 | 0.8219 | 0.2083 | 0.0611 |
| No log | 31.0 | 217 | 0.4316 | 0.82 | 0.2803 | 1.0974 | 0.82 | 0.8263 | 0.1984 | 0.0575 |
| No log | 32.0 | 224 | 0.4340 | 0.82 | 0.2833 | 1.0384 | 0.82 | 0.8240 | 0.2202 | 0.0590 |
| No log | 33.0 | 231 | 0.4333 | 0.81 | 0.2824 | 1.0355 | 0.81 | 0.8160 | 0.2103 | 0.0586 |
| No log | 34.0 | 238 | 0.4309 | 0.83 | 0.2817 | 1.1015 | 0.83 | 0.8307 | 0.2107 | 0.0577 |
| No log | 35.0 | 245 | 0.4321 | 0.82 | 0.2817 | 1.0359 | 0.82 | 0.8229 | 0.2147 | 0.0590 |
| No log | 36.0 | 252 | 0.4304 | 0.825 | 0.2802 | 1.1016 | 0.825 | 0.8257 | 0.2137 | 0.0569 |
| No log | 37.0 | 259 | 0.4303 | 0.825 | 0.2811 | 1.0990 | 0.825 | 0.8268 | 0.2149 | 0.0581 |
| No log | 38.0 | 266 | 0.4314 | 0.825 | 0.2814 | 1.1003 | 0.825 | 0.8257 | 0.2163 | 0.0581 |
| No log | 39.0 | 273 | 0.4302 | 0.82 | 0.2806 | 1.1007 | 0.82 | 0.8226 | 0.2102 | 0.0576 |
| No log | 40.0 | 280 | 0.4307 | 0.825 | 0.2809 | 1.0376 | 0.825 | 0.8264 | 0.2049 | 0.0573 |
| No log | 41.0 | 287 | 0.4303 | 0.82 | 0.2808 | 1.0434 | 0.82 | 0.8226 | 0.2096 | 0.0574 |
| No log | 42.0 | 294 | 0.4310 | 0.825 | 0.2817 | 1.0376 | 0.825 | 0.8268 | 0.2140 | 0.0580 |
| No log | 43.0 | 301 | 0.4310 | 0.825 | 0.2813 | 1.0391 | 0.825 | 0.8257 | 0.2147 | 0.0580 |
| No log | 44.0 | 308 | 0.4301 | 0.825 | 0.2808 | 1.0389 | 0.825 | 0.8257 | 0.2064 | 0.0573 |
| No log | 45.0 | 315 | 0.4305 | 0.83 | 0.2811 | 1.0419 | 0.83 | 0.8307 | 0.2300 | 0.0577 |
| No log | 46.0 | 322 | 0.4303 | 0.82 | 0.2808 | 1.0423 | 0.82 | 0.8226 | 0.2197 | 0.0582 |
| No log | 47.0 | 329 | 0.4304 | 0.825 | 0.2811 | 1.0405 | 0.825 | 0.8257 | 0.2240 | 0.0580 |
| No log | 48.0 | 336 | 0.4300 | 0.82 | 0.2805 | 1.0407 | 0.82 | 0.8226 | 0.2105 | 0.0574 |
| No log | 49.0 | 343 | 0.4307 | 0.825 | 0.2812 | 1.0381 | 0.825 | 0.8257 | 0.2252 | 0.0577 |
| No log | 50.0 | 350 | 0.4304 | 0.82 | 0.2810 | 1.0422 | 0.82 | 0.8226 | 0.2353 | 0.0578 |
| No log | 51.0 | 357 | 0.4310 | 0.825 | 0.2813 | 1.0382 | 0.825 | 0.8264 | 0.2153 | 0.0569 |
| No log | 52.0 | 364 | 0.4309 | 0.82 | 0.2814 | 1.0380 | 0.82 | 0.8226 | 0.2282 | 0.0574 |
| No log | 53.0 | 371 | 0.4307 | 0.825 | 0.2813 | 1.0357 | 0.825 | 0.8264 | 0.2250 | 0.0568 |
| No log | 54.0 | 378 | 0.4305 | 0.82 | 0.2810 | 1.0366 | 0.82 | 0.8226 | 0.2284 | 0.0575 |
| No log | 55.0 | 385 | 0.4304 | 0.825 | 0.2811 | 1.0351 | 0.825 | 0.8264 | 0.2241 | 0.0566 |
| No log | 56.0 | 392 | 0.4308 | 0.825 | 0.2813 | 1.0369 | 0.825 | 0.8257 | 0.2414 | 0.0572 |
| No log | 57.0 | 399 | 0.4305 | 0.825 | 0.2810 | 1.0356 | 0.825 | 0.8257 | 0.2322 | 0.0571 |
| No log | 58.0 | 406 | 0.4302 | 0.82 | 0.2808 | 1.0359 | 0.82 | 0.8226 | 0.2368 | 0.0569 |
| No log | 59.0 | 413 | 0.4302 | 0.82 | 0.2809 | 1.0346 | 0.82 | 0.8226 | 0.2271 | 0.0569 |
| No log | 60.0 | 420 | 0.4303 | 0.82 | 0.2809 | 1.0357 | 0.82 | 0.8226 | 0.2272 | 0.0570 |
| No log | 61.0 | 427 | 0.4304 | 0.825 | 0.2810 | 1.0360 | 0.825 | 0.8257 | 0.2325 | 0.0569 |
| No log | 62.0 | 434 | 0.4303 | 0.825 | 0.2809 | 1.0360 | 0.825 | 0.8257 | 0.2321 | 0.0568 |
| No log | 63.0 | 441 | 0.4303 | 0.83 | 0.2809 | 1.0356 | 0.83 | 0.8295 | 0.2300 | 0.0562 |
| No log | 64.0 | 448 | 0.4304 | 0.825 | 0.2810 | 1.0347 | 0.825 | 0.8264 | 0.2242 | 0.0564 |
| No log | 65.0 | 455 | 0.4301 | 0.83 | 0.2808 | 1.0361 | 0.83 | 0.8295 | 0.2384 | 0.0564 |
| No log | 66.0 | 462 | 0.4303 | 0.83 | 0.2810 | 1.0359 | 0.83 | 0.8295 | 0.2293 | 0.0563 |
| No log | 67.0 | 469 | 0.4302 | 0.83 | 0.2809 | 1.0360 | 0.83 | 0.8295 | 0.2386 | 0.0564 |
| No log | 68.0 | 476 | 0.4304 | 0.83 | 0.2810 | 1.0360 | 0.83 | 0.8295 | 0.2384 | 0.0563 |
| No log | 69.0 | 483 | 0.4305 | 0.83 | 0.2812 | 1.0355 | 0.83 | 0.8295 | 0.2295 | 0.0564 |
| No log | 70.0 | 490 | 0.4302 | 0.825 | 0.2808 | 1.0354 | 0.825 | 0.8264 | 0.2239 | 0.0561 |
| No log | 71.0 | 497 | 0.4305 | 0.83 | 0.2812 | 1.0352 | 0.83 | 0.8295 | 0.2296 | 0.0564 |
| 0.1776 | 72.0 | 504 | 0.4303 | 0.83 | 0.2808 | 1.0356 | 0.83 | 0.8295 | 0.2287 | 0.0561 |
| 0.1776 | 73.0 | 511 | 0.4301 | 0.825 | 0.2807 | 1.0351 | 0.825 | 0.8264 | 0.2348 | 0.0563 |
| 0.1776 | 74.0 | 518 | 0.4304 | 0.83 | 0.2811 | 1.0353 | 0.83 | 0.8295 | 0.2195 | 0.0562 |
| 0.1776 | 75.0 | 525 | 0.4301 | 0.825 | 0.2808 | 1.0355 | 0.825 | 0.8257 | 0.2320 | 0.0568 |
| 0.1776 | 76.0 | 532 | 0.4302 | 0.83 | 0.2808 | 1.0348 | 0.83 | 0.8295 | 0.2289 | 0.0561 |
| 0.1776 | 77.0 | 539 | 0.4301 | 0.83 | 0.2808 | 1.0355 | 0.83 | 0.8295 | 0.2300 | 0.0562 |
| 0.1776 | 78.0 | 546 | 0.4301 | 0.83 | 0.2808 | 1.0354 | 0.83 | 0.8295 | 0.2394 | 0.0563 |
| 0.1776 | 79.0 | 553 | 0.4302 | 0.83 | 0.2809 | 1.0346 | 0.83 | 0.8295 | 0.2287 | 0.0560 |
| 0.1776 | 80.0 | 560 | 0.4302 | 0.83 | 0.2809 | 1.0353 | 0.83 | 0.8295 | 0.2299 | 0.0563 |
| 0.1776 | 81.0 | 567 | 0.4302 | 0.83 | 0.2809 | 1.0350 | 0.83 | 0.8295 | 0.2299 | 0.0563 |
| 0.1776 | 82.0 | 574 | 0.4302 | 0.83 | 0.2808 | 1.0354 | 0.83 | 0.8295 | 0.2298 | 0.0560 |
| 0.1776 | 83.0 | 581 | 0.4302 | 0.83 | 0.2809 | 1.0350 | 0.83 | 0.8295 | 0.2299 | 0.0561 |
| 0.1776 | 84.0 | 588 | 0.4299 | 0.83 | 0.2807 | 1.0352 | 0.83 | 0.8295 | 0.2287 | 0.0561 |
| 0.1776 | 85.0 | 595 | 0.4301 | 0.83 | 0.2808 | 1.0349 | 0.83 | 0.8295 | 0.2296 | 0.0562 |
| 0.1776 | 86.0 | 602 | 0.4301 | 0.83 | 0.2808 | 1.0351 | 0.83 | 0.8295 | 0.2287 | 0.0562 |
| 0.1776 | 87.0 | 609 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2297 | 0.0561 |
| 0.1776 | 88.0 | 616 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2287 | 0.0562 |
| 0.1776 | 89.0 | 623 | 0.4300 | 0.83 | 0.2807 | 1.0353 | 0.83 | 0.8295 | 0.2296 | 0.0560 |
| 0.1776 | 90.0 | 630 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2297 | 0.0559 |
| 0.1776 | 91.0 | 637 | 0.4300 | 0.83 | 0.2807 | 1.0352 | 0.83 | 0.8295 | 0.2296 | 0.0562 |
| 0.1776 | 92.0 | 644 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2287 | 0.0561 |
| 0.1776 | 93.0 | 651 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2297 | 0.0562 |
| 0.1776 | 94.0 | 658 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2297 | 0.0560 |
| 0.1776 | 95.0 | 665 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2297 | 0.0562 |
| 0.1776 | 96.0 | 672 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2296 | 0.0561 |
| 0.1776 | 97.0 | 679 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2296 | 0.0560 |
| 0.1776 | 98.0 | 686 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2296 | 0.0560 |
| 0.1776 | 99.0 | 693 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2287 | 0.0560 |
| 0.1776 | 100.0 | 700 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2287 | 0.0560 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NasimB/gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k | NasimB | 2023-07-10T19:10:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T16:50:38Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7606 | 0.29 | 500 | 5.6940 |
| 5.4347 | 0.59 | 1000 | 5.2560 |
| 5.0945 | 0.88 | 1500 | 5.0226 |
| 4.8232 | 1.18 | 2000 | 4.8777 |
| 4.675 | 1.47 | 2500 | 4.7626 |
| 4.5767 | 1.77 | 3000 | 4.6625 |
| 4.4488 | 2.06 | 3500 | 4.5933 |
| 4.2612 | 2.36 | 4000 | 4.5563 |
| 4.245 | 2.65 | 4500 | 4.4882 |
| 4.208 | 2.94 | 5000 | 4.4332 |
| 3.9773 | 3.24 | 5500 | 4.4362 |
| 3.9484 | 3.53 | 6000 | 4.4046 |
| 3.9304 | 3.83 | 6500 | 4.3669 |
| 3.7943 | 4.12 | 7000 | 4.3731 |
| 3.6517 | 4.42 | 7500 | 4.3646 |
| 3.646 | 4.71 | 8000 | 4.3456 |
| 3.6381 | 5.01 | 8500 | 4.3333 |
| 3.3812 | 5.3 | 9000 | 4.3586 |
| 3.3875 | 5.59 | 9500 | 4.3536 |
| 3.3847 | 5.89 | 10000 | 4.3483 |
| 3.2816 | 6.18 | 10500 | 4.3600 |
| 3.2295 | 6.48 | 11000 | 4.3636 |
| 3.223 | 6.77 | 11500 | 4.3630 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
aphi/ppo-SnowballTarget | aphi | 2023-07-10T19:08:24Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-10T19:08:17Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: aphi/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MaitreHibou/Reinforce-Cartpole-v1 | MaitreHibou | 2023-07-10T18:49:32Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T18:49:23Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
grace-pro/afriberta-base-finetuned-hausa | grace-pro | 2023-07-10T18:48:53Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-10T17:58:10Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-base-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-base-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1401
- Precision: 0.7156
- Recall: 0.5251
- F1: 0.6057
- Accuracy: 0.9655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1381 | 1.0 | 2624 | 0.1277 | 0.6804 | 0.4403 | 0.5346 | 0.9607 |
| 0.1113 | 2.0 | 5248 | 0.1198 | 0.6803 | 0.4994 | 0.5760 | 0.9628 |
| 0.0861 | 3.0 | 7872 | 0.1270 | 0.7102 | 0.5052 | 0.5904 | 0.9645 |
| 0.0723 | 4.0 | 10496 | 0.1322 | 0.7188 | 0.5134 | 0.5990 | 0.9654 |
| 0.0602 | 5.0 | 13120 | 0.1401 | 0.7156 | 0.5251 | 0.6057 | 0.9655 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DavidSolan0/coverart | DavidSolan0 | 2023-07-10T18:34:53Z | 9 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-10T18:30:01Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### coverart Dreambooth model trained by DavidSolan0 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
simonestradasch/COMPner-bert-base-spanish-wwm-cased | simonestradasch | 2023-07-10T18:28:38Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:simonestradasch/NERcomp",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-10T18:07:06Z | ---
language:
- es
tags:
- generated_from_trainer
datasets:
- simonestradasch/NERcomp
model-index:
- name: COMPner-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COMPner-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the simonestradasch/NERcomp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2793
- Body Part Precision: 0.6700
- Body Part Recall: 0.7186
- Body Part F1: 0.6934
- Body Part Number: 565
- Disease Precision: 0.6966
- Disease Recall: 0.7533
- Disease F1: 0.7238
- Disease Number: 1350
- Family Member Precision: 0.9
- Family Member Recall: 0.75
- Family Member F1: 0.8182
- Family Member Number: 24
- Medication Precision: 0.7143
- Medication Recall: 0.6190
- Medication F1: 0.6633
- Medication Number: 105
- Procedure Precision: 0.5233
- Procedure Recall: 0.5125
- Procedure F1: 0.5178
- Procedure Number: 439
- Overall Precision: 0.6640
- Overall Recall: 0.6971
- Overall F1: 0.6802
- Overall Accuracy: 0.9136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4741 | 1.0 | 703 | 0.2932 | 0.6449 | 0.6301 | 0.6374 | 565 | 0.6984 | 0.7170 | 0.7076 | 1350 | 0.9412 | 0.6667 | 0.7805 | 24 | 0.8551 | 0.5619 | 0.6782 | 105 | 0.5113 | 0.3599 | 0.4225 | 439 | 0.6674 | 0.6271 | 0.6466 | 0.9091 |
| 0.259 | 2.0 | 1406 | 0.2793 | 0.6700 | 0.7186 | 0.6934 | 565 | 0.6966 | 0.7533 | 0.7238 | 1350 | 0.9 | 0.75 | 0.8182 | 24 | 0.7143 | 0.6190 | 0.6633 | 105 | 0.5233 | 0.5125 | 0.5178 | 439 | 0.6640 | 0.6971 | 0.6802 | 0.9136 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BrendaScar/ppo-LunarLander-v2 | BrendaScar | 2023-07-10T18:24:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T18:24:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.77 +/- 20.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
MaitreHibou/dqn-SpaceInvadersNoFrameskip-v4 | MaitreHibou | 2023-07-10T18:21:47Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T18:21:06Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 656.50 +/- 140.98
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MaitreHibou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MaitreHibou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MaitreHibou
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
FerhatDk/wav2vec2-base-finetuned-ks | FerhatDk | 2023-07-10T18:08:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-09-22T08:59:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3550
- Accuracy: 0.8727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 0.6840 | 0.6 |
| 0.6867 | 2.0 | 16 | 0.6780 | 0.6364 |
| 0.6742 | 3.0 | 24 | 0.6601 | 0.6182 |
| 0.6446 | 4.0 | 32 | 0.6294 | 0.6364 |
| 0.6299 | 5.0 | 40 | 0.6002 | 0.6727 |
| 0.6299 | 6.0 | 48 | 0.5755 | 0.7091 |
| 0.6021 | 7.0 | 56 | 0.5530 | 0.7273 |
| 0.5678 | 8.0 | 64 | 0.5036 | 0.8182 |
| 0.5512 | 9.0 | 72 | 0.4753 | 0.8545 |
| 0.4784 | 10.0 | 80 | 0.4184 | 0.9273 |
| 0.4784 | 11.0 | 88 | 0.4102 | 0.8909 |
| 0.4515 | 12.0 | 96 | 0.4444 | 0.8182 |
| 0.4878 | 13.0 | 104 | 0.3780 | 0.9091 |
| 0.4418 | 14.0 | 112 | 0.4570 | 0.8 |
| 0.4746 | 15.0 | 120 | 0.3870 | 0.8545 |
| 0.4746 | 16.0 | 128 | 0.3932 | 0.8364 |
| 0.4226 | 17.0 | 136 | 0.2779 | 0.9636 |
| 0.4301 | 18.0 | 144 | 0.3125 | 0.9455 |
| 0.3482 | 19.0 | 152 | 0.3212 | 0.9091 |
| 0.3611 | 20.0 | 160 | 0.3925 | 0.8364 |
| 0.3611 | 21.0 | 168 | 0.3389 | 0.8909 |
| 0.3507 | 22.0 | 176 | 0.3099 | 0.8727 |
| 0.3241 | 23.0 | 184 | 0.3120 | 0.8727 |
| 0.2533 | 24.0 | 192 | 0.2313 | 0.9455 |
| 0.2466 | 25.0 | 200 | 0.3550 | 0.8727 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.7 | jordyvl | 2023-07-10T17:57:06Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T17:18:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4797
- Accuracy: 0.835
- Brier Loss: 0.2522
- Nll: 0.8627
- F1 Micro: 0.835
- F1 Macro: 0.8222
- Ece: 0.1830
- Aurc: 0.0434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.9341 | 0.215 | 0.8749 | 5.3238 | 0.2150 | 0.1264 | 0.2642 | 0.6914 |
| No log | 2.0 | 14 | 1.5320 | 0.405 | 0.7410 | 3.5078 | 0.405 | 0.2276 | 0.2957 | 0.4015 |
| No log | 3.0 | 21 | 1.0532 | 0.635 | 0.5629 | 2.0153 | 0.635 | 0.5844 | 0.3037 | 0.2006 |
| No log | 4.0 | 28 | 0.7915 | 0.715 | 0.4093 | 1.6974 | 0.715 | 0.6762 | 0.2420 | 0.1131 |
| No log | 5.0 | 35 | 0.8024 | 0.745 | 0.3869 | 1.7109 | 0.745 | 0.7548 | 0.2160 | 0.1006 |
| No log | 6.0 | 42 | 0.7162 | 0.765 | 0.3351 | 1.8105 | 0.765 | 0.7599 | 0.2216 | 0.0874 |
| No log | 7.0 | 49 | 0.6966 | 0.785 | 0.3304 | 1.5292 | 0.785 | 0.7682 | 0.2058 | 0.0979 |
| No log | 8.0 | 56 | 0.6317 | 0.805 | 0.2995 | 1.3486 | 0.805 | 0.7887 | 0.2266 | 0.0721 |
| No log | 9.0 | 63 | 0.6903 | 0.805 | 0.3304 | 1.5866 | 0.805 | 0.7971 | 0.2371 | 0.0995 |
| No log | 10.0 | 70 | 0.6223 | 0.805 | 0.2940 | 1.3478 | 0.805 | 0.8114 | 0.2281 | 0.0697 |
| No log | 11.0 | 77 | 0.6350 | 0.795 | 0.3145 | 1.3386 | 0.795 | 0.7730 | 0.2063 | 0.0962 |
| No log | 12.0 | 84 | 0.5570 | 0.835 | 0.2666 | 1.2662 | 0.835 | 0.8181 | 0.1951 | 0.0553 |
| No log | 13.0 | 91 | 0.5610 | 0.81 | 0.2858 | 1.2619 | 0.81 | 0.8002 | 0.1884 | 0.0626 |
| No log | 14.0 | 98 | 0.5843 | 0.8 | 0.2961 | 1.0782 | 0.8000 | 0.8083 | 0.1993 | 0.0683 |
| No log | 15.0 | 105 | 0.5918 | 0.78 | 0.2965 | 1.1207 | 0.78 | 0.7861 | 0.1895 | 0.0634 |
| No log | 16.0 | 112 | 0.5541 | 0.84 | 0.2765 | 1.3189 | 0.8400 | 0.8455 | 0.1969 | 0.0597 |
| No log | 17.0 | 119 | 0.5037 | 0.835 | 0.2568 | 0.9024 | 0.835 | 0.8248 | 0.2083 | 0.0499 |
| No log | 18.0 | 126 | 0.5050 | 0.85 | 0.2563 | 1.0032 | 0.85 | 0.8441 | 0.2147 | 0.0580 |
| No log | 19.0 | 133 | 0.5430 | 0.815 | 0.2779 | 1.1046 | 0.815 | 0.8044 | 0.1906 | 0.0562 |
| No log | 20.0 | 140 | 0.5276 | 0.84 | 0.2743 | 0.9964 | 0.8400 | 0.8144 | 0.2104 | 0.0597 |
| No log | 21.0 | 147 | 0.5155 | 0.835 | 0.2686 | 0.9556 | 0.835 | 0.8210 | 0.1962 | 0.0572 |
| No log | 22.0 | 154 | 0.4937 | 0.835 | 0.2581 | 1.0079 | 0.835 | 0.8172 | 0.1975 | 0.0479 |
| No log | 23.0 | 161 | 0.4931 | 0.845 | 0.2533 | 1.0021 | 0.845 | 0.8270 | 0.1884 | 0.0503 |
| No log | 24.0 | 168 | 0.4869 | 0.83 | 0.2554 | 0.9660 | 0.83 | 0.8084 | 0.1945 | 0.0481 |
| No log | 25.0 | 175 | 0.4843 | 0.845 | 0.2512 | 0.9979 | 0.845 | 0.8316 | 0.1746 | 0.0466 |
| No log | 26.0 | 182 | 0.4866 | 0.835 | 0.2531 | 0.9006 | 0.835 | 0.8188 | 0.1833 | 0.0472 |
| No log | 27.0 | 189 | 0.4882 | 0.825 | 0.2562 | 0.8929 | 0.825 | 0.8043 | 0.2023 | 0.0469 |
| No log | 28.0 | 196 | 0.4814 | 0.82 | 0.2494 | 0.9122 | 0.82 | 0.8060 | 0.1773 | 0.0451 |
| No log | 29.0 | 203 | 0.4749 | 0.835 | 0.2501 | 0.8770 | 0.835 | 0.8252 | 0.1688 | 0.0442 |
| No log | 30.0 | 210 | 0.4761 | 0.84 | 0.2490 | 0.8848 | 0.8400 | 0.8250 | 0.2068 | 0.0443 |
| No log | 31.0 | 217 | 0.4787 | 0.845 | 0.2508 | 0.8754 | 0.845 | 0.8309 | 0.1635 | 0.0438 |
| No log | 32.0 | 224 | 0.4791 | 0.835 | 0.2521 | 0.8711 | 0.835 | 0.8224 | 0.1876 | 0.0446 |
| No log | 33.0 | 231 | 0.4779 | 0.84 | 0.2509 | 0.8650 | 0.8400 | 0.8252 | 0.1813 | 0.0436 |
| No log | 34.0 | 238 | 0.4774 | 0.84 | 0.2513 | 0.8662 | 0.8400 | 0.8252 | 0.1919 | 0.0441 |
| No log | 35.0 | 245 | 0.4760 | 0.835 | 0.2502 | 0.8636 | 0.835 | 0.8224 | 0.1840 | 0.0434 |
| No log | 36.0 | 252 | 0.4784 | 0.84 | 0.2509 | 0.8688 | 0.8400 | 0.8281 | 0.1691 | 0.0437 |
| No log | 37.0 | 259 | 0.4771 | 0.835 | 0.2507 | 0.8670 | 0.835 | 0.8224 | 0.1936 | 0.0440 |
| No log | 38.0 | 266 | 0.4764 | 0.835 | 0.2499 | 0.8614 | 0.835 | 0.8224 | 0.1830 | 0.0434 |
| No log | 39.0 | 273 | 0.4769 | 0.835 | 0.2503 | 0.8651 | 0.835 | 0.8224 | 0.2001 | 0.0438 |
| No log | 40.0 | 280 | 0.4777 | 0.84 | 0.2514 | 0.8608 | 0.8400 | 0.8281 | 0.1832 | 0.0435 |
| No log | 41.0 | 287 | 0.4777 | 0.835 | 0.2504 | 0.8650 | 0.835 | 0.8224 | 0.1953 | 0.0437 |
| No log | 42.0 | 294 | 0.4779 | 0.835 | 0.2511 | 0.8629 | 0.835 | 0.8224 | 0.1944 | 0.0440 |
| No log | 43.0 | 301 | 0.4790 | 0.835 | 0.2519 | 0.8631 | 0.835 | 0.8222 | 0.1808 | 0.0439 |
| No log | 44.0 | 308 | 0.4777 | 0.835 | 0.2509 | 0.8604 | 0.835 | 0.8222 | 0.1886 | 0.0435 |
| No log | 45.0 | 315 | 0.4787 | 0.835 | 0.2517 | 0.8620 | 0.835 | 0.8222 | 0.1940 | 0.0437 |
| No log | 46.0 | 322 | 0.4774 | 0.84 | 0.2509 | 0.8614 | 0.8400 | 0.8281 | 0.1779 | 0.0433 |
| No log | 47.0 | 329 | 0.4785 | 0.835 | 0.2517 | 0.8609 | 0.835 | 0.8222 | 0.1811 | 0.0438 |
| No log | 48.0 | 336 | 0.4792 | 0.835 | 0.2521 | 0.8611 | 0.835 | 0.8222 | 0.1849 | 0.0436 |
| No log | 49.0 | 343 | 0.4771 | 0.84 | 0.2509 | 0.8623 | 0.8400 | 0.8281 | 0.1908 | 0.0430 |
| No log | 50.0 | 350 | 0.4793 | 0.835 | 0.2520 | 0.8633 | 0.835 | 0.8222 | 0.1900 | 0.0435 |
| No log | 51.0 | 357 | 0.4786 | 0.83 | 0.2517 | 0.8654 | 0.83 | 0.8159 | 0.1684 | 0.0437 |
| No log | 52.0 | 364 | 0.4792 | 0.83 | 0.2521 | 0.8625 | 0.83 | 0.8166 | 0.1915 | 0.0430 |
| No log | 53.0 | 371 | 0.4785 | 0.835 | 0.2513 | 0.8652 | 0.835 | 0.8222 | 0.1853 | 0.0434 |
| No log | 54.0 | 378 | 0.4798 | 0.835 | 0.2523 | 0.8652 | 0.835 | 0.8222 | 0.1767 | 0.0437 |
| No log | 55.0 | 385 | 0.4791 | 0.835 | 0.2519 | 0.8637 | 0.835 | 0.8222 | 0.1891 | 0.0435 |
| No log | 56.0 | 392 | 0.4790 | 0.835 | 0.2519 | 0.8614 | 0.835 | 0.8222 | 0.1749 | 0.0429 |
| No log | 57.0 | 399 | 0.4782 | 0.835 | 0.2513 | 0.8625 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| No log | 58.0 | 406 | 0.4794 | 0.835 | 0.2521 | 0.8602 | 0.835 | 0.8222 | 0.1758 | 0.0435 |
| No log | 59.0 | 413 | 0.4790 | 0.835 | 0.2517 | 0.8617 | 0.835 | 0.8222 | 0.1754 | 0.0432 |
| No log | 60.0 | 420 | 0.4791 | 0.835 | 0.2520 | 0.8614 | 0.835 | 0.8222 | 0.1830 | 0.0430 |
| No log | 61.0 | 427 | 0.4789 | 0.835 | 0.2518 | 0.8612 | 0.835 | 0.8222 | 0.1870 | 0.0432 |
| No log | 62.0 | 434 | 0.4792 | 0.835 | 0.2520 | 0.8620 | 0.835 | 0.8222 | 0.1902 | 0.0433 |
| No log | 63.0 | 441 | 0.4789 | 0.835 | 0.2518 | 0.8619 | 0.835 | 0.8222 | 0.1997 | 0.0431 |
| No log | 64.0 | 448 | 0.4797 | 0.835 | 0.2523 | 0.8607 | 0.835 | 0.8222 | 0.1833 | 0.0434 |
| No log | 65.0 | 455 | 0.4797 | 0.835 | 0.2522 | 0.8624 | 0.835 | 0.8222 | 0.1922 | 0.0434 |
| No log | 66.0 | 462 | 0.4791 | 0.835 | 0.2519 | 0.8620 | 0.835 | 0.8222 | 0.1894 | 0.0430 |
| No log | 67.0 | 469 | 0.4792 | 0.835 | 0.2520 | 0.8612 | 0.835 | 0.8222 | 0.1885 | 0.0433 |
| No log | 68.0 | 476 | 0.4796 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1918 | 0.0433 |
| No log | 69.0 | 483 | 0.4793 | 0.835 | 0.2521 | 0.8628 | 0.835 | 0.8222 | 0.1828 | 0.0433 |
| No log | 70.0 | 490 | 0.4792 | 0.835 | 0.2519 | 0.8622 | 0.835 | 0.8222 | 0.1918 | 0.0432 |
| No log | 71.0 | 497 | 0.4797 | 0.835 | 0.2523 | 0.8615 | 0.835 | 0.8222 | 0.1836 | 0.0434 |
| 0.194 | 72.0 | 504 | 0.4797 | 0.835 | 0.2522 | 0.8618 | 0.835 | 0.8222 | 0.1842 | 0.0433 |
| 0.194 | 73.0 | 511 | 0.4794 | 0.835 | 0.2521 | 0.8624 | 0.835 | 0.8222 | 0.1914 | 0.0432 |
| 0.194 | 74.0 | 518 | 0.4794 | 0.835 | 0.2521 | 0.8617 | 0.835 | 0.8222 | 0.1915 | 0.0431 |
| 0.194 | 75.0 | 525 | 0.4796 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1917 | 0.0434 |
| 0.194 | 76.0 | 532 | 0.4795 | 0.835 | 0.2520 | 0.8622 | 0.835 | 0.8222 | 0.1985 | 0.0433 |
| 0.194 | 77.0 | 539 | 0.4795 | 0.835 | 0.2520 | 0.8623 | 0.835 | 0.8222 | 0.1985 | 0.0432 |
| 0.194 | 78.0 | 546 | 0.4795 | 0.835 | 0.2522 | 0.8621 | 0.835 | 0.8222 | 0.1981 | 0.0432 |
| 0.194 | 79.0 | 553 | 0.4798 | 0.835 | 0.2522 | 0.8626 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| 0.194 | 80.0 | 560 | 0.4796 | 0.835 | 0.2521 | 0.8630 | 0.835 | 0.8222 | 0.1984 | 0.0433 |
| 0.194 | 81.0 | 567 | 0.4797 | 0.835 | 0.2522 | 0.8619 | 0.835 | 0.8222 | 0.1902 | 0.0434 |
| 0.194 | 82.0 | 574 | 0.4797 | 0.835 | 0.2522 | 0.8631 | 0.835 | 0.8222 | 0.1913 | 0.0433 |
| 0.194 | 83.0 | 581 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| 0.194 | 84.0 | 588 | 0.4797 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| 0.194 | 85.0 | 595 | 0.4797 | 0.835 | 0.2522 | 0.8624 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 86.0 | 602 | 0.4796 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1830 | 0.0433 |
| 0.194 | 87.0 | 609 | 0.4797 | 0.835 | 0.2522 | 0.8629 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 88.0 | 616 | 0.4797 | 0.835 | 0.2521 | 0.8634 | 0.835 | 0.8222 | 0.1830 | 0.0433 |
| 0.194 | 89.0 | 623 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 90.0 | 630 | 0.4798 | 0.835 | 0.2523 | 0.8627 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 91.0 | 637 | 0.4797 | 0.835 | 0.2522 | 0.8625 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 92.0 | 644 | 0.4797 | 0.835 | 0.2522 | 0.8630 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 93.0 | 651 | 0.4798 | 0.835 | 0.2522 | 0.8629 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 94.0 | 658 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 95.0 | 665 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 96.0 | 672 | 0.4798 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1834 | 0.0435 |
| 0.194 | 97.0 | 679 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 98.0 | 686 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 99.0 | 693 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 100.0 | 700 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
FerhatDk/wav2vec2-base_music_speech_both_classification | FerhatDk | 2023-07-10T17:56:34Z | 167 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-10T17:00:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base_music_speech_both_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_music_speech_both_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0586
- Accuracy: 0.9847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9458 | 1.0 | 66 | 0.8468 | 0.7405 |
| 0.3785 | 2.0 | 132 | 0.2951 | 0.9771 |
| 0.1762 | 3.0 | 198 | 0.2639 | 0.9313 |
| 0.134 | 4.0 | 264 | 0.1084 | 0.9771 |
| 0.0782 | 5.0 | 330 | 0.0877 | 0.9771 |
| 0.0568 | 6.0 | 396 | 0.0912 | 0.9771 |
| 0.0122 | 7.0 | 462 | 0.4056 | 0.9198 |
| 0.059 | 8.0 | 528 | 0.0586 | 0.9847 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grammarly/detexd-roberta-base | grammarly | 2023-07-10T17:34:23Z | 132 | 10 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-21T18:44:55Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---
# DeTexD-RoBERTa-base delicate text detection
This is a baseline RoBERTa-base model for the delicate text detection task.
* Paper: [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
* [GitHub repository](https://github.com/grammarly/detexd)
The labels meaning according to the paper:
- LABEL_0 -> non-delicate (0)
- LABEL_1 -> very low risk (1)
- LABEL_2 -> low risk (2)
- LABEL_3 -> medium risk (3)
- LABEL_4 -> high risk (4)
- LABEL_5 -> very high risk (5)
## Classification example code
Here's a short usage example with the torch library in a binary classification task:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="grammarly/detexd-roberta-base")
def predict_binary_score(text: str):
# get multiclass probability scores
scores = classifier(text, top_k=None)
# convert to a single score by summing the probability scores
# for the higher-index classes
return sum(score['score']
for score in scores
if score['label'] in ('LABEL_3', 'LABEL_4', 'LABEL_5'))
def predict_delicate(text: str, threshold=0.72496545):
return predict_binary_score(text) > threshold
print(predict_delicate("Time flies like an arrow. Fruit flies like a banana."))
```
Expected output:
```
False
```
## Citation Information
```
@inproceedings{chernodub-etal-2023-detexd,
title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection",
author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem",
booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.woah-1.2",
pages = "14--28",
abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained each definitions and corpora and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.",
}
``` |
AmbarB12/my_awesome_model | AmbarB12 | 2023-07-10T17:30:33Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T18:03:55Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AmbarB12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AmbarB12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0631
- Validation Loss: 0.2229
- Train Accuracy: 0.9306
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2523 | 0.1891 | 0.927 | 0 |
| 0.1327 | 0.2007 | 0.9298 | 1 |
| 0.0631 | 0.2229 | 0.9306 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cagarraz/rl_course_vizdoom_health_gathering_supreme | cagarraz | 2023-07-10T17:23:21Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T17:23:08Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.94 +/- 0.20
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r cagarraz/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.5 | jordyvl | 2023-07-10T17:17:44Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T16:39:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t1.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t1.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4258
- Accuracy: 0.825
- Brier Loss: 0.2707
- Nll: 0.8867
- F1 Micro: 0.825
- F1 Macro: 0.8116
- Ece: 0.2129
- Aurc: 0.0681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.7307 | 0.22 | 0.8748 | 5.3766 | 0.22 | 0.1294 | 0.2444 | 0.6913 |
| No log | 2.0 | 14 | 1.3514 | 0.405 | 0.7426 | 3.5573 | 0.405 | 0.2280 | 0.2900 | 0.4026 |
| No log | 3.0 | 21 | 0.9121 | 0.62 | 0.5647 | 1.9398 | 0.62 | 0.5595 | 0.2879 | 0.2015 |
| No log | 4.0 | 28 | 0.7084 | 0.695 | 0.4179 | 1.7042 | 0.695 | 0.6379 | 0.2305 | 0.1177 |
| No log | 5.0 | 35 | 0.7167 | 0.735 | 0.3862 | 1.7929 | 0.735 | 0.7392 | 0.2380 | 0.1046 |
| No log | 6.0 | 42 | 0.6442 | 0.765 | 0.3625 | 1.5688 | 0.765 | 0.7549 | 0.2371 | 0.1034 |
| No log | 7.0 | 49 | 0.6147 | 0.805 | 0.3410 | 1.5975 | 0.805 | 0.7789 | 0.2438 | 0.1042 |
| No log | 8.0 | 56 | 0.6444 | 0.775 | 0.3446 | 1.2309 | 0.775 | 0.7725 | 0.2305 | 0.0911 |
| No log | 9.0 | 63 | 0.5964 | 0.8 | 0.3219 | 1.3613 | 0.8000 | 0.7784 | 0.2446 | 0.0734 |
| No log | 10.0 | 70 | 0.5700 | 0.82 | 0.3160 | 1.2605 | 0.82 | 0.7860 | 0.2301 | 0.0632 |
| No log | 11.0 | 77 | 0.5663 | 0.79 | 0.3176 | 1.2939 | 0.79 | 0.7643 | 0.2315 | 0.0666 |
| No log | 12.0 | 84 | 0.5111 | 0.825 | 0.3143 | 1.1082 | 0.825 | 0.8082 | 0.2519 | 0.0844 |
| No log | 13.0 | 91 | 0.5228 | 0.78 | 0.3156 | 0.9444 | 0.78 | 0.7773 | 0.1941 | 0.0650 |
| No log | 14.0 | 98 | 0.5792 | 0.78 | 0.3409 | 1.5054 | 0.78 | 0.7725 | 0.2061 | 0.1019 |
| No log | 15.0 | 105 | 0.4905 | 0.83 | 0.2912 | 1.0068 | 0.83 | 0.8266 | 0.2324 | 0.0545 |
| No log | 16.0 | 112 | 0.4990 | 0.825 | 0.2961 | 1.1452 | 0.825 | 0.8140 | 0.2188 | 0.0632 |
| No log | 17.0 | 119 | 0.4900 | 0.805 | 0.2940 | 1.2027 | 0.805 | 0.8018 | 0.2188 | 0.0860 |
| No log | 18.0 | 126 | 0.4755 | 0.805 | 0.2988 | 1.0223 | 0.805 | 0.7789 | 0.2229 | 0.0792 |
| No log | 19.0 | 133 | 0.4398 | 0.81 | 0.2679 | 0.9732 | 0.81 | 0.7830 | 0.2085 | 0.0585 |
| No log | 20.0 | 140 | 0.4766 | 0.805 | 0.2992 | 0.9730 | 0.805 | 0.7934 | 0.2141 | 0.0662 |
| No log | 21.0 | 147 | 0.4615 | 0.835 | 0.2867 | 0.9343 | 0.835 | 0.8219 | 0.1999 | 0.0751 |
| No log | 22.0 | 154 | 0.4343 | 0.825 | 0.2641 | 1.1353 | 0.825 | 0.8070 | 0.2095 | 0.0603 |
| No log | 23.0 | 161 | 0.4291 | 0.85 | 0.2660 | 1.0109 | 0.85 | 0.8365 | 0.2435 | 0.0615 |
| No log | 24.0 | 168 | 0.4263 | 0.855 | 0.2653 | 0.9395 | 0.855 | 0.8440 | 0.2445 | 0.0623 |
| No log | 25.0 | 175 | 0.4338 | 0.845 | 0.2700 | 0.8794 | 0.845 | 0.8349 | 0.2254 | 0.0584 |
| No log | 26.0 | 182 | 0.4305 | 0.835 | 0.2648 | 0.9062 | 0.835 | 0.8322 | 0.2113 | 0.0658 |
| No log | 27.0 | 189 | 0.4262 | 0.84 | 0.2683 | 0.9967 | 0.8400 | 0.8291 | 0.2240 | 0.0670 |
| No log | 28.0 | 196 | 0.4329 | 0.83 | 0.2724 | 0.9016 | 0.83 | 0.8239 | 0.2016 | 0.0685 |
| No log | 29.0 | 203 | 0.4233 | 0.845 | 0.2653 | 0.9115 | 0.845 | 0.8375 | 0.2005 | 0.0634 |
| No log | 30.0 | 210 | 0.4204 | 0.84 | 0.2638 | 0.8892 | 0.8400 | 0.8348 | 0.2175 | 0.0633 |
| No log | 31.0 | 217 | 0.4240 | 0.83 | 0.2684 | 0.8871 | 0.83 | 0.8217 | 0.2128 | 0.0660 |
| No log | 32.0 | 224 | 0.4246 | 0.84 | 0.2677 | 0.8867 | 0.8400 | 0.8307 | 0.2117 | 0.0670 |
| No log | 33.0 | 231 | 0.4247 | 0.83 | 0.2690 | 0.8917 | 0.83 | 0.8202 | 0.2084 | 0.0679 |
| No log | 34.0 | 238 | 0.4218 | 0.84 | 0.2660 | 0.8848 | 0.8400 | 0.8326 | 0.2138 | 0.0663 |
| No log | 35.0 | 245 | 0.4220 | 0.845 | 0.2667 | 0.8926 | 0.845 | 0.8354 | 0.2109 | 0.0655 |
| No log | 36.0 | 252 | 0.4247 | 0.83 | 0.2694 | 0.8854 | 0.83 | 0.8202 | 0.2213 | 0.0683 |
| No log | 37.0 | 259 | 0.4239 | 0.84 | 0.2683 | 0.8849 | 0.8400 | 0.8326 | 0.2163 | 0.0670 |
| No log | 38.0 | 266 | 0.4239 | 0.835 | 0.2689 | 0.8876 | 0.835 | 0.8208 | 0.2118 | 0.0672 |
| No log | 39.0 | 273 | 0.4252 | 0.83 | 0.2696 | 0.8885 | 0.83 | 0.8180 | 0.2064 | 0.0682 |
| No log | 40.0 | 280 | 0.4237 | 0.835 | 0.2686 | 0.8867 | 0.835 | 0.8208 | 0.2211 | 0.0675 |
| No log | 41.0 | 287 | 0.4256 | 0.83 | 0.2700 | 0.8847 | 0.83 | 0.8180 | 0.2253 | 0.0682 |
| No log | 42.0 | 294 | 0.4243 | 0.835 | 0.2692 | 0.8839 | 0.835 | 0.8208 | 0.2130 | 0.0675 |
| No log | 43.0 | 301 | 0.4248 | 0.83 | 0.2695 | 0.8850 | 0.83 | 0.8180 | 0.2237 | 0.0682 |
| No log | 44.0 | 308 | 0.4246 | 0.83 | 0.2694 | 0.8847 | 0.83 | 0.8180 | 0.2383 | 0.0680 |
| No log | 45.0 | 315 | 0.4253 | 0.83 | 0.2699 | 0.8858 | 0.83 | 0.8180 | 0.2200 | 0.0681 |
| No log | 46.0 | 322 | 0.4246 | 0.83 | 0.2694 | 0.8857 | 0.83 | 0.8180 | 0.2311 | 0.0679 |
| No log | 47.0 | 329 | 0.4253 | 0.83 | 0.2700 | 0.8843 | 0.83 | 0.8180 | 0.2312 | 0.0682 |
| No log | 48.0 | 336 | 0.4252 | 0.83 | 0.2698 | 0.8830 | 0.83 | 0.8180 | 0.2177 | 0.0682 |
| No log | 49.0 | 343 | 0.4257 | 0.83 | 0.2703 | 0.8848 | 0.83 | 0.8180 | 0.2315 | 0.0683 |
| No log | 50.0 | 350 | 0.4256 | 0.83 | 0.2703 | 0.8833 | 0.83 | 0.8180 | 0.2331 | 0.0684 |
| No log | 51.0 | 357 | 0.4254 | 0.83 | 0.2703 | 0.8863 | 0.83 | 0.8180 | 0.2422 | 0.0681 |
| No log | 52.0 | 364 | 0.4261 | 0.83 | 0.2707 | 0.8864 | 0.83 | 0.8180 | 0.2424 | 0.0683 |
| No log | 53.0 | 371 | 0.4249 | 0.83 | 0.2700 | 0.8855 | 0.83 | 0.8180 | 0.2195 | 0.0679 |
| No log | 54.0 | 378 | 0.4255 | 0.83 | 0.2704 | 0.8846 | 0.83 | 0.8180 | 0.2342 | 0.0682 |
| No log | 55.0 | 385 | 0.4256 | 0.825 | 0.2704 | 0.8861 | 0.825 | 0.8116 | 0.2357 | 0.0682 |
| No log | 56.0 | 392 | 0.4264 | 0.83 | 0.2708 | 0.8853 | 0.83 | 0.8180 | 0.2345 | 0.0682 |
| No log | 57.0 | 399 | 0.4257 | 0.825 | 0.2706 | 0.8864 | 0.825 | 0.8116 | 0.2353 | 0.0682 |
| No log | 58.0 | 406 | 0.4258 | 0.825 | 0.2704 | 0.8841 | 0.825 | 0.8116 | 0.2271 | 0.0681 |
| No log | 59.0 | 413 | 0.4255 | 0.825 | 0.2703 | 0.8856 | 0.825 | 0.8116 | 0.2267 | 0.0680 |
| No log | 60.0 | 420 | 0.4259 | 0.825 | 0.2709 | 0.8842 | 0.825 | 0.8116 | 0.2269 | 0.0683 |
| No log | 61.0 | 427 | 0.4254 | 0.83 | 0.2702 | 0.8852 | 0.83 | 0.8180 | 0.2265 | 0.0680 |
| No log | 62.0 | 434 | 0.4261 | 0.83 | 0.2707 | 0.8851 | 0.83 | 0.8180 | 0.2346 | 0.0682 |
| No log | 63.0 | 441 | 0.4257 | 0.825 | 0.2704 | 0.8854 | 0.825 | 0.8116 | 0.2232 | 0.0682 |
| No log | 64.0 | 448 | 0.4261 | 0.825 | 0.2708 | 0.8845 | 0.825 | 0.8116 | 0.2264 | 0.0683 |
| No log | 65.0 | 455 | 0.4259 | 0.825 | 0.2706 | 0.8862 | 0.825 | 0.8116 | 0.2204 | 0.0682 |
| No log | 66.0 | 462 | 0.4258 | 0.825 | 0.2707 | 0.8856 | 0.825 | 0.8116 | 0.2193 | 0.0682 |
| No log | 67.0 | 469 | 0.4255 | 0.83 | 0.2703 | 0.8852 | 0.83 | 0.8180 | 0.2190 | 0.0681 |
| No log | 68.0 | 476 | 0.4260 | 0.825 | 0.2708 | 0.8860 | 0.825 | 0.8116 | 0.2196 | 0.0682 |
| No log | 69.0 | 483 | 0.4259 | 0.825 | 0.2708 | 0.8858 | 0.825 | 0.8116 | 0.2195 | 0.0682 |
| No log | 70.0 | 490 | 0.4255 | 0.825 | 0.2703 | 0.8857 | 0.825 | 0.8116 | 0.2135 | 0.0682 |
| No log | 71.0 | 497 | 0.4258 | 0.825 | 0.2707 | 0.8857 | 0.825 | 0.8116 | 0.2205 | 0.0681 |
| 0.1816 | 72.0 | 504 | 0.4261 | 0.825 | 0.2708 | 0.8857 | 0.825 | 0.8116 | 0.2198 | 0.0682 |
| 0.1816 | 73.0 | 511 | 0.4259 | 0.825 | 0.2706 | 0.8852 | 0.825 | 0.8116 | 0.2192 | 0.0682 |
| 0.1816 | 74.0 | 518 | 0.4259 | 0.825 | 0.2707 | 0.8856 | 0.825 | 0.8116 | 0.2290 | 0.0681 |
| 0.1816 | 75.0 | 525 | 0.4257 | 0.825 | 0.2706 | 0.8864 | 0.825 | 0.8116 | 0.2337 | 0.0681 |
| 0.1816 | 76.0 | 532 | 0.4259 | 0.825 | 0.2707 | 0.8855 | 0.825 | 0.8116 | 0.2211 | 0.0681 |
| 0.1816 | 77.0 | 539 | 0.4255 | 0.825 | 0.2704 | 0.8860 | 0.825 | 0.8116 | 0.2137 | 0.0680 |
| 0.1816 | 78.0 | 546 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2274 | 0.0682 |
| 0.1816 | 79.0 | 553 | 0.4260 | 0.825 | 0.2708 | 0.8859 | 0.825 | 0.8116 | 0.2209 | 0.0682 |
| 0.1816 | 80.0 | 560 | 0.4260 | 0.825 | 0.2708 | 0.8864 | 0.825 | 0.8116 | 0.2135 | 0.0681 |
| 0.1816 | 81.0 | 567 | 0.4259 | 0.825 | 0.2707 | 0.8859 | 0.825 | 0.8116 | 0.2134 | 0.0682 |
| 0.1816 | 82.0 | 574 | 0.4258 | 0.825 | 0.2706 | 0.8862 | 0.825 | 0.8116 | 0.2062 | 0.0681 |
| 0.1816 | 83.0 | 581 | 0.4259 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2204 | 0.0681 |
| 0.1816 | 84.0 | 588 | 0.4259 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2204 | 0.0681 |
| 0.1816 | 85.0 | 595 | 0.4257 | 0.825 | 0.2706 | 0.8861 | 0.825 | 0.8116 | 0.2141 | 0.0682 |
| 0.1816 | 86.0 | 602 | 0.4258 | 0.825 | 0.2707 | 0.8861 | 0.825 | 0.8116 | 0.2140 | 0.0682 |
| 0.1816 | 87.0 | 609 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2137 | 0.0680 |
| 0.1816 | 88.0 | 616 | 0.4259 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 89.0 | 623 | 0.4258 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2205 | 0.0681 |
| 0.1816 | 90.0 | 630 | 0.4259 | 0.825 | 0.2707 | 0.8865 | 0.825 | 0.8116 | 0.2053 | 0.0680 |
| 0.1816 | 91.0 | 637 | 0.4258 | 0.825 | 0.2706 | 0.8868 | 0.825 | 0.8116 | 0.2130 | 0.0681 |
| 0.1816 | 92.0 | 644 | 0.4258 | 0.825 | 0.2706 | 0.8870 | 0.825 | 0.8116 | 0.2129 | 0.0680 |
| 0.1816 | 93.0 | 651 | 0.4258 | 0.825 | 0.2706 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 94.0 | 658 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 95.0 | 665 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2053 | 0.0680 |
| 0.1816 | 96.0 | 672 | 0.4259 | 0.825 | 0.2707 | 0.8866 | 0.825 | 0.8116 | 0.2053 | 0.0681 |
| 0.1816 | 97.0 | 679 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 98.0 | 686 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0680 |
| 0.1816 | 99.0 | 693 | 0.4258 | 0.825 | 0.2707 | 0.8868 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
| 0.1816 | 100.0 | 700 | 0.4258 | 0.825 | 0.2707 | 0.8867 | 0.825 | 0.8116 | 0.2129 | 0.0681 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
opendiffusion/sentimentcheck | opendiffusion | 2023-07-10T16:58:49Z | 0 | 0 | tf-keras | [
"tf-keras",
"bert",
"region:us"
] | null | 2023-05-11T18:26:04Z |
# Intro
OpenDiffusion's SentimentCheck is an AI model built upon Tensorflow+Keras+Pickles. SentimentCheck harnesses the power of deep learning algorithms to accurately classify sentiment in text, making it a flexible tool for businesses, researchers, and developers.
## Usage
---
language:
- en
- nl
- de
- fr
- it
- es
license: mit
---
# bert-base-multilingual-uncased-sentiment
This a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.
## Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
| Dutch | 80k |
| German | 137k |
| French | 140k |
| Italian | 72k |
| Spanish | 50k |
## Accuracy
The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| English | 67% | 95%
| Dutch | 57% | 93%
| German | 61% | 94%
| French | 59% | 94%
| Italian | 59% | 95%
| Spanish | 58% | 95%
|
Umer1542/task-b-classification | Umer1542 | 2023-07-10T16:35:37Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-classification",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-10T15:47:23Z | ---
license: other
language:
- en
metrics:
- accuracy
- f1
- recall
pipeline_tag: text-classification
--- |
dashan1992/dsl2 | dashan1992 | 2023-07-10T16:35:33Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-10T16:34:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
yhyhy3/open_llama_7b_v2_med_instruct | yhyhy3 | 2023-07-10T16:22:39Z | 1,461 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"code",
"en",
"dataset:ehartford/dolphin",
"dataset:LinhDuong/chatdoctor-200k",
"dataset:sahil2801/code_instructions_120k",
"dataset:medalpaca/medical_meadow_mediqa",
"dataset:kaiokendev/SuperCOT-dataset",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T17:19:43Z | ---
license: apache-2.0
datasets:
- ehartford/dolphin
- LinhDuong/chatdoctor-200k
- sahil2801/code_instructions_120k
- medalpaca/medical_meadow_mediqa
- kaiokendev/SuperCOT-dataset
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned Open LLaMa model with 7B parameters, with specialities in medical QA and code instruction.
## Model Details
<!-- Provide a longer summary of what this model is. -->
- **Model type:** LlamaForCausalLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model (QLoRA):** [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
## How to Get Started with the Model
Use the code below to get started with the model.
```py
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'yhyhy3/open_llama_7b_v2_med_dolphin_qlora_merged'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = '''### Instruction: Answer the following question.
### Input: What is the capital of New Jersey?
### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
## Training Details
### Training Data
Converted the following datasets to alpaca:instruction format.
1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)
- ORCA style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
- Only used the 1 million GPT4 generated instructions file [flan1m-alpaca-uncensored.jsonl](https://huggingface.co/datasets/ehartford/dolphin/blob/main/flan1m-alpaca-uncensored.jsonl).
2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k)
- Refined dataset sourced from icliniq medical QA forum
3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k)
- Code instruction dataset generously created by Sahil Chaudhary from ThreeSixty AI
4. [medalpaca/medical_meadow_mediqa](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
- MEDIQA is a dataset of manually generated, question-driven summaries of multi and single document answers to consumer health questions from medalpaca group.
5. [kaiokendev/SuperCOT-dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
- Code instruction dataset generously created by Kaio Ken
### Training Procedure
Trained using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) QLoRa on [RunPod](https://www.runpod.io/console/gpu-cloud) 8x A6000 on Community Cloud for 3 epochs (~14 hours - ~$70).
<details>
<summary>axolotl training config:</summary>
```yaml
base_model: openlm-research/open_llama_7b_v2
base_model_config: openlm-research/open_llama_7b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
hub_model_id:
hf_use_auth_token:
datasets:
- path: json
type: alpaca
data_files: /disk/flan1m-alpaca-uncensored.jsonl
shards: 8
- path: sahil2801/code_instructions_120k
type: alpaca
- path: LinhDuong/chatdoctor-200k
type: alpaca
shards: 2
- path: kaiokendev/SuperCOT-dataset
type: alpaca
- path: medalpaca/medical_meadow_mediqa
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
adapter: qlora
lora_model_dir:
sequence_len: 2048
max_packed_sequence_len: 2048
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_mode: true
wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model: 'openllama_checkpoint'
output_dir: /disk/open_llama_7b_v2_dolphin_qlora
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 3
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 1000
eval_steps: 5000
save_steps:
debug:
deepspeed:
weight_decay: 0.0000001
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details> |
Den4ikAI/paraphrase_detector | Den4ikAI | 2023-07-10T16:13:56Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-10T16:09:10Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 312 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 21 with parameters:
```
{'batch_size': 2048, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 200,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 105,
"weight_decay": 1e-05
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 312, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
uw-madison/mra-base-512-4 | uw-madison | 2023-07-10T16:11:54Z | 1,482 | 1 | transformers | [
"transformers",
"pytorch",
"mra",
"fill-mask",
"arxiv:2207.10284",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-23T06:04:05Z | # MRA
MRA model for masked language modeling (MLM) for sequence length 512.
## About MRA
The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
*Transformers have emerged as a preferred model for many tasks in natural langugage processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.*
This model was contributed by [novice03](https://huggingface.co/novice03).
The original code can be found [here](https://github.com/mlpen/mra-attention). |
jordyvl/vit-tiny_tobacco3482_kd_MSE | jordyvl | 2023-07-10T15:57:47Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-10T15:22:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_tobacco3482_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_tobacco3482_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2482
- Accuracy: 0.82
- Brier Loss: 0.3226
- Nll: 0.8343
- F1 Micro: 0.82
- F1 Macro: 0.8090
- Ece: 0.2625
- Aurc: 0.0606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.6314 | 0.165 | 1.0293 | 8.4704 | 0.165 | 0.0552 | 0.3856 | 0.8464 |
| No log | 2.0 | 14 | 1.4800 | 0.12 | 0.9052 | 7.3409 | 0.12 | 0.0997 | 0.2469 | 0.8374 |
| No log | 3.0 | 21 | 1.0627 | 0.34 | 0.8400 | 5.5369 | 0.34 | 0.2006 | 0.3075 | 0.5758 |
| No log | 4.0 | 28 | 0.8429 | 0.42 | 0.7462 | 3.2442 | 0.4200 | 0.3362 | 0.3072 | 0.3692 |
| No log | 5.0 | 35 | 0.7060 | 0.535 | 0.6558 | 2.7042 | 0.535 | 0.4295 | 0.2993 | 0.2677 |
| No log | 6.0 | 42 | 0.5950 | 0.635 | 0.6056 | 2.0779 | 0.635 | 0.5021 | 0.3375 | 0.1948 |
| No log | 7.0 | 49 | 0.4865 | 0.67 | 0.5486 | 1.4919 | 0.67 | 0.5384 | 0.3633 | 0.1737 |
| No log | 8.0 | 56 | 0.4572 | 0.69 | 0.4897 | 1.4359 | 0.69 | 0.6106 | 0.2889 | 0.1355 |
| No log | 9.0 | 63 | 0.3932 | 0.72 | 0.4496 | 1.0748 | 0.72 | 0.6261 | 0.2914 | 0.1092 |
| No log | 10.0 | 70 | 0.3584 | 0.76 | 0.4091 | 1.1341 | 0.76 | 0.6747 | 0.2946 | 0.0937 |
| No log | 11.0 | 77 | 0.3516 | 0.785 | 0.3906 | 1.0586 | 0.785 | 0.7422 | 0.3026 | 0.0762 |
| No log | 12.0 | 84 | 0.3905 | 0.74 | 0.4155 | 1.1502 | 0.74 | 0.6677 | 0.2827 | 0.1211 |
| No log | 13.0 | 91 | 0.3346 | 0.775 | 0.3640 | 1.0888 | 0.775 | 0.7397 | 0.2743 | 0.0771 |
| No log | 14.0 | 98 | 0.3700 | 0.81 | 0.3728 | 0.9575 | 0.81 | 0.7941 | 0.3125 | 0.0709 |
| No log | 15.0 | 105 | 0.3346 | 0.8 | 0.3631 | 0.9636 | 0.8000 | 0.7731 | 0.3113 | 0.0662 |
| No log | 16.0 | 112 | 0.3084 | 0.785 | 0.3606 | 1.0243 | 0.785 | 0.7610 | 0.2883 | 0.0911 |
| No log | 17.0 | 119 | 0.3266 | 0.785 | 0.3375 | 1.0301 | 0.785 | 0.7603 | 0.2659 | 0.0687 |
| No log | 18.0 | 126 | 0.2987 | 0.805 | 0.3371 | 0.7961 | 0.805 | 0.7895 | 0.2764 | 0.0639 |
| No log | 19.0 | 133 | 0.3468 | 0.815 | 0.3428 | 0.9841 | 0.815 | 0.7884 | 0.2824 | 0.0641 |
| No log | 20.0 | 140 | 0.3111 | 0.81 | 0.3409 | 0.8873 | 0.81 | 0.8010 | 0.2978 | 0.0579 |
| No log | 21.0 | 147 | 0.3042 | 0.8 | 0.3392 | 0.8057 | 0.8000 | 0.7802 | 0.2611 | 0.0556 |
| No log | 22.0 | 154 | 0.2936 | 0.84 | 0.3359 | 0.8659 | 0.8400 | 0.8210 | 0.2920 | 0.0636 |
| No log | 23.0 | 161 | 0.3284 | 0.815 | 0.3457 | 0.9494 | 0.815 | 0.8007 | 0.2793 | 0.0615 |
| No log | 24.0 | 168 | 0.3184 | 0.805 | 0.3474 | 0.9087 | 0.805 | 0.7851 | 0.2793 | 0.0605 |
| No log | 25.0 | 175 | 0.3081 | 0.805 | 0.3357 | 1.0425 | 0.805 | 0.7757 | 0.2781 | 0.0646 |
| No log | 26.0 | 182 | 0.2743 | 0.815 | 0.3425 | 0.7898 | 0.815 | 0.8028 | 0.2814 | 0.0689 |
| No log | 27.0 | 189 | 0.2873 | 0.81 | 0.3254 | 0.8795 | 0.81 | 0.7904 | 0.2755 | 0.0637 |
| No log | 28.0 | 196 | 0.2650 | 0.82 | 0.3236 | 0.7462 | 0.82 | 0.7986 | 0.2714 | 0.0616 |
| No log | 29.0 | 203 | 0.2756 | 0.815 | 0.3310 | 0.8227 | 0.815 | 0.7973 | 0.2766 | 0.0664 |
| No log | 30.0 | 210 | 0.2711 | 0.83 | 0.3337 | 0.8340 | 0.83 | 0.8186 | 0.2992 | 0.0688 |
| No log | 31.0 | 217 | 0.2779 | 0.825 | 0.3234 | 0.8173 | 0.825 | 0.8100 | 0.2579 | 0.0589 |
| No log | 32.0 | 224 | 0.2679 | 0.82 | 0.3216 | 0.7441 | 0.82 | 0.8024 | 0.2940 | 0.0605 |
| No log | 33.0 | 231 | 0.2633 | 0.805 | 0.3277 | 0.8046 | 0.805 | 0.7871 | 0.2710 | 0.0644 |
| No log | 34.0 | 238 | 0.2705 | 0.805 | 0.3333 | 0.8661 | 0.805 | 0.7890 | 0.2626 | 0.0632 |
| No log | 35.0 | 245 | 0.2624 | 0.815 | 0.3295 | 0.8568 | 0.815 | 0.7900 | 0.2865 | 0.0651 |
| No log | 36.0 | 252 | 0.2654 | 0.805 | 0.3262 | 0.8075 | 0.805 | 0.7793 | 0.2726 | 0.0662 |
| No log | 37.0 | 259 | 0.2697 | 0.805 | 0.3293 | 0.8143 | 0.805 | 0.7857 | 0.2587 | 0.0623 |
| No log | 38.0 | 266 | 0.2548 | 0.805 | 0.3267 | 0.8028 | 0.805 | 0.7847 | 0.2606 | 0.0660 |
| No log | 39.0 | 273 | 0.2740 | 0.83 | 0.3218 | 0.8270 | 0.83 | 0.8172 | 0.2697 | 0.0566 |
| No log | 40.0 | 280 | 0.2572 | 0.81 | 0.3302 | 0.8573 | 0.81 | 0.7892 | 0.2722 | 0.0663 |
| No log | 41.0 | 287 | 0.2528 | 0.81 | 0.3300 | 0.8454 | 0.81 | 0.7980 | 0.2555 | 0.0673 |
| No log | 42.0 | 294 | 0.2590 | 0.815 | 0.3271 | 0.8393 | 0.815 | 0.8002 | 0.2554 | 0.0604 |
| No log | 43.0 | 301 | 0.2654 | 0.825 | 0.3273 | 0.8100 | 0.825 | 0.8155 | 0.2687 | 0.0595 |
| No log | 44.0 | 308 | 0.2506 | 0.805 | 0.3290 | 0.8551 | 0.805 | 0.7858 | 0.2425 | 0.0716 |
| No log | 45.0 | 315 | 0.2615 | 0.82 | 0.3255 | 0.8624 | 0.82 | 0.8007 | 0.2773 | 0.0586 |
| No log | 46.0 | 322 | 0.2487 | 0.815 | 0.3240 | 0.8416 | 0.815 | 0.7987 | 0.2757 | 0.0611 |
| No log | 47.0 | 329 | 0.2674 | 0.845 | 0.3207 | 0.8407 | 0.845 | 0.8227 | 0.2963 | 0.0553 |
| No log | 48.0 | 336 | 0.2522 | 0.805 | 0.3312 | 0.9376 | 0.805 | 0.7888 | 0.2714 | 0.0662 |
| No log | 49.0 | 343 | 0.2547 | 0.81 | 0.3280 | 0.7847 | 0.81 | 0.7870 | 0.2696 | 0.0688 |
| No log | 50.0 | 350 | 0.2523 | 0.81 | 0.3213 | 0.7968 | 0.81 | 0.7937 | 0.2599 | 0.0654 |
| No log | 51.0 | 357 | 0.2526 | 0.815 | 0.3291 | 0.8022 | 0.815 | 0.7994 | 0.2888 | 0.0669 |
| No log | 52.0 | 364 | 0.2568 | 0.835 | 0.3180 | 0.8317 | 0.835 | 0.8140 | 0.2750 | 0.0578 |
| No log | 53.0 | 371 | 0.2496 | 0.82 | 0.3267 | 0.8442 | 0.82 | 0.8068 | 0.2825 | 0.0602 |
| No log | 54.0 | 378 | 0.2602 | 0.82 | 0.3229 | 0.7963 | 0.82 | 0.8061 | 0.2714 | 0.0585 |
| No log | 55.0 | 385 | 0.2477 | 0.81 | 0.3237 | 0.8278 | 0.81 | 0.7937 | 0.2511 | 0.0631 |
| No log | 56.0 | 392 | 0.2508 | 0.83 | 0.3210 | 0.8302 | 0.83 | 0.8102 | 0.2706 | 0.0588 |
| No log | 57.0 | 399 | 0.2454 | 0.815 | 0.3240 | 0.8377 | 0.815 | 0.7948 | 0.2607 | 0.0643 |
| No log | 58.0 | 406 | 0.2488 | 0.815 | 0.3229 | 0.8308 | 0.815 | 0.7942 | 0.2569 | 0.0604 |
| No log | 59.0 | 413 | 0.2510 | 0.82 | 0.3223 | 0.8314 | 0.82 | 0.8093 | 0.2779 | 0.0603 |
| No log | 60.0 | 420 | 0.2499 | 0.82 | 0.3235 | 0.8401 | 0.82 | 0.8031 | 0.2578 | 0.0618 |
| No log | 61.0 | 427 | 0.2478 | 0.81 | 0.3227 | 0.8315 | 0.81 | 0.7933 | 0.2645 | 0.0615 |
| No log | 62.0 | 434 | 0.2460 | 0.82 | 0.3231 | 0.8364 | 0.82 | 0.8028 | 0.2795 | 0.0643 |
| No log | 63.0 | 441 | 0.2489 | 0.825 | 0.3224 | 0.8337 | 0.825 | 0.8156 | 0.2759 | 0.0604 |
| No log | 64.0 | 448 | 0.2482 | 0.825 | 0.3230 | 0.8320 | 0.825 | 0.8138 | 0.2753 | 0.0600 |
| No log | 65.0 | 455 | 0.2462 | 0.815 | 0.3231 | 0.8354 | 0.815 | 0.8008 | 0.2551 | 0.0625 |
| No log | 66.0 | 462 | 0.2470 | 0.815 | 0.3219 | 0.8338 | 0.815 | 0.8018 | 0.2729 | 0.0611 |
| No log | 67.0 | 469 | 0.2457 | 0.81 | 0.3231 | 0.8336 | 0.81 | 0.7930 | 0.2587 | 0.0638 |
| No log | 68.0 | 476 | 0.2472 | 0.815 | 0.3225 | 0.8334 | 0.815 | 0.8008 | 0.2706 | 0.0619 |
| No log | 69.0 | 483 | 0.2473 | 0.825 | 0.3223 | 0.8357 | 0.825 | 0.8165 | 0.2668 | 0.0611 |
| No log | 70.0 | 490 | 0.2481 | 0.81 | 0.3223 | 0.8343 | 0.81 | 0.7930 | 0.2685 | 0.0624 |
| No log | 71.0 | 497 | 0.2472 | 0.825 | 0.3227 | 0.8338 | 0.825 | 0.8117 | 0.2839 | 0.0601 |
| 0.232 | 72.0 | 504 | 0.2472 | 0.815 | 0.3220 | 0.8345 | 0.815 | 0.8018 | 0.2617 | 0.0615 |
| 0.232 | 73.0 | 511 | 0.2486 | 0.82 | 0.3218 | 0.8321 | 0.82 | 0.8086 | 0.2768 | 0.0610 |
| 0.232 | 74.0 | 518 | 0.2468 | 0.815 | 0.3219 | 0.8338 | 0.815 | 0.8008 | 0.2717 | 0.0621 |
| 0.232 | 75.0 | 525 | 0.2470 | 0.82 | 0.3223 | 0.8325 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
| 0.232 | 76.0 | 532 | 0.2474 | 0.825 | 0.3223 | 0.8322 | 0.825 | 0.8165 | 0.2723 | 0.0602 |
| 0.232 | 77.0 | 539 | 0.2476 | 0.805 | 0.3227 | 0.8345 | 0.805 | 0.7859 | 0.2589 | 0.0629 |
| 0.232 | 78.0 | 546 | 0.2479 | 0.82 | 0.3228 | 0.8336 | 0.82 | 0.8090 | 0.2674 | 0.0608 |
| 0.232 | 79.0 | 553 | 0.2478 | 0.82 | 0.3225 | 0.8349 | 0.82 | 0.8090 | 0.2624 | 0.0604 |
| 0.232 | 80.0 | 560 | 0.2477 | 0.81 | 0.3227 | 0.8337 | 0.81 | 0.7938 | 0.2577 | 0.0621 |
| 0.232 | 81.0 | 567 | 0.2478 | 0.82 | 0.3226 | 0.8336 | 0.82 | 0.8090 | 0.2670 | 0.0607 |
| 0.232 | 82.0 | 574 | 0.2480 | 0.825 | 0.3224 | 0.8340 | 0.825 | 0.8165 | 0.2673 | 0.0605 |
| 0.232 | 83.0 | 581 | 0.2479 | 0.82 | 0.3227 | 0.8347 | 0.82 | 0.8090 | 0.2564 | 0.0607 |
| 0.232 | 84.0 | 588 | 0.2480 | 0.82 | 0.3226 | 0.8342 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
| 0.232 | 85.0 | 595 | 0.2480 | 0.82 | 0.3225 | 0.8339 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
| 0.232 | 86.0 | 602 | 0.2479 | 0.825 | 0.3226 | 0.8339 | 0.825 | 0.8165 | 0.2677 | 0.0606 |
| 0.232 | 87.0 | 609 | 0.2479 | 0.82 | 0.3225 | 0.8339 | 0.82 | 0.8090 | 0.2624 | 0.0605 |
| 0.232 | 88.0 | 616 | 0.2481 | 0.825 | 0.3225 | 0.8343 | 0.825 | 0.8165 | 0.2675 | 0.0604 |
| 0.232 | 89.0 | 623 | 0.2481 | 0.825 | 0.3225 | 0.8341 | 0.825 | 0.8165 | 0.2722 | 0.0605 |
| 0.232 | 90.0 | 630 | 0.2481 | 0.82 | 0.3225 | 0.8341 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
| 0.232 | 91.0 | 637 | 0.2481 | 0.82 | 0.3226 | 0.8345 | 0.82 | 0.8090 | 0.2629 | 0.0608 |
| 0.232 | 92.0 | 644 | 0.2481 | 0.825 | 0.3226 | 0.8342 | 0.825 | 0.8165 | 0.2675 | 0.0605 |
| 0.232 | 93.0 | 651 | 0.2481 | 0.825 | 0.3225 | 0.8340 | 0.825 | 0.8165 | 0.2675 | 0.0605 |
| 0.232 | 94.0 | 658 | 0.2481 | 0.82 | 0.3225 | 0.8343 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
| 0.232 | 95.0 | 665 | 0.2482 | 0.82 | 0.3226 | 0.8345 | 0.82 | 0.8090 | 0.2627 | 0.0606 |
| 0.232 | 96.0 | 672 | 0.2482 | 0.82 | 0.3225 | 0.8343 | 0.82 | 0.8090 | 0.2627 | 0.0607 |
| 0.232 | 97.0 | 679 | 0.2482 | 0.82 | 0.3226 | 0.8344 | 0.82 | 0.8090 | 0.2627 | 0.0607 |
| 0.232 | 98.0 | 686 | 0.2482 | 0.82 | 0.3226 | 0.8344 | 0.82 | 0.8090 | 0.2626 | 0.0606 |
| 0.232 | 99.0 | 693 | 0.2482 | 0.82 | 0.3226 | 0.8343 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
| 0.232 | 100.0 | 700 | 0.2482 | 0.82 | 0.3226 | 0.8343 | 0.82 | 0.8090 | 0.2625 | 0.0606 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sianadouglas/ensembletest | sianadouglas | 2023-07-10T15:48:14Z | 0 | 0 | null | [
"en",
"license:other",
"region:us"
] | null | 2023-07-10T15:47:23Z | ---
license: other
language:
- en
--- |
Khushnur/t5-base-end2end-questions-generation_squad | Khushnur | 2023-07-10T15:47:50Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-10T15:02:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-end2end-questions-generation_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-end2end-questions-generation_squad
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5658 | 0.67 | 100 | 1.8866 |
| 1.958 | 1.35 | 200 | 1.7150 |
| 1.8516 | 2.02 | 300 | 1.6701 |
| 1.7965 | 2.69 | 400 | 1.6560 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mgmeskill/Pixelcopter-PLE-v0 | mgmeskill | 2023-07-10T15:38:32Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T15:26:11Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 37.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid | tyavika | 2023-07-10T15:31:42Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-09T20:06:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Bert_CNN512LSTM256NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Bert_CNN512LSTM256NoBid
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7267 | 1.0 | 3290 | 1.5092 |
| 1.2394 | 2.0 | 6580 | 1.3933 |
| 0.8348 | 3.0 | 9870 | 1.5591 |
| 0.542 | 4.0 | 13160 | 1.6667 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MnLgt/textual_inversion_muir_1_5 | MnLgt | 2023-07-10T15:31:36Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-10T14:16:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - jordandavis/textual_inversion_muir_1_5
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
agercas/speecht5_finetuned_voxpopuli_nl | agercas | 2023-07-10T15:27:22Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-10T09:21:57Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5221 | 4.3 | 1000 | 0.4774 |
| 0.505 | 8.61 | 2000 | 0.4648 |
| 0.4929 | 12.91 | 3000 | 0.4583 |
| 0.4901 | 17.21 | 4000 | 0.4572 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/afriberta-finetuned-hausa | grace-pro | 2023-07-10T15:26:48Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-10T14:49:51Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1242
- Precision: 0.7104
- Recall: 0.5095
- F1: 0.5934
- Accuracy: 0.9647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1369 | 1.0 | 2624 | 0.1256 | 0.6856 | 0.4541 | 0.5463 | 0.9614 |
| 0.1103 | 2.0 | 5248 | 0.1195 | 0.7014 | 0.4947 | 0.5802 | 0.9637 |
| 0.0868 | 3.0 | 7872 | 0.1242 | 0.7104 | 0.5095 | 0.5934 | 0.9647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dariowsz/wav2vec2-base-finetuned-gtzan | dariowsz | 2023-07-10T15:03:27Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-04T13:47:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5537
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7898 | 1.0 | 113 | 1.8052 | 0.45 |
| 1.4297 | 2.0 | 226 | 1.2229 | 0.62 |
| 1.041 | 3.0 | 339 | 0.9934 | 0.65 |
| 1.3882 | 4.0 | 452 | 1.1735 | 0.62 |
| 0.7248 | 5.0 | 565 | 0.8461 | 0.69 |
| 0.6128 | 6.0 | 678 | 0.7391 | 0.75 |
| 0.3225 | 7.0 | 791 | 0.8754 | 0.74 |
| 0.6483 | 8.0 | 904 | 0.8341 | 0.79 |
| 0.2755 | 9.0 | 1017 | 0.5537 | 0.88 |
| 0.4398 | 10.0 | 1130 | 0.6076 | 0.85 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NICFRU/bart-base-paraphrasing-news | NICFRU | 2023-07-10T15:02:02Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-10T14:46:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-paraphrasing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-paraphrasing
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6617
- Rouge1: 57.7088
- Rouge2: 51.0096
- Rougel: 54.7514
- Rougelsum: 56.3943
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.2 | 10 | 0.5263 | 58.2676 | 51.5842 | 55.5057 | 57.1584 | 19.94 |
| No log | 0.4 | 20 | 0.5050 | 56.1604 | 48.7383 | 54.0373 | 55.372 | 20.0 |
| No log | 0.6 | 30 | 0.4674 | 58.0617 | 51.4993 | 56.0368 | 56.9665 | 20.0 |
| No log | 0.8 | 40 | 0.4545 | 57.5375 | 51.0203 | 55.5247 | 56.5761 | 19.94 |
| No log | 1.0 | 50 | 0.4373 | 57.7263 | 50.8021 | 55.0549 | 56.35 | 19.98 |
| No log | 1.2 | 60 | 0.4313 | 57.87 | 50.9904 | 54.9727 | 56.5379 | 19.97 |
| No log | 1.4 | 70 | 0.4855 | 56.5101 | 49.3124 | 54.1572 | 55.0671 | 20.0 |
| No log | 1.6 | 80 | 0.4202 | 56.6535 | 50.0302 | 53.6891 | 55.1016 | 19.96 |
| No log | 1.8 | 90 | 0.4544 | 57.315 | 50.6289 | 54.642 | 55.7326 | 19.95 |
| 0.5858 | 2.0 | 100 | 0.4157 | 56.4569 | 48.8105 | 53.937 | 55.3515 | 20.0 |
| 0.5858 | 2.2 | 110 | 0.4555 | 57.8424 | 51.5966 | 55.6655 | 56.6862 | 20.0 |
| 0.5858 | 2.4 | 120 | 0.4196 | 58.2562 | 51.7596 | 55.5085 | 57.1823 | 19.97 |
| 0.5858 | 2.6 | 130 | 0.4334 | 58.6906 | 51.6106 | 55.6631 | 57.5254 | 19.89 |
| 0.5858 | 2.8 | 140 | 0.4710 | 56.5401 | 49.33 | 53.8792 | 55.0282 | 20.0 |
| 0.5858 | 3.0 | 150 | 0.4357 | 58.2083 | 52.0049 | 55.9938 | 57.1928 | 20.0 |
| 0.5858 | 3.2 | 160 | 0.4735 | 58.8112 | 52.2196 | 56.5004 | 57.7703 | 19.94 |
| 0.5858 | 3.4 | 170 | 0.4428 | 57.6778 | 50.6377 | 54.8752 | 56.4778 | 20.0 |
| 0.5858 | 3.6 | 180 | 0.4983 | 57.4124 | 50.4244 | 54.6163 | 56.0992 | 20.0 |
| 0.5858 | 3.8 | 190 | 0.4620 | 58.0701 | 51.5021 | 55.7222 | 56.8737 | 20.0 |
| 0.2865 | 4.0 | 200 | 0.4502 | 59.1191 | 52.7516 | 56.4389 | 57.7153 | 20.0 |
| 0.2865 | 4.2 | 210 | 0.4805 | 58.9064 | 52.7148 | 56.1058 | 57.6709 | 20.0 |
| 0.2865 | 4.4 | 220 | 0.4755 | 58.6883 | 52.1464 | 55.9164 | 57.3825 | 20.0 |
| 0.2865 | 4.6 | 230 | 0.4524 | 58.9916 | 52.1101 | 56.4116 | 57.9468 | 19.9 |
| 0.2865 | 4.8 | 240 | 0.4726 | 58.9953 | 52.8173 | 56.5846 | 58.0805 | 20.0 |
| 0.2865 | 5.0 | 250 | 0.4841 | 58.1058 | 51.614 | 55.3374 | 56.7617 | 20.0 |
| 0.2865 | 5.2 | 260 | 0.5047 | 58.2785 | 51.1874 | 55.5336 | 56.8795 | 20.0 |
| 0.2865 | 5.4 | 270 | 0.4658 | 57.2753 | 49.6038 | 53.9588 | 55.6038 | 19.91 |
| 0.2865 | 5.6 | 280 | 0.5261 | 58.1691 | 51.5254 | 55.2685 | 56.7787 | 20.0 |
| 0.2865 | 5.8 | 290 | 0.4833 | 57.8088 | 51.2838 | 54.8739 | 56.4374 | 20.0 |
| 0.1668 | 6.0 | 300 | 0.5067 | 58.2021 | 51.3629 | 55.3548 | 56.9093 | 19.99 |
| 0.1668 | 6.2 | 310 | 0.5461 | 58.0327 | 51.4051 | 55.3426 | 56.7923 | 20.0 |
| 0.1668 | 6.4 | 320 | 0.5463 | 58.1027 | 51.3706 | 55.1733 | 56.7923 | 19.9 |
| 0.1668 | 6.6 | 330 | 0.5837 | 57.6284 | 50.8245 | 54.6253 | 56.2127 | 20.0 |
| 0.1668 | 6.8 | 340 | 0.5221 | 58.0869 | 51.5448 | 55.4226 | 56.7532 | 20.0 |
| 0.1668 | 7.0 | 350 | 0.5433 | 58.7676 | 52.0403 | 56.2634 | 57.6441 | 20.0 |
| 0.1668 | 7.2 | 360 | 0.5498 | 57.9172 | 50.9727 | 55.1006 | 56.6018 | 20.0 |
| 0.1668 | 7.4 | 370 | 0.5581 | 57.4669 | 50.698 | 54.6448 | 56.1325 | 20.0 |
| 0.1668 | 7.6 | 380 | 0.5526 | 57.0821 | 50.298 | 54.1635 | 55.8059 | 20.0 |
| 0.1668 | 7.8 | 390 | 0.5548 | 57.5422 | 50.2734 | 54.2446 | 56.1223 | 20.0 |
| 0.1071 | 8.0 | 400 | 0.5620 | 57.4548 | 50.2657 | 54.5094 | 55.9422 | 20.0 |
| 0.1071 | 8.2 | 410 | 0.5772 | 57.4144 | 50.2443 | 54.5173 | 55.9331 | 20.0 |
| 0.1071 | 8.4 | 420 | 0.5857 | 57.2975 | 50.2116 | 54.5918 | 55.9931 | 20.0 |
| 0.1071 | 8.6 | 430 | 0.5827 | 58.4767 | 51.4318 | 55.4792 | 57.1284 | 20.0 |
| 0.1071 | 8.8 | 440 | 0.5728 | 58.4414 | 51.3523 | 55.2838 | 57.202 | 20.0 |
| 0.1071 | 9.0 | 450 | 0.5919 | 58.0499 | 51.3783 | 55.0748 | 56.6939 | 20.0 |
| 0.1071 | 9.2 | 460 | 0.5937 | 57.7604 | 50.845 | 54.8941 | 56.351 | 20.0 |
| 0.1071 | 9.4 | 470 | 0.5970 | 57.3655 | 50.4126 | 54.4522 | 55.7815 | 20.0 |
| 0.1071 | 9.6 | 480 | 0.5911 | 58.203 | 51.0367 | 55.3215 | 56.8485 | 20.0 |
| 0.1071 | 9.8 | 490 | 0.6121 | 58.2898 | 51.2749 | 55.4292 | 57.0241 | 20.0 |
| 0.0718 | 10.0 | 500 | 0.5903 | 58.2487 | 51.3838 | 55.4237 | 56.8863 | 20.0 |
| 0.0718 | 10.2 | 510 | 0.5983 | 58.2681 | 51.0925 | 55.2887 | 56.9562 | 20.0 |
| 0.0718 | 10.4 | 520 | 0.6308 | 57.9797 | 50.7386 | 54.995 | 56.5939 | 20.0 |
| 0.0718 | 10.6 | 530 | 0.6307 | 57.6269 | 50.5515 | 54.446 | 56.1544 | 20.0 |
| 0.0718 | 10.8 | 540 | 0.6173 | 57.9545 | 51.1005 | 54.9406 | 56.5732 | 20.0 |
| 0.0718 | 11.0 | 550 | 0.6322 | 58.3718 | 51.4321 | 55.4241 | 57.1879 | 20.0 |
| 0.0718 | 11.2 | 560 | 0.6027 | 58.6581 | 51.8607 | 55.6436 | 57.32 | 20.0 |
| 0.0718 | 11.4 | 570 | 0.6140 | 58.6476 | 51.7822 | 55.5845 | 57.3018 | 20.0 |
| 0.0718 | 11.6 | 580 | 0.6184 | 59.2454 | 52.4204 | 56.2174 | 57.9278 | 20.0 |
| 0.0718 | 11.8 | 590 | 0.6281 | 59.2945 | 52.8165 | 56.547 | 58.0674 | 20.0 |
| 0.0512 | 12.0 | 600 | 0.6128 | 58.2165 | 51.3689 | 55.37 | 56.8342 | 20.0 |
| 0.0512 | 12.2 | 610 | 0.6482 | 57.9196 | 50.9793 | 55.0883 | 56.6986 | 20.0 |
| 0.0512 | 12.4 | 620 | 0.6267 | 57.4782 | 50.1118 | 54.2802 | 55.8872 | 20.0 |
| 0.0512 | 12.6 | 630 | 0.6198 | 57.457 | 50.4079 | 54.2449 | 55.8118 | 20.0 |
| 0.0512 | 12.8 | 640 | 0.6500 | 57.6903 | 51.0627 | 55.0743 | 56.3025 | 20.0 |
| 0.0512 | 13.0 | 650 | 0.6265 | 57.4394 | 50.9013 | 54.7936 | 56.1688 | 20.0 |
| 0.0512 | 13.2 | 660 | 0.6817 | 58.4345 | 51.7087 | 55.291 | 57.0057 | 20.0 |
| 0.0512 | 13.4 | 670 | 0.6322 | 57.869 | 50.9503 | 54.8937 | 56.5178 | 20.0 |
| 0.0512 | 13.6 | 680 | 0.6424 | 57.8285 | 51.1014 | 55.0072 | 56.5022 | 20.0 |
| 0.0512 | 13.8 | 690 | 0.6668 | 58.7067 | 51.9929 | 55.5044 | 57.1517 | 20.0 |
| 0.0397 | 14.0 | 700 | 0.6537 | 58.8717 | 52.4036 | 55.6521 | 57.4855 | 20.0 |
| 0.0397 | 14.2 | 710 | 0.6463 | 58.9623 | 52.4749 | 55.8145 | 57.8095 | 20.0 |
| 0.0397 | 14.4 | 720 | 0.6630 | 58.8097 | 52.1997 | 55.8204 | 57.6325 | 20.0 |
| 0.0397 | 14.6 | 730 | 0.6839 | 59.0479 | 52.6573 | 56.0439 | 57.7322 | 20.0 |
| 0.0397 | 14.8 | 740 | 0.6541 | 59.2854 | 52.6109 | 56.1891 | 57.9446 | 20.0 |
| 0.0397 | 15.0 | 750 | 0.6486 | 58.8419 | 52.2004 | 55.8071 | 57.49 | 20.0 |
| 0.0397 | 15.2 | 760 | 0.6578 | 57.6161 | 50.7276 | 54.5514 | 56.2359 | 20.0 |
| 0.0397 | 15.4 | 770 | 0.6673 | 57.5458 | 50.8286 | 54.4597 | 56.1513 | 20.0 |
| 0.0397 | 15.6 | 780 | 0.6624 | 57.6634 | 51.0017 | 54.6769 | 56.3837 | 20.0 |
| 0.0397 | 15.8 | 790 | 0.6469 | 57.9037 | 51.137 | 54.8939 | 56.6427 | 20.0 |
| 0.0301 | 16.0 | 800 | 0.6373 | 57.8696 | 51.0899 | 54.8543 | 56.4596 | 20.0 |
| 0.0301 | 16.2 | 810 | 0.6712 | 58.614 | 52.0052 | 55.6436 | 57.3211 | 20.0 |
| 0.0301 | 16.4 | 820 | 0.6812 | 58.5214 | 51.8911 | 55.7447 | 57.2663 | 20.0 |
| 0.0301 | 16.6 | 830 | 0.6716 | 58.5818 | 51.929 | 55.7993 | 57.4064 | 20.0 |
| 0.0301 | 16.8 | 840 | 0.6590 | 57.745 | 51.0481 | 54.8545 | 56.4781 | 20.0 |
| 0.0301 | 17.0 | 850 | 0.6695 | 57.6663 | 50.9646 | 54.7863 | 56.3687 | 20.0 |
| 0.0301 | 17.2 | 860 | 0.6858 | 57.5552 | 51.0436 | 54.7092 | 56.3079 | 20.0 |
| 0.0301 | 17.4 | 870 | 0.6840 | 57.9091 | 51.3823 | 54.8309 | 56.6186 | 20.0 |
| 0.0301 | 17.6 | 880 | 0.6751 | 57.8223 | 51.1688 | 54.7562 | 56.5558 | 20.0 |
| 0.0301 | 17.8 | 890 | 0.6589 | 57.9956 | 51.1425 | 54.9509 | 56.6868 | 20.0 |
| 0.0482 | 18.0 | 900 | 0.6634 | 58.0392 | 51.3121 | 55.0726 | 56.7878 | 20.0 |
| 0.0482 | 18.2 | 910 | 0.6907 | 58.2021 | 51.4548 | 55.1874 | 56.91 | 20.0 |
| 0.0482 | 18.4 | 920 | 0.6977 | 58.1124 | 51.4254 | 55.062 | 56.8412 | 20.0 |
| 0.0482 | 18.6 | 930 | 0.6832 | 58.0776 | 51.3168 | 55.0849 | 56.8226 | 20.0 |
| 0.0482 | 18.8 | 940 | 0.6672 | 57.925 | 51.2475 | 54.9661 | 56.655 | 20.0 |
| 0.0482 | 19.0 | 950 | 0.6582 | 57.9285 | 51.2483 | 54.9744 | 56.6609 | 20.0 |
| 0.0482 | 19.2 | 960 | 0.6575 | 57.9285 | 51.2483 | 54.9744 | 56.6609 | 20.0 |
| 0.0482 | 19.4 | 970 | 0.6619 | 57.8961 | 51.2097 | 54.9475 | 56.6344 | 20.0 |
| 0.0482 | 19.6 | 980 | 0.6658 | 57.8961 | 51.2097 | 54.9475 | 56.6344 | 20.0 |
| 0.0482 | 19.8 | 990 | 0.6635 | 57.7222 | 51.0096 | 54.8166 | 56.4623 | 20.0 |
| 0.0201 | 20.0 | 1000 | 0.6617 | 57.7088 | 51.0096 | 54.7514 | 56.3943 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alienware/layoutlmv3-finetuned-cord_100 | alienware | 2023-07-10T15:01:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T12:32:12Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9569093610698366
- name: Recall
type: recall
value: 0.9640718562874252
- name: F1
type: f1
value: 0.9604772557792692
- name: Accuracy
type: accuracy
value: 0.9681663837011885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1720
- Precision: 0.9569
- Recall: 0.9641
- F1: 0.9605
- Accuracy: 0.9682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 0.3320 | 0.9011 | 0.9207 | 0.9108 | 0.9253 |
| 0.3502 | 3.12 | 500 | 0.2811 | 0.9281 | 0.9371 | 0.9326 | 0.9427 |
| 0.3502 | 4.69 | 750 | 0.2429 | 0.9210 | 0.9341 | 0.9275 | 0.9435 |
| 0.162 | 6.25 | 1000 | 0.2264 | 0.9385 | 0.9476 | 0.9430 | 0.9542 |
| 0.162 | 7.81 | 1250 | 0.1996 | 0.9373 | 0.9513 | 0.9443 | 0.9601 |
| 0.0971 | 9.38 | 1500 | 0.1686 | 0.9569 | 0.9633 | 0.9601 | 0.9690 |
| 0.0971 | 10.94 | 1750 | 0.1814 | 0.9532 | 0.9603 | 0.9567 | 0.9652 |
| 0.0704 | 12.5 | 2000 | 0.1915 | 0.9539 | 0.9611 | 0.9575 | 0.9656 |
| 0.0704 | 14.06 | 2250 | 0.1833 | 0.9590 | 0.9633 | 0.9612 | 0.9677 |
| 0.0513 | 15.62 | 2500 | 0.1720 | 0.9569 | 0.9641 | 0.9605 | 0.9682 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V10.19 | SHENMU007 | 2023-07-10T15:01:47Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-07T08:50:51Z | ---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rickareo/distilbert-base-uncased-finetuned-emotion | rickareo | 2023-07-10T14:59:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-10T14:44:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9229910973969778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8271 | 1.0 | 250 | 0.3166 | 0.903 | 0.8989 |
| 0.2469 | 2.0 | 500 | 0.2155 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
ericNguyen0132/DepRoBERTa-2ndStage | ericNguyen0132 | 2023-07-10T14:56:14Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-10T13:42:58Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: DepRoBERTa-2ndStage
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DepRoBERTa-2ndStage
This model is a fine-tuned version of [rafalposwiata/deproberta-large-v1](https://huggingface.co/rafalposwiata/deproberta-large-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6330
- Accuracy: 0.855
- F1: 0.9134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3572 | 0.8617 | 0.9224 |
| 0.4953 | 2.0 | 938 | 0.3593 | 0.8783 | 0.9315 |
| 0.3493 | 3.0 | 1407 | 0.4274 | 0.8483 | 0.9091 |
| 0.313 | 4.0 | 1876 | 0.5488 | 0.8617 | 0.9187 |
| 0.2622 | 5.0 | 2345 | 0.6330 | 0.855 | 0.9134 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pcuenq/falcon-7b-instruct-transformers | pcuenq | 2023-07-10T14:54:25Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T12:57:31Z | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
license: apache-2.0
duplicated_from: pcuenq/falcon-7b-instruct
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost fron HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost]((https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positionnal embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)
## Citation
*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected] |
firecoral/ppo-LunarLander-v2 | firecoral | 2023-07-10T14:49:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T14:49:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.63 +/- 20.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Nyme/textual_inversion_cat | Nyme | 2023-07-10T14:49:16Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-10T09:17:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Nyme/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
Birchlabs/llama-13b-stepwise-adapter | Birchlabs | 2023-07-10T14:37:32Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T13:43:18Z | ---
license: apache-2.0
---
Finetunes Llama-13b+Alaca to solve problems via stepwise reasoning (OpenAI [PRM800k dataset](https://github.com/openai/prm800k), or rather our postprocessed version, [`Birchlabs/openai-prm800k-solutions-only`](https://huggingface.co/datasets/Birchlabs/openai-prm800k-solutions-only)).
## Model description
This is a fork of [`llama-13b`](https://huggingface.co/huggyllama/llama-13b) + [`chansung/alpaca-lora-13b`](https://huggingface.co/chansung/alpaca-lora-13b).
That is: we loaded Llama-13b, we applied Alpaca LoRA, expanded vocabulary, then QLoRA 4-bit finetuned from there.
Parts:
- base model [`llama-13b`](https://huggingface.co/huggyllama/llama-13b)
- LoRA 0 [`chansung/alpaca-lora-13b`](https://huggingface.co/chansung/alpaca-lora-13b)
- LoRA 1 [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter) (this)
- tokenizer [`Birchlabs/llama-13b-stepwise-tokenizer`](https://huggingface.co/Birchlabs/llama-13b-stepwise-tokenizer)
- finetuned input/output embedding layers: [`Birchlabs/llama-13b-stepwise-embeddings`](https://huggingface.co/Birchlabs/llama-13b-stepwise-embeddings)
## Training
Trained using [`qlora.py`](https://github.com/scottlogic-alex/qlora/blob/stepwise/qlora.py) from our [`stepwise`](https://github.com/scottlogic-alex/qlora/tree/stepwise) branch of [qlora](https://github.com/artidoro/qlora).
Known-good as of commit [`522d86b`](https://github.com/scottlogic-alex/qlora/blob/522d86b447d9fe85e99ece33141fb37c4e947cda/qlora.py).
`python -m qlora --model_name_or_path huggyllama/llama-13b --lora_name_or_path chansung/alpaca-lora-13b --dataset prm800k-solutions --dataset_format prm800k-solutions --bf16 --max_memory_MB 24000 --use_bos_token_in_prompt --truncate_toward_center --source_max_len 184 --target_max_len 998 --gradient_accumulation_steps 4 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --learning_rate 0.0002 --run_name 13b_alpaca_special_tokens_long --report_to wandb --save_steps 64 --save_total_limit 3 --max_steps 1664 --evaluation_strategy steps --eval_steps 64 --generate_steps 16 --register_process_supervision_tokens`
- [(Private) W&B run](https://wandb.ai/scottlogic/llm-stepwise/runs/nvdyo6aw?workspace=user-birchlabs)
- [(Public) W&B report](https://api.wandb.ai/links/scottlogic/65wo5d2o)
## Usage
You can load using [`evaluate.py`](https://github.com/scottlogic-alex/qlora/blob/stepwise/evaluate.py#L209-L278) from our [`stepwise`](https://github.com/scottlogic-alex/qlora/tree/stepwise) branch of [qlora](https://github.com/artidoro/qlora).
Known-good as of commit [`522d86b`](https://github.com/scottlogic-alex/qlora/blob/522d86b447d9fe85e99ece33141fb37c4e947cda/evaluate.py).
Download `embed_tokens.pt` and `lm_head.pt` from [`Birchlabs/llama-13b-stepwise-embeddings`](https://huggingface.co/Birchlabs/llama-13b-stepwise-embeddings/tree/main), then run evaluator like so:
```bash
python -m evaluate \
--model_name_or_path huggyllama/llama-13b \
--base_lora_model_name_or_path chansung/alpaca-lora-13b \
--tokenizer_model_name_or_path Birchlabs/llama-13b-stepwise-tokenizer \
--lora_model_name_or_path Birchlabs/llama-13b-stepwise-adapter \
--input_embedding_path embed_tokens.pt \
--output_embedding_path lm_head.pt \
--bf16 \
--use_bos_token_in_prompt \
--overrun_countermeasures False
```
## Prompt style
The model is finetuned for instruction-response.
`prompt = '<s>Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nHow do I feed a family of 5 without ingredients?\n\n### Response:\n<|step_start|>'`
## Example output:
`$ How do I drink a matcha latte?`
`To drink a matcha latte, I should first remove the matcha from the bag. <|step_end|> <|step_start|> I can do that by opening the bag and shaking out the matcha into a glass or bowl. <|step_end|> <|step_start|> Then, I should add some hot water into the glass or bowl and start stirring the matcha. <|step_end|> <|step_start|> This is to make the matcha easier to mix with the water. <|step_end|> <|step_start|> Next, I should add milk into the glass or bowl and continue stirring. <|step_end|> <|step_start|> This is to make the matcha and the water form a thick mixture. <|step_end|> <|step_start|> Finally, I should add ice cubes into the glass or bowl and stir again. <|step_end|> <|step_start|> This is to make the mixture even thicker and chilled. <|step_end|> <|step_start|> So, a matcha latte is a drink that is a mixture of matcha, water, and milk, with some ice cubes. <|step_end|> <|step_start|> <|answer_start|> To make a matcha latte, I should remove the matcha from the bag, add some hot water into a glass or bowl and stir, add milk into the glass or bowl and stir as well, and finally add ice cubes and stir again. <|step_end|> <|step_start|> <|answer_start|> To drink a matcha latte, I should remove`
`$ What is the third Fibonacci number?`
`I know that the first two Fibonacci numbers are 1 and 1, since they are just the ones that appear in the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, and so on, where each term is the sum of the previous two. <|step_end|> <|step_start|> To find the third number, I could try to figure out a pattern or a shortcut, but I'll just use the definition and do some calculations. <|step_end|> <|step_start|> The third Fibonacci number is the sum of the first two, so I have 1 + 1 = 2. <|step_end|> <|step_start|> <|answer_start|> 2 <|answer_end|>`
`$ How do I make a cup of tea?`
`Ok, the first step is to open the cup of tea. <|step_end|> <|step_start|> Then the second step is to jump in the tea. <|step_end|> <|step_start|> And finally the third step is to drink the jump. <|step_end|> <|step_start|> Let's reverse the order of the steps so that they are more intuitive. <|step_end|> <|step_start|> The first step is to jump in the tea. <|step_end|> <|step_start|> Then the second step is to open the cup of tea. <|step_end|> <|step_start|> And finally the third step is to drink the jump. <|step_end|> <|step_start|> So, I think this is a good way to make a cup of tea. <|step_end|> <|step_start|> Yes, it is. <|answer_start|> How do I make a cup of tea? <|answer_end|>`
`$ How do I open a jar?`
`To open a jar, I need to use a key that fits into the lock on the jar. <|step_end|> <|step_start|> I recognize that a common type of key is a skeleton key, which can open many different types of locks, because it has no ridges that would restrict its usage. <|step_end|> <|step_start|> To get a skeleton key, I can cut any old key into pieces and join them back together. <|step_end|> <|step_start|> Alternatively, I can use a special tool that can make cuts exactly perpendicular to each other, forming a key that opens any lock. <|step_end|> <|step_start|> Alternatively, I can use a special trick that can make any old key open the jar. <|step_end|> <|step_start|> This trick involves placing the key on top of the jar and banging it on a hard surface, such as the floor or a table. <|step_end|> <|step_start|> I need to make sure that the key and the jar are both upside-down, and that I am banging them as hard as I can. <|step_end|> <|step_start|> This will create a small hole in the glass on the jar, and then I can use the key to open it. <|step_end|> <|step_start|> I need to be careful not to get cut by the glass, and I need to check if the jar is empty first, otherwise I might poke my finger and` |
EmbraceAGI/chatglm2.cpp | EmbraceAGI | 2023-07-10T14:34:47Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-10T08:37:02Z | ---
license: openrail
---
chatglm2-ggml.bin model for chatglm.cpp.
```
https://github.com/li-plus/chatglm.cpp
```
|
marsh5/Reinforce-cartpole | marsh5 | 2023-07-10T14:31:44Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T14:31:34Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JennnDexter/textual_inversion | JennnDexter | 2023-07-10T14:24:31Z | 29 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-07T11:57:47Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - JennnDexter/textual_inversion
These are textual inversion adaption weights for CompVis/stable-diffusion-v1-4. You can find some example images in the following.
|
LmloCin/TEST_MODEL | LmloCin | 2023-07-10T14:07:36Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-10T13:34:58Z |
import socket,warnings
try:
socket.setdefaulttimeout(1)
socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(('1.1.1.1', 53))
except socket.error as ex: raise Exception("STOP: No internet. Click '>|' in top right and set 'Internet' switch to on")
import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')
if iskaggle:
!pip install -Uqq fastai
# Skip this cell if you already have duckduckgo_search installed
!pip install -Uqq duckduckgo_search
from duckduckgo_search import ddg_images
from fastcore.all import *
def search_images(term, max_images=200): return L(ddg_images(term, max_results=max_images)).itemgot('image')
urls = search_images('duck images', max_images=1)
urls[0]
from fastdownload import download_url
dest = 'duck.jpg'
download_url(urls[0], dest, show_progress=False)
from fastai.vision.all import *
im = Image.open(dest)
im.to_thumb(256,256)
download_url(search_images('lakes photos', max_images=1)[0], 'lakes.jpg', show_progress=False)
Image.open('lakes.jpg').to_thumb(256,256)
searches = 'lakes','duck'
path = Path('duck_or_not')
from time import sleep
for o in searches:
dest = (path/o)
dest.mkdir(exist_ok=True, parents=True)
download_images(dest, urls=search_images(f'{o} photo'))
sleep(2) # Pause between searches to avoid over-loading server
download_images(dest, urls=search_images(f'{o} sun photo'))
sleep(2)
download_images(dest, urls=search_images(f'{o} shade photo'))
sleep(2)
resize_images(path/o, max_size=400, dest=path/o)
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)
dls = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method='squish')]
).dataloaders(path)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
is_duck,_,probs = learn.predict(PILImage.create('duck.jpg'))
print(f"This is a: {is_duck}.")
print(f"Probability it's a duck: {probs[0]:.4f}") |
kfkas/LawBot-v1_koalpaca_legalQA_easylaw_cro | kfkas | 2023-07-10T14:06:22Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-10T14:06:16Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
dilip-reddy/ppo-LunarLander | dilip-reddy | 2023-07-10T13:57:53Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T13:57:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.69 +/- 17.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
cointegrated/rubert-base-lesha17-punctuation | cointegrated | 2023-07-10T13:56:54Z | 125 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | The model for https://github.com/Lesha17/Punctuation; all credits go to the owner of this repository. |
abwqr/t5 | abwqr | 2023-07-10T13:49:16Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-07T07:46:46Z | Generates a question of the text
example input: Apples offer numerous health benefits due to their rich nutritional profile. They are packed with dietary fiber, promoting healthy digestion and aiding weight management. Apples are a good source of antioxidants, supporting immune function and reducing the risk of chronic diseases. Regular apple consumption is associated with improved heart health and a lower risk of diabetes.
This text is about the benefits of apple |
JoaoReis/Neuronet | JoaoReis | 2023-07-10T13:45:12Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-10T13:29:55Z | import socket,warnings
try:
socket.setdefaulttimeout(1)
socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(('1.1.1.1', 53))
except socket.error as ex: raise Exception("STOP: No internet. Click '>|' in top right and set 'Internet' switch to on")
import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')
if iskaggle:
!pip install -Uqq fastai
!pip install -Uqq duckduckgo_search
from duckduckgo_search import ddg_images
from fastcore.all import *
def search_images(term, max_images=200): return L(ddg_images(term, max_results=max_images)).itemgot('image')
urls = search_images(' star fox photos', max_images=1)
urls[0]
from fastdownload import download_url
dest = 'starfox.jpg'
download_url(urls[0], dest, show_progress=False)
from fastai.vision.all import *
im = Image.open(dest)
im.to_thumb(256,256)
download_url(search_images('eva 01', max_images=1)[0], 'forest.jpg', show_progress=False)
Image.open('forest.jpg').to_thumb(256,256)
searches = 'eva 01','star fox'
path = Path('eva 01_or_not')
from time import sleep
for o in searches:
dest = (path/o)
dest.mkdir(exist_ok=True, parents=True)
download_images(dest, urls=search_images(f'{o} photo'))
sleep(10) # Pause between searches to avoid over-loading server
download_images(dest, urls=search_images(f'{o} sun photo'))
sleep(10)
download_images(dest, urls=search_images(f'{o} shade photo'))
sleep(10)
resize_images(path/o, max_size=400, dest=path/o)
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)
dls = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method='squish')]
).dataloaders(path)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
is_bird,_,probs = learn.predict(PILImage.create('bird.jpg'))
print(f"This is a: {is_bird}.")
print(f"Probability it's a bird: {probs[0]:.4f}")
|
boostcamp-5th-nlp07/qlora-koalpaca-polyglot-5.8b-fast | boostcamp-5th-nlp07 | 2023-07-10T13:29:43Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-10T13:29:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
AladarMezga/detr-resnet-50_finetuned_cppe5 | AladarMezga | 2023-07-10T13:26:52Z | 192 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2023-07-10T12:06:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
idolior99/RLU1 | idolior99 | 2023-07-10T13:24:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T13:24:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.75 +/- 18.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Neko-Institute-of-Science/guanaco-unchained-33b-qlora | Neko-Institute-of-Science | 2023-07-10T13:24:08Z | 0 | 3 | null | [
"dataset:CheshireAI/guanaco-unchained",
"region:us"
] | null | 2023-07-10T00:10:04Z | ---
datasets:
- CheshireAI/guanaco-unchained
---
Let's see how this goes.
Training in 8 bit and at full context. Is 8bit even a qlora?
```
python qlora.py \
--model_name_or_path /UI/text-generation-webui/models/llama-30b \
--output_dir ./output/guanaco-33b \
--logging_steps 1 \
--save_strategy steps \
--data_seed 42 \
--save_steps 69 \
--save_total_limit 999 \
--per_device_eval_batch_size 1 \
--dataloader_num_workers 3 \
--group_by_length \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--do_eval false \
--do_mmlu_eval false \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--bf16 \
--bits 8 \
--warmup_ratio 0.03 \
--lr_scheduler_type constant \
--gradient_checkpointing \
--gradient_accumulation_steps 32 \
--dataset oasst1 \
--source_max_len 2048 \
--target_max_len 2048 \
--per_device_train_batch_size 1 \
--num_train_epochs 3 \
--learning_rate 0.0001 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.05 \
--weight_decay 0.0 \
--seed 0
``` |
datenmassiv/falcon-7b-instruct | datenmassiv | 2023-07-10T13:00:34Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"coreml",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T13:00:33Z | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
widget:
- text: Hey Falcon! Any recommendations for my holidays in Abu Dhabi?
example_title: Abu Dhabi Trip
- text: What's the Everett interpretation of quantum mechanics?
example_title: 'Q/A: Quantum & Answers'
- text: >-
Give me a list of the top 10 dive sites you would recommend around the
world.
example_title: Diving Top 10
- text: Can you tell me more about deep-water soloing?
example_title: Extreme sports
- text: >-
Can you write a short tweet about the Apache 2.0 release of our latest AI
model, Falcon LLM?
example_title: Twitter Helper
- text: What are the responsabilities of a Chief Llama Officer?
example_title: Trendy Jobs
license: apache-2.0
duplicated_from: tiiuae/falcon-7b-instruct
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost fron HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost]((https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positionnal embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)
## Citation
*Paper coming soon* 😊. In the meanwhile, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected] |
ccattomio/Reinforce-CartPole-v1 | ccattomio | 2023-07-10T12:59:48Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T12:59:37Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
1aurent/q-FrozenLake-v1-4x4-noSlippery | 1aurent | 2023-07-10T12:58:37Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T12:58:33Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="1aurent/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-concat-simple-wiki-mod | NasimB | 2023-07-10T12:53:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T11:00:44Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-simple-wiki-mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-simple-wiki-mod
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6721 | 0.29 | 500 | 5.6311 |
| 5.3162 | 0.59 | 1000 | 5.2012 |
| 4.9814 | 0.88 | 1500 | 4.9493 |
| 4.708 | 1.17 | 2000 | 4.8102 |
| 4.5523 | 1.47 | 2500 | 4.6918 |
| 4.4524 | 1.76 | 3000 | 4.5941 |
| 4.3303 | 2.06 | 3500 | 4.5209 |
| 4.1432 | 2.35 | 4000 | 4.4726 |
| 4.1182 | 2.64 | 4500 | 4.4154 |
| 4.0753 | 2.94 | 5000 | 4.3598 |
| 3.8614 | 3.23 | 5500 | 4.3514 |
| 3.8147 | 3.52 | 6000 | 4.3176 |
| 3.7996 | 3.82 | 6500 | 4.2839 |
| 3.6896 | 4.11 | 7000 | 4.2834 |
| 3.5307 | 4.4 | 7500 | 4.2783 |
| 3.5227 | 4.7 | 8000 | 4.2595 |
| 3.5108 | 4.99 | 8500 | 4.2484 |
| 3.3413 | 5.28 | 9000 | 4.2624 |
| 3.3338 | 5.58 | 9500 | 4.2605 |
| 3.3305 | 5.87 | 10000 | 4.2597 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mgmeskill/CartPole-v33 | mgmeskill | 2023-07-10T12:39:27Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T12:39:20Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v33
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 30.60 +/- 24.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Nianhua123/ppo-LunarLander-v2 | Nianhua123 | 2023-07-10T12:24:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T12:24:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.69 +/- 15.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
PraveenJesu/openai-whisper-medium-peft-lora-v2.2.4 | PraveenJesu | 2023-07-10T12:24:27Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-10T12:24:25Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
2481asd/ssrfun | 2481asd | 2023-07-10T12:21:45Z | 0 | 8 | null | [
"region:us"
] | null | 2023-06-01T13:33:50Z | ---
title: SsrfunModels
emoji: 🔥
colorFrom: green
colorTo: gray
sdk: static
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Mavila/First_DRL | Mavila | 2023-07-10T12:06:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-10T12:06:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.12 +/- 13.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NasimB/gpt2-concat-guten-mod-2k-rarity-all-4k-p12k | NasimB | 2023-07-10T12:05:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T10:12:31Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-mod-2k-rarity-all-4k-p12k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-mod-2k-rarity-all-4k-p12k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6867 | 0.29 | 500 | 5.6408 |
| 5.3387 | 0.59 | 1000 | 5.2070 |
| 4.9873 | 0.88 | 1500 | 4.9547 |
| 4.7105 | 1.18 | 2000 | 4.7987 |
| 4.5512 | 1.47 | 2500 | 4.6771 |
| 4.4432 | 1.76 | 3000 | 4.5711 |
| 4.3124 | 2.06 | 3500 | 4.4934 |
| 4.1304 | 2.35 | 4000 | 4.4492 |
| 4.1003 | 2.65 | 4500 | 4.3875 |
| 4.0551 | 2.94 | 5000 | 4.3342 |
| 3.8505 | 3.24 | 5500 | 4.3336 |
| 3.8007 | 3.53 | 6000 | 4.3010 |
| 3.7833 | 3.82 | 6500 | 4.2676 |
| 3.6557 | 4.12 | 7000 | 4.2684 |
| 3.521 | 4.41 | 7500 | 4.2603 |
| 3.512 | 4.71 | 8000 | 4.2465 |
| 3.4976 | 5.0 | 8500 | 4.2324 |
| 3.3273 | 5.29 | 9000 | 4.2482 |
| 3.317 | 5.59 | 9500 | 4.2467 |
| 3.3218 | 5.88 | 10000 | 4.2459 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
161381373-qq/ee | 161381373-qq | 2023-07-10T12:04:47Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-10T12:04:14Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Salama1429/whisper-tiny-english | Salama1429 | 2023-07-10T11:57:33Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-10T10:05:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-english
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.27150380339379755
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-english
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6834
- Wer Ortho: 0.280025
- Wer: 0.271504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0007 | 17.86 | 500 | 0.6834 | 0.280025 | 0.271504 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |
Benita12/videomae-base-finetuned-ucf101-subset | Benita12 | 2023-07-10T11:39:42Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-06-13T14:44:31Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3070
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 450
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1183 | 0.33 | 150 | 1.2051 | 0.4714 |
| 1.1544 | 1.33 | 300 | 0.4946 | 0.8 |
| 0.1042 | 2.33 | 450 | 0.2526 | 0.9429 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyavika/LR1E4-BS16-Distil-CNN512LSTM256NoBi | tyavika | 2023-07-10T11:29:13Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-09T20:24:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [tyavika/LR1E4-BS16-Distil-CNN512LSTM256NoBi](https://huggingface.co/tyavika/LR1E4-BS16-Distil-CNN512LSTM256NoBi) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Junr-syl/tweet_sentiments_analysis | Junr-syl | 2023-07-10T11:21:39Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-07T09:19:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tweet_sentiments_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiments_analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3953
- eval_accuracy: 0.8660
- eval_runtime: 254.1512
- eval_samples_per_second: 31.473
- eval_steps_per_second: 3.935
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AquaV/Hoers13B-ggml-q4_0 | AquaV | 2023-07-10T11:17:51Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-10T10:06:01Z | I believe this was trained on the dataset available at https://huggingface.co/datasets/Amo/FimFic_Omega_V3. The dataset comprises user-generated stories inspired by the "My Little Pony: Friendship is Magic" series.
I'm not the original creator of the model however based on the training data the format for the prompts might be:
```
<|startoftext|>
[tags: author: (author name), character: (character name), character: (character name), genre: (genre), series: (series name) warning: (content warnings)]
***
CHAPTER: (chapter name)
{Cursor here}
```
I am uncertain if "<|startoftext|>" should be included.
Here are two examples from the training data:
```
<|startoftext|>
[tags: author: device heretic, character: Other, character: Princess Celestia, character: Twilight Sparkle, genre: Sad, genre: Slice of Life, genre: Tragedy, series: My Little Pony: Friendship is Magic ]
***
CHAPTER: The Underlying Truth
{Cursor here}
```
```
<|startoftext|>
[tags: author: Bloodline Spike, character: Cutie Mark Crusaders, character: Main 6, character: Princess Celestia, character: Princess Luna, character: Spike, genre: Adventure, genre: Dark, genre: Romance, genre: Sad, genre: Tragedy, series: My Little Pony: Friendship is Magic, warning: Gore ]
***
CHAPTER: Chapter 1 Entering the Medallion
{Cursor here}
```
As I am just an archiver of this model, I may not be able to provide further support or solve issues you encounter while using it. |
JFoz/test_nvs | JFoz | 2023-07-10T11:06:54Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-07-06T15:23:06Z | ---
license: cc-by-nc-4.0
---
## GenVS (partial) reimplementation
Model weights for a partial, somewhat unfaithful reimplementation of GeNVS https://nvlabs.github.io/genvs/media/genvs.pdf
Code repo at https://github.com/jfozard/nvs_test
### Dataset source
Model trained on ShapeNet car renderings from https://github.com/vsitzmann/scene-representation-networks
These are not for commercial use (ShapeNet license conditions).
### Example results
Conditioning image

Reconstructed views

|
Subsets and Splits