modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-15 18:28:48) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (522 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-15 18:28:34) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
rohitdavas/Taxi-V3-with-Q-Learning | rohitdavas | 2023-09-07T09:50:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T09:50:39Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3-with-Q-Learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper defined in the Deep RL Course notebook; it downloads and unpickles the model dict.
model = load_from_hub(repo_id="rohitdavas/Taxi-V3-with-Q-Learning", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
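To sanity-check the agent, a greedy evaluation loop can be run on top of the snippet above. This is a minimal sketch, assuming the pickled dict follows the Deep RL Course layout with a `qtable` key and that the environment uses the Gymnasium-style `reset`/`step` API:
```python
import numpy as np

episodes, total_reward = 100, 0.0
for _ in range(episodes):
    state, info = env.reset()
    done = False
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
        state, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
print(f"Mean reward over {episodes} episodes: {total_reward / episodes:.2f}")
```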
|
GregaVrbancic/OTS_2023 | GregaVrbancic | 2023-09-07T09:43:46Z | 0 | 0 | null | [
"onnx",
"region:us"
]
| null | 2023-09-06T15:02:32Z | # OTS 2023
## When machine learning predictive models meet the real environment and end users
### Predictive models
- [minilm-uncased-squad2](https://huggingface.co/deepset/minilm-uncased-squad2)
- [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
- [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad)
|
rohitdavas/q-FrozenLake-v1-4x4-noSlippery | rohitdavas | 2023-09-07T09:43:07Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T09:43:02Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper defined in the Deep RL Course notebook; it downloads and unpickles the model dict.
model = load_from_hub(repo_id="rohitdavas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bigmorning/whisper_4_with_init_sun_char_0095 | bigmorning | 2023-09-07T09:42:51Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T09:42:43Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0095
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0095
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1133
- Train Accuracy: 0.0666
- Train Wermet: 0.7860
- Validation Loss: 2.3550
- Validation Accuracy: 0.0315
- Validation Wermet: 1.3283
- Epoch: 94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
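For reference, the optimizer configuration listed above corresponds roughly to the following Keras setup. This is a sketch only, using `AdamWeightDecay` from transformers' TensorFlow utilities; the weight-decay exclusion patterns are an assumption, not taken from the original run:
```python
from transformers import AdamWeightDecay

# Recreate the optimizer settings listed in the hyperparameters above.
optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    # Assumption: weight decay is skipped for LayerNorm and bias parameters.
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"],
)
# model.compile(optimizer=optimizer) would then train in float32, matching training_precision above.
```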
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
| 1.3198 | 0.0592 | 0.4076 | 2.0759 | 0.0314 | 0.4974 | 70 |
| 1.2896 | 0.0603 | 0.4556 | 1.9717 | 0.0316 | 0.7519 | 71 |
| 1.2842 | 0.0604 | 0.5363 | 2.0598 | 0.0315 | 0.5596 | 72 |
| 1.2841 | 0.0604 | 0.5000 | 1.9914 | 0.0314 | 0.5531 | 73 |
| 1.2803 | 0.0606 | 0.5457 | 2.0848 | 0.0316 | 0.9665 | 74 |
| 1.2412 | 0.0620 | 0.5956 | 2.2020 | 0.0307 | 0.9376 | 75 |
| 1.2320 | 0.0624 | 0.5726 | 2.2278 | 0.0308 | 1.5467 | 76 |
| 1.2235 | 0.0626 | 0.7086 | 2.1929 | 0.0314 | 0.5619 | 77 |
| 1.2520 | 0.0614 | 0.7158 | 2.1414 | 0.0315 | 0.8414 | 78 |
| 1.2306 | 0.0621 | 0.7386 | 2.2487 | 0.0313 | 0.8498 | 79 |
| 1.2182 | 0.0627 | 0.6691 | 2.0785 | 0.0317 | 1.2870 | 80 |
| 1.2080 | 0.0630 | 0.7715 | 2.2775 | 0.0310 | 1.6700 | 81 |
| 1.2217 | 0.0624 | 0.7984 | 2.1358 | 0.0314 | 2.0753 | 82 |
| 1.2117 | 0.0628 | 0.8299 | 2.2871 | 0.0305 | 1.4698 | 83 |
| 1.1786 | 0.0642 | 0.6979 | 2.2602 | 0.0315 | 1.6544 | 84 |
| 1.1776 | 0.0643 | 0.7391 | 2.2246 | 0.0314 | 1.0500 | 85 |
| 1.1613 | 0.0651 | 0.7607 | 2.2078 | 0.0316 | 0.9168 | 86 |
| 1.1323 | 0.0660 | 0.7046 | 2.3419 | 0.0315 | 0.8306 | 87 |
| 1.1172 | 0.0667 | 0.7140 | 2.3248 | 0.0310 | 1.3227 | 88 |
| 1.1247 | 0.0664 | 0.7725 | 2.1606 | 0.0315 | 0.8301 | 89 |
| 1.1395 | 0.0656 | 0.7530 | 2.3058 | 0.0313 | 2.6814 | 90 |
| 1.1289 | 0.0660 | 0.7383 | 2.4022 | 0.0304 | 1.8903 | 91 |
| 1.1743 | 0.0644 | 0.9273 | 2.1835 | 0.0312 | 0.8217 | 92 |
| 1.1036 | 0.0670 | 0.8103 | 2.3628 | 0.0311 | 1.3153 | 93 |
| 1.1133 | 0.0666 | 0.7860 | 2.3550 | 0.0315 | 1.3283 | 94 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
CyberHarem/koga_koharu_theidolmastercinderellagirlsu149 | CyberHarem | 2023-09-07T09:41:11Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/koga_koharu_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-07T09:12:18Z | ---
license: mit
datasets:
- CyberHarem/koga_koharu_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of koga_koharu_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3680, you need to download `3680/koga_koharu_theidolmastercinderellagirlsu149.pt` as the embedding and `3680/koga_koharu_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3680**, with a score of 0.974. The trigger words are:
1. `koga_koharu_theidolmastercinderellagirlsu149`
2. `short_hair, brown_eyes, bow, hairband, brown_hair, smile, pink_bow, bangs, blonde_hair, open_mouth, upper_body, hair_bow`
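As a rough illustration only: the two step-3680 files could be wired into a diffusers pipeline roughly as below. Note that the HCP-Diffusion LoRA/embedding format may need conversion before it loads cleanly in diffusers; the local file paths, the preview base model, and the prompt are assumptions, not part of the original training pipeline.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the step-3680 files were downloaded into the current directory.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")
# The pt file is registered as a textual-inversion embedding under the first trigger word.
pipe.load_textual_inversion(
    "koga_koharu_theidolmastercinderellagirlsu149.pt",
    token="koga_koharu_theidolmastercinderellagirlsu149",
)
# The safetensors file is loaded as LoRA weights.
pipe.load_lora_weights(".", weight_name="koga_koharu_theidolmastercinderellagirlsu149.safetensors")

image = pipe("koga_koharu_theidolmastercinderellagirlsu149, short_hair, brown_eyes, smile").images[0]
image.save("preview.png")
```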
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6900 | 0.910 | [Download](6900/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6900/previews/nude.png) | [<NSFW, click to see>](6900/previews/nude2.png) |  |  |
| 6440 | 0.946 | [Download](6440/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6440/previews/nude.png) | [<NSFW, click to see>](6440/previews/nude2.png) |  |  |
| 5980 | 0.946 | [Download](5980/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5980/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5980/previews/nude.png) | [<NSFW, click to see>](5980/previews/nude2.png) |  |  |
| 5520 | 0.935 | [Download](5520/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5520/previews/nude.png) | [<NSFW, click to see>](5520/previews/nude2.png) |  |  |
| 5060 | 0.898 | [Download](5060/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5060/previews/nude.png) | [<NSFW, click to see>](5060/previews/nude2.png) |  |  |
| 4600 | 0.913 | [Download](4600/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4600/previews/nude.png) | [<NSFW, click to see>](4600/previews/nude2.png) |  |  |
| 4140 | 0.943 | [Download](4140/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4140/previews/nude.png) | [<NSFW, click to see>](4140/previews/nude2.png) |  |  |
| **3680** | **0.974** | [**Download**](3680/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3680/previews/nude.png) | [<NSFW, click to see>](3680/previews/nude2.png) |  |  |
| 3220 | 0.906 | [Download](3220/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3220/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3220/previews/nude.png) | [<NSFW, click to see>](3220/previews/nude2.png) |  |  |
| 2760 | 0.902 | [Download](2760/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2760/previews/nude.png) | [<NSFW, click to see>](2760/previews/nude2.png) |  |  |
| 2300 | 0.952 | [Download](2300/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2300/previews/nude.png) | [<NSFW, click to see>](2300/previews/nude2.png) |  |  |
| 1840 | 0.912 | [Download](1840/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1840/previews/nude.png) | [<NSFW, click to see>](1840/previews/nude2.png) |  |  |
| 1380 | 0.872 | [Download](1380/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1380/previews/nude.png) | [<NSFW, click to see>](1380/previews/nude2.png) |  |  |
| 920 | 0.852 | [Download](920/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](920/previews/nude.png) | [<NSFW, click to see>](920/previews/nude2.png) |  |  |
| 460 | 0.841 | [Download](460/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](460/previews/bondage.png) |  |  |  | [<NSFW, click to see>](460/previews/nude.png) | [<NSFW, click to see>](460/previews/nude2.png) |  |  |
|
syoius/hfdrl_unit3 | syoius | 2023-09-07T09:37:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T09:36:30Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 565.00 +/- 245.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga syoius -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga syoius -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga syoius
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Chenhsing/sdxl-part-model | Chenhsing | 2023-09-07T09:32:35Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2023-09-07T07:21:24Z |
---
license: creativeml-openrail-m
base_model: /mnt/blob/stable-diffusion-xl-base-1.0
dataset: lambdalabs/pokemon-blip-captions
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - Chenhsing/sdxl-part-model
This pipeline was fine-tuned from **/mnt/blob/stable-diffusion-xl-base-1.0** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the fine-tuned pipeline using the prompt "A jeep car is moving on the snow.":




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
detect42/pokemon-lora | detect42 | 2023-09-07T09:32:14Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-07T05:05:21Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - detect42/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_tfidf-2 | ThuyNT03 | 2023-09-07T09:26:58Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T07:40:56Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_tfidf-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_tfidf-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8971
- Accuracy: 0.71
- F1: 0.7064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
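The hyperparameters above map onto transformers' `TrainingArguments` roughly as in this sketch; `output_dir` and the evaluation strategy are placeholders/assumptions, not taken from the original training script:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="PhoBERT-Final_Mixed-aug_replace_tfidf-2",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=40,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table below
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above, corresponds to the Trainer's default optimizer settings.
```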
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0056 | 1.0 | 88 | 0.8277 | 0.67 | 0.6501 |
| 0.7703 | 2.0 | 176 | 0.7912 | 0.57 | 0.5253 |
| 0.642 | 3.0 | 264 | 0.7158 | 0.71 | 0.7036 |
| 0.5139 | 4.0 | 352 | 0.6648 | 0.73 | 0.7272 |
| 0.3862 | 5.0 | 440 | 0.7784 | 0.72 | 0.7150 |
| 0.3029 | 6.0 | 528 | 0.8894 | 0.7 | 0.6924 |
| 0.2315 | 7.0 | 616 | 0.8696 | 0.71 | 0.7050 |
| 0.1903 | 8.0 | 704 | 0.8971 | 0.71 | 0.7064 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
JetBrains-Research/cmg-race-with-history | JetBrains-Research | 2023-09-07T09:26:49Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"commit_message_generation",
"code",
"text2text-generation",
"en",
"dataset:JetBrains-Research/commit-chronicle",
"arxiv:2308.07655",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-01T09:29:40Z | ---
language:
- code
- en
license: apache-2.0
tags:
- commit_message_generation
- code
datasets:
- JetBrains-Research/commit-chronicle
pipeline_tag: text2text-generation
---
# CMG/CMC: RACE (with history)
This is the checkpoint for the [RACE](https://aclanthology.org/2022.emnlp-main.372.pdf) model, fine-tuned for the commit message generation (and/or completion) task as part of the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.
## Details
> 🔍 For further details, please refer to:
> * **Paper**: [https://arxiv.org/abs/2308.07655](https://arxiv.org/abs/2308.07655)
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)
* This model is based on the fine-tuned CodeT5 checkpoint [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history) and uses RACE architecture introduced in 📜 [RACE: Retrieval-Augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372.pdf).
* Note: Requires a custom model class. Check [our implementation](https://github.com/JetBrains-Research/commit_message_generation/blob/appendix_cmg/src/model/configurations/utils/race.py) or [the replication package](https://github.com/DeepSoftwareAnalytics/RACE) provided by RACE authors.
* This model was trained with commit diffs as well as WITH commit message history.
* This model was trained on the CommitChronicle dataset introduced in our study.
* Our hyperparameter setting is mostly based on 📜 [RACE: Retrieval-augmented Commit Message Generation](https://aclanthology.org/2022.emnlp-main.372/).
The exact values are provided below:
| Hyperparameter | Value |
|:--------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------:|
| Encoder context max length | 512 |
| Decoder context max length | 512 |
| Number of training epochs | 1 |
| Batch size | 32 |
| Optimizer | [AdamW](https://pytorch.org/docs/1.12/generated/torch.optim.AdamW.html?highlight=adamw#torch.optim.AdamW) |
| Warmup | [Linear](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/optimizer_schedules#transformers.get_linear_schedule_with_warmup) |
| Number of warmup steps | 100 |
| Peak learning rate | 0.00002 |
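For illustration, the optimizer and warmup rows above can be wired together with standard PyTorch/transformers utilities as in this sketch; the dummy module and total step count are placeholders, not values from our training code:
```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder module standing in for the RACE model
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # peak learning rate 0.00002

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,        # linear warmup over 100 steps, as in the table
    num_training_steps=10_000,   # placeholder: total optimizer steps for 1 epoch at batch size 32
)
```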
## Available checkpoints
We also released checkpoints for other models fine-tuned as part of our study.
* Models trained *with commit message history*:
* **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-with-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-with-history)
* **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-with-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-with-history)
* **RACE:** 🤗 [`JetBrains-Research/cmg-race-with-history`](https://huggingface.co/JetBrains-Research/cmg-race-with-history) (this model)
* Models trained *without commit message history*:
* **CodeT5:** 🤗 [`JetBrains-Research/cmg-codet5-without-history`](https://huggingface.co/JetBrains-Research/cmg-codet5-without-history)
* **CodeReviewer:** 🤗 [`JetBrains-Research/cmg-codereviewer-without-history`](https://huggingface.co/JetBrains-Research/cmg-codereviewer-without-history)
* **RACE:** 🤗 [`JetBrains-Research/cmg-race-without-history`](https://huggingface.co/JetBrains-Research/cmg-race-without-history)
## Citation
```
TODO
``` |
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_w2v-2 | ThuyNT03 | 2023-09-07T09:21:13Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T07:32:16Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_w2v-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0071
- Accuracy: 0.73
- F1: 0.7272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.962 | 1.0 | 86 | 0.7741 | 0.72 | 0.7110 |
| 0.6927 | 2.0 | 172 | 0.7040 | 0.67 | 0.6458 |
| 0.5162 | 3.0 | 258 | 0.7437 | 0.72 | 0.7157 |
| 0.3641 | 4.0 | 344 | 0.7528 | 0.74 | 0.7353 |
| 0.244 | 5.0 | 430 | 0.8498 | 0.73 | 0.7262 |
| 0.1787 | 6.0 | 516 | 0.8976 | 0.73 | 0.7290 |
| 0.1143 | 7.0 | 602 | 0.9672 | 0.74 | 0.7378 |
| 0.0887 | 8.0 | 688 | 1.0071 | 0.73 | 0.7272 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SafetyMary/SpaceInvadersNoFrameskip-v4 | SafetyMary | 2023-09-07T09:15:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T09:15:16Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 510.50 +/- 145.58
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SafetyMary -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SafetyMary -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SafetyMary
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_BERT-2 | ThuyNT03 | 2023-09-07T09:06:46Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T07:19:06Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_BERT-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_BERT-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0732
- Accuracy: 0.7
- F1: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.944 | 1.0 | 88 | 0.8199 | 0.66 | 0.6427 |
| 0.6694 | 2.0 | 176 | 0.7223 | 0.7 | 0.7007 |
| 0.4933 | 3.0 | 264 | 0.7039 | 0.73 | 0.7321 |
| 0.3532 | 4.0 | 352 | 0.7914 | 0.73 | 0.7297 |
| 0.2619 | 5.0 | 440 | 0.8506 | 0.72 | 0.7176 |
| 0.1807 | 6.0 | 528 | 0.9830 | 0.71 | 0.7090 |
| 0.1365 | 7.0 | 616 | 1.0183 | 0.7 | 0.7016 |
| 0.1035 | 8.0 | 704 | 1.0732 | 0.7 | 0.7004 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
prognosis/cardio-llama-2-7b-miniguanaco-lora-v16 | prognosis | 2023-09-07T09:04:05Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T08:52:31Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
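For reference, the same settings expressed as a `transformers.BitsAndBytesConfig` look roughly like the sketch below; this is an equivalent configuration, not the original training script, and the base model is left unspecified:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Passing quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(<base model>, ...)
# loads the base model in 4-bit before the PEFT adapter is attached.
```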
### Framework versions
- PEFT 0.4.0
|
francoj/chinese-alpaca-2-13b-16k-gguf | francoj | 2023-09-07T09:02:07Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-09-07T08:02:56Z | ---
language:
- en
- zh
---
https://huggingface.co/ziqingyang/chinese-alpaca-2-13b-16k |
samar4/bloom-lora-token-classification | samar4 | 2023-09-07T09:00:42Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T09:00:39Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
vishalxeth/llama2-qlora-finetunined-french | vishalxeth | 2023-09-07T08:57:00Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T08:51:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_w2v-2 | ThuyNT03 | 2023-09-07T08:54:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T07:05:30Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_w2v-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2182
- Accuracy: 0.69
- F1: 0.6910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9152 | 1.0 | 86 | 0.7599 | 0.68 | 0.6556 |
| 0.6043 | 2.0 | 172 | 0.7114 | 0.7 | 0.7023 |
| 0.4061 | 3.0 | 258 | 0.7314 | 0.73 | 0.7344 |
| 0.2797 | 4.0 | 344 | 0.9199 | 0.71 | 0.7051 |
| 0.183 | 5.0 | 430 | 1.0362 | 0.71 | 0.7083 |
| 0.1371 | 6.0 | 516 | 1.1032 | 0.71 | 0.7065 |
| 0.0894 | 7.0 | 602 | 1.1811 | 0.71 | 0.7101 |
| 0.0779 | 8.0 | 688 | 1.2182 | 0.69 | 0.6910 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_synonym-2 | ThuyNT03 | 2023-09-07T08:48:31Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T06:57:44Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_synonym-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_synonym-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2465
- Accuracy: 0.69
- F1: 0.6880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9109 | 1.0 | 88 | 0.8214 | 0.65 | 0.6425 |
| 0.6223 | 2.0 | 176 | 0.6999 | 0.7 | 0.7021 |
| 0.424 | 3.0 | 264 | 0.7126 | 0.73 | 0.7305 |
| 0.2932 | 4.0 | 352 | 0.8673 | 0.72 | 0.7172 |
| 0.1692 | 5.0 | 440 | 1.0126 | 0.68 | 0.6806 |
| 0.1192 | 6.0 | 528 | 1.1561 | 0.69 | 0.6889 |
| 0.067 | 7.0 | 616 | 1.2002 | 0.68 | 0.6835 |
| 0.0481 | 8.0 | 704 | 1.2465 | 0.69 | 0.6880 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
eliept1/rl_course_vizdoom_health_gathering_supreme | eliept1 | 2023-09-07T08:47:04Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T08:46:44Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.73 +/- 5.22
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r eliept1/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
osieosie/mnli-4bit-7b-bnb-seed65 | osieosie | 2023-09-07T08:42:15Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T08:42:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_4_with_init_sun_char_0075 | bigmorning | 2023-09-07T08:42:06Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T08:41:57Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0075
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0075
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2803
- Train Accuracy: 0.0606
- Train Wermet: 0.5457
- Validation Loss: 2.0848
- Validation Accuracy: 0.0316
- Validation Wermet: 0.9665
- Epoch: 74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
| 1.3198 | 0.0592 | 0.4076 | 2.0759 | 0.0314 | 0.4974 | 70 |
| 1.2896 | 0.0603 | 0.4556 | 1.9717 | 0.0316 | 0.7519 | 71 |
| 1.2842 | 0.0604 | 0.5363 | 2.0598 | 0.0315 | 0.5596 | 72 |
| 1.2841 | 0.0604 | 0.5000 | 1.9914 | 0.0314 | 0.5531 | 73 |
| 1.2803 | 0.0606 | 0.5457 | 2.0848 | 0.0316 | 0.9665 | 74 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
osieosie/mnli-4bit-7b-bnb-seed87 | osieosie | 2023-09-07T08:35:36Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T08:35:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/ichihara_nina_theidolmastercinderellagirlsu149 | CyberHarem | 2023-09-07T08:32:25Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/ichihara_nina_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-07T08:18:15Z | ---
license: mit
datasets:
- CyberHarem/ichihara_nina_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ichihara_nina_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5880, you need to download `5880/ichihara_nina_theidolmastercinderellagirlsu149.pt` as the embedding and `5880/ichihara_nina_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5880**, with a score of 0.987. The trigger words are:
1. `ichihara_nina_theidolmastercinderellagirlsu149`
2. `brown_hair, long_hair, bangs, brown_eyes, blunt_bangs, smile, open_mouth, cosplay, bow, kigurumi, yellow_eyes`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6300 | 0.926 | [Download](6300/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6300/previews/nude.png) | [<NSFW, click to see>](6300/previews/nude2.png) |  |  |
| **5880** | **0.987** | [**Download**](5880/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5880/previews/nude.png) | [<NSFW, click to see>](5880/previews/nude2.png) |  |  |
| 5460 | 0.971 | [Download](5460/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5460/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5460/previews/nude.png) | [<NSFW, click to see>](5460/previews/nude2.png) |  |  |
| 5040 | 0.879 | [Download](5040/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4620 | 0.856 | [Download](4620/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) |  |  |
| 4200 | 0.905 | [Download](4200/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3780 | 0.945 | [Download](3780/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3360 | 0.931 | [Download](3360/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2940 | 0.789 | [Download](2940/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2940/previews/nude.png) | [<NSFW, click to see>](2940/previews/nude2.png) |  |  |
| 2520 | 0.863 | [Download](2520/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2100 | 0.801 | [Download](2100/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2100/previews/nude.png) | [<NSFW, click to see>](2100/previews/nude2.png) |  |  |
| 1680 | 0.848 | [Download](1680/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) |  |  |
| 1260 | 0.748 | [Download](1260/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1260/previews/nude.png) | [<NSFW, click to see>](1260/previews/nude2.png) |  |  |
| 840 | 0.692 | [Download](840/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](840/previews/nude.png) | [<NSFW, click to see>](840/previews/nude2.png) |  |  |
| 420 | 0.187 | [Download](420/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](420/previews/nude.png) | [<NSFW, click to see>](420/previews/nude2.png) |  |  |
|
bigmorning/whisper_4_with_init_sun_char_0070 | bigmorning | 2023-09-07T08:26:54Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T08:26:46Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0070
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0070
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3321
- Train Accuracy: 0.0587
- Train Wermet: 0.3546
- Validation Loss: 2.1066
- Validation Accuracy: 0.0314
- Validation Wermet: 0.6181
- Epoch: 69
## Model description
More information needed
## Intended uses & limitations
More information needed
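A minimal inference sketch (hedged: it assumes this checkpoint keeps the standard Whisper layout and that the processor of the base `openai/whisper-tiny` model is compatible with its character-level targets):

```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# The processor is taken from the base model; this repo may not ship its own.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_4_with_init_sun_char_0070")

# Any 16 kHz mono waveform works; a small public test sample is used here.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="tf")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```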
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ibrahimciko/poca-SoccerTwos | ibrahimciko | 2023-09-07T08:23:01Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-07T08:22:48Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ibrahimciko/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_BERT-1 | ThuyNT03 | 2023-09-07T08:09:50Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T06:28:27Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_BERT-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9104
- Accuracy: 0.7
- F1: 0.6961
## Model description
More information needed
## Intended uses & limitations
More information needed
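A minimal usage sketch (hedged: the label names are whatever was saved in this checkpoint's config, and PhoBERT normally expects word-segmented Vietnamese input, e.g. via VnCoreNLP):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ThuyNT03/PhoBERT-Final_Mixed-aug_replace_BERT-1",
)
# Input should be word-segmented Vietnamese; this sentence is only illustrative.
print(classifier("Sản_phẩm này rất tốt ."))
```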
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9733 | 1.0 | 88 | 0.7845 | 0.65 | 0.6367 |
| 0.7476 | 2.0 | 176 | 0.7677 | 0.67 | 0.6638 |
| 0.5987 | 3.0 | 264 | 0.7065 | 0.74 | 0.7360 |
| 0.4856 | 4.0 | 352 | 0.7206 | 0.7 | 0.6987 |
| 0.3812 | 5.0 | 440 | 0.8077 | 0.71 | 0.7080 |
| 0.3172 | 6.0 | 528 | 0.8131 | 0.73 | 0.7274 |
| 0.2332 | 7.0 | 616 | 0.8747 | 0.71 | 0.7089 |
| 0.2205 | 8.0 | 704 | 0.9104 | 0.7 | 0.6961 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kasperchen/ppo-LunarLander-v2 | kasperchen | 2023-09-07T08:07:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T08:06:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.36 +/- 19.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo files for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename follows the usual huggingface_sb3 convention and is assumed here.
checkpoint = load_from_hub("kasperchen/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_tfidf-1 | ThuyNT03 | 2023-09-07T08:04:06Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T06:22:34Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_tfidf-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9783
- Accuracy: 0.7
- F1: 0.6953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9709 | 1.0 | 88 | 0.7751 | 0.71 | 0.7004 |
| 0.7328 | 2.0 | 176 | 0.7444 | 0.72 | 0.7110 |
| 0.5756 | 3.0 | 264 | 0.7369 | 0.72 | 0.7100 |
| 0.4436 | 4.0 | 352 | 0.7851 | 0.71 | 0.7024 |
| 0.3441 | 5.0 | 440 | 0.8120 | 0.7 | 0.6967 |
| 0.2631 | 6.0 | 528 | 0.8517 | 0.71 | 0.7054 |
| 0.2097 | 7.0 | 616 | 0.9411 | 0.71 | 0.7079 |
| 0.1771 | 8.0 | 704 | 0.9783 | 0.7 | 0.6953 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Bazaar/cv_corridor_garbage_detection | Bazaar | 2023-09-07T07:59:09Z | 195 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-07T07:50:39Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cv_corridor_garbage_detection
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9733333587646484
---
# cv_corridor_garbage_detection
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
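A minimal inference sketch (hedged: it assumes the exported checkpoint works with the standard `image-classification` pipeline; the image URL is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Bazaar/cv_corridor_garbage_detection")
# Replace the placeholder URL or path with a real corridor photo.
print(classifier("https://example.com/corridor.jpg"))
```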
## Example Images
#### garbage

#### no garbage
 |
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_w2v-1 | ThuyNT03 | 2023-09-07T07:58:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T06:13:40Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_w2v-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_w2v-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0746
- Accuracy: 0.73
- F1: 0.7281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9414 | 1.0 | 86 | 0.7396 | 0.69 | 0.6554 |
| 0.6476 | 2.0 | 172 | 0.6620 | 0.75 | 0.7502 |
| 0.4651 | 3.0 | 258 | 0.6393 | 0.78 | 0.7841 |
| 0.3542 | 4.0 | 344 | 0.8022 | 0.7 | 0.6905 |
| 0.2252 | 5.0 | 430 | 0.8766 | 0.71 | 0.7105 |
| 0.1639 | 6.0 | 516 | 0.9983 | 0.72 | 0.7189 |
| 0.1194 | 7.0 | 602 | 1.0347 | 0.73 | 0.7306 |
| 0.0817 | 8.0 | 688 | 1.0746 | 0.73 | 0.7281 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
YokaiKoibito/llama2_70b_chat_uncensored-GGUF | YokaiKoibito | 2023-09-07T07:52:14Z | 17 | 4 | null | [
"gguf",
"uncensored",
"wizard",
"vicuna",
"llama",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-05T17:55:28Z | ---
license: llama2
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
tags:
- uncensored
- wizard
- vicuna
- llama
---
This is an GGUF version of [jarradh/llama2_70b_chat_uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored)
(Arguably, a better name for this model would be something like Llama-2-70B_Wizard-Vicuna-Uncensored-GGUF, but to avoid confusion I'm sticking with jarradh's naming scheme.)
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
Additional clients and libraries are expected to add GGUF support shortly.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGML)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference, plus fp16 GGUF for requantizing](https://huggingface.co/YokaiKoibito/llama2_70b_chat_uncensored-GGUF)
* [Jarrad Hope's unquantised model in fp16 pytorch format, for GPU inference and further conversions](https://huggingface.co/YokaiKoibito/llama2_70b_chat_uncensored-fp16)
* [Jarrad Hope's original unquantised fp32 model in pytorch format, for further conversions](https://huggingface.co/jarradh/llama2_70b_chat_uncensored)
<!-- repositories-available end -->
## Prompt template: Human-Response
```
### HUMAN:
{prompt}
### RESPONSE:
```
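As a usage sketch with `llama-cpp-python` (hedged: the quantisation filename below is a placeholder — substitute whichever GGUF file you downloaded, and adjust `n_gpu_layers` to your hardware):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama2_70b_chat_uncensored.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU
)

prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"
out = llm(prompt, max_tokens=128, stop=["### HUMAN:"])
print(out["choices"][0]["text"])
```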
|
sk0032/coqui-tts-model | sk0032 | 2023-09-07T07:50:39Z | 2 | 1 | transformers | [
"transformers",
"tensorboard",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-07T06:14:41Z | EPOCH: 5138
GLOBAL_STEP: 1113100
Adam
Trained on 3 hours of audio data
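A minimal synthesis sketch using the Coqui TTS Python API (hedged: the checkpoint and config filenames are placeholders — point them at the files this repo actually contains):

```python
from TTS.api import TTS

# Filenames are assumptions; use the checkpoint/config downloaded from this repo.
tts = TTS(model_path="best_model.pth", config_path="config.json")
tts.tts_to_file(text="Hello from a fine-tuned Coqui TTS model.", file_path="output.wav")
```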
|
GNReplay/bert-finetuned-ner | GNReplay | 2023-09-07T07:43:04Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-07T07:28:52Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9332341761692282
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9417160010005836
- name: Accuracy
type: accuracy
value: 0.9864602342968152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
- Precision: 0.9332
- Recall: 0.9504
- F1: 0.9417
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
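A minimal inference sketch (the `aggregation_strategy` setting is just one sensible default):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="GNReplay/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```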
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0794 | 1.0 | 1756 | 0.0834 | 0.9045 | 0.9322 | 0.9181 | 0.9787 |
| 0.0393 | 2.0 | 3512 | 0.0552 | 0.9257 | 0.9480 | 0.9367 | 0.9853 |
| 0.0259 | 3.0 | 5268 | 0.0581 | 0.9332 | 0.9504 | 0.9417 | 0.9865 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
turing-motors/heron-preliminary-git-Llama-2-70b-v0 | turing-motors | 2023-09-07T07:41:54Z | 36 | 1 | transformers | [
"transformers",
"pytorch",
"git_llama",
"text-generation",
"heron",
"vision",
"image-captioning",
"image-to-text",
"ja",
"arxiv:2205.14100",
"arxiv:2307.09288",
"license:llama2",
"autotrain_compatible",
"region:us"
]
| image-to-text | 2023-09-07T01:08:09Z | ---
language:
- ja
tags:
- heron
- vision
- image-captioning
pipeline_tag: image-to-text
license:
- llama2
inference: false
---
# Heron GIT Llama 2 70B Preliminary

## Model Details
Heron GIT Llama 2 70B Preliminary is a vision-language model that was pretrained with image-text pairs.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.
<b>*Note: This model is a preliminary trained version. Its accuracy and performance are under verification, and we do not provide any guarantees. We plan to update it with a further trained version in the future.*</b>
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [Llama-2 70B chat hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
* **Language(s)**: English
* **License**: This model is licensed under [the LLAMA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE).
### Training
This model was trained with the Adaptor using M3IT Coco Captions.
### Training Dataset
- [MMInstruction M3IT](https://huggingface.co/datasets/MMInstruction/M3IT)
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{GitElyzaFast,
url = {[https://huggingface.co/turing-motors/heron-preliminary-git-Llama-2-70b-v0](https://huggingface.co/turing-motors/heron-preliminary-git-Llama-2-70b-v0)},
title = {Heron GIT Llama 2 70B Preliminary},
author = {Yuichi Inoue, Kotaro Tanahashi, and Yu Yamaguchi}
}
```
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
bigmorning/whisper_4_with_init_sun_char_0055 | bigmorning | 2023-09-07T07:41:26Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T07:41:17Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0055
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0055
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5008
- Train Accuracy: 0.0532
- Train Wermet: 0.2386
- Validation Loss: 1.9368
- Validation Accuracy: 0.0309
- Validation Wermet: 0.2045
- Epoch: 54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_tfidf-1 | ThuyNT03 | 2023-09-07T07:38:27Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T05:54:38Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_tfidf-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1845
- Accuracy: 0.71
- F1: 0.7075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9115 | 1.0 | 88 | 0.7285 | 0.71 | 0.6983 |
| 0.5972 | 2.0 | 176 | 0.7379 | 0.73 | 0.7238 |
| 0.3991 | 3.0 | 264 | 0.7867 | 0.72 | 0.7169 |
| 0.2894 | 4.0 | 352 | 0.8736 | 0.73 | 0.7310 |
| 0.2112 | 5.0 | 440 | 0.9920 | 0.74 | 0.7403 |
| 0.1393 | 6.0 | 528 | 1.0496 | 0.75 | 0.7486 |
| 0.1191 | 7.0 | 616 | 1.1640 | 0.72 | 0.7177 |
| 0.098 | 8.0 | 704 | 1.1845 | 0.71 | 0.7075 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_w2v-1 | ThuyNT03 | 2023-09-07T07:32:12Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T05:47:13Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_w2v-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_w2v-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0909
- Accuracy: 0.76
- F1: 0.7596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9106 | 1.0 | 86 | 0.7115 | 0.73 | 0.7319 |
| 0.5874 | 2.0 | 172 | 0.6895 | 0.71 | 0.7119 |
| 0.4037 | 3.0 | 258 | 0.8004 | 0.69 | 0.6842 |
| 0.2653 | 4.0 | 344 | 0.7982 | 0.72 | 0.7264 |
| 0.1761 | 5.0 | 430 | 0.9948 | 0.76 | 0.7608 |
| 0.1044 | 6.0 | 516 | 1.0613 | 0.75 | 0.7518 |
| 0.0844 | 7.0 | 602 | 1.0984 | 0.75 | 0.7478 |
| 0.0604 | 8.0 | 688 | 1.0909 | 0.76 | 0.7596 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_synonym-1 | ThuyNT03 | 2023-09-07T07:26:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T05:39:36Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_synonym-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1893
- Accuracy: 0.7
- F1: 0.6994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8943 | 1.0 | 88 | 0.7593 | 0.68 | 0.6676 |
| 0.5876 | 2.0 | 176 | 0.7350 | 0.67 | 0.6686 |
| 0.4016 | 3.0 | 264 | 0.8227 | 0.7 | 0.7001 |
| 0.2584 | 4.0 | 352 | 0.9111 | 0.69 | 0.6856 |
| 0.1981 | 5.0 | 440 | 1.0283 | 0.73 | 0.7308 |
| 0.1335 | 6.0 | 528 | 1.1292 | 0.7 | 0.7000 |
| 0.087 | 7.0 | 616 | 1.1323 | 0.7 | 0.7013 |
| 0.0726 | 8.0 | 704 | 1.1893 | 0.7 | 0.6994 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-train-1 | ThuyNT03 | 2023-09-07T07:12:47Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T05:30:03Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-train-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-train-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8720
- Accuracy: 0.71
- F1: 0.7085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0125 | 1.0 | 44 | 0.8552 | 0.63 | 0.5142 |
| 0.7443 | 2.0 | 88 | 0.6888 | 0.7 | 0.6941 |
| 0.5851 | 3.0 | 132 | 0.6873 | 0.72 | 0.7164 |
| 0.4457 | 4.0 | 176 | 0.7423 | 0.7 | 0.7021 |
| 0.374 | 5.0 | 220 | 0.7960 | 0.71 | 0.7019 |
| 0.2885 | 6.0 | 264 | 0.8073 | 0.7 | 0.7016 |
| 0.2711 | 7.0 | 308 | 0.8329 | 0.71 | 0.7088 |
| 0.2317 | 8.0 | 352 | 0.8720 | 0.71 | 0.7085 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Graphcore/gpt2-wikitext-103 | Graphcore | 2023-09-07T07:12:06Z | 17 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"optimum_graphcore",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-23T10:06:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: clm_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Graphcore/gpt2-wikitext-103
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
GPT2 is a large transformer-based language model. It is built using transformer decoder blocks, whereas BERT uses transformer encoder blocks. Layer normalisation is moved to the input of each sub-block, similar to a pre-activation residual network, and an additional layer normalisation is added after the final self-attention block.
Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9902
## Training and evaluation data
- [HuggingFace/wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--ipu_config_name Graphcore/gpt2-small-ipu \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 \
--do_train \
--do_eval \
--num_train_epochs 10 \
--dataloader_num_workers 64 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 128 \
--output_dir /tmp/clm_output \
--logging_steps 5 \
--learning_rate 1e-5 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.1 \
--ipu_config_overrides="embedding_serialization_factor=4,optimizer_state_offchip=true,inference_device_iterations=5" \
--dataloader_drop_last \
--pod_type pod16
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 1024
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
"epoch": 10.0,
"train_loss": 3.1787637246621623,
"train_runtime": 4372.4031,
"train_samples": 114248,
"train_samples_per_second": 261.293,
"train_steps_per_second": 0.254
***** eval metrics *****
"eval_loss": 2.990234375,
"eval_samples": 240,
"perplexity": 19.89034374461794
```
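Although training ran on IPUs, the saved checkpoint is a standard GPT-2 model, so a plain `transformers` inference sketch should work (hedged: it assumes the tokenizer was saved alongside the weights; the sampling settings are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Graphcore/gpt2-wikitext-103")
model = AutoModelForCausalLM.from_pretrained("Graphcore/gpt2-wikitext-103")

inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```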
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/whisper_4_with_init_sun_char_0045 | bigmorning | 2023-09-07T07:11:10Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T07:11:00Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0045
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6539
- Train Accuracy: 0.0487
- Train Wermet: 0.1192
- Validation Loss: 1.8249
- Validation Accuracy: 0.0309
- Validation Wermet: 0.1901
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_swap-1 | ThuyNT03 | 2023-09-07T07:09:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T05:24:15Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_swap-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_swap-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2440
- Accuracy: 0.69
- F1: 0.6896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8972 | 1.0 | 87 | 0.7698 | 0.62 | 0.5744 |
| 0.5881 | 2.0 | 174 | 0.7581 | 0.64 | 0.6314 |
| 0.3953 | 3.0 | 261 | 0.8167 | 0.68 | 0.6791 |
| 0.2472 | 4.0 | 348 | 0.8476 | 0.74 | 0.7435 |
| 0.1639 | 5.0 | 435 | 1.0144 | 0.71 | 0.7139 |
| 0.0969 | 6.0 | 522 | 1.1456 | 0.7 | 0.7004 |
| 0.079 | 7.0 | 609 | 1.1831 | 0.7 | 0.7009 |
| 0.0576 | 8.0 | 696 | 1.2440 | 0.69 | 0.6896 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ryanyip7777/pmc_vit-l-14_hf | ryanyip7777 | 2023-09-07T07:05:56Z | 106 | 1 | transformers | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"base_model:openai/clip-vit-large-patch14",
"base_model:finetune:openai/clip-vit-large-patch14",
"endpoints_compatible",
"region:us"
]
| zero-shot-image-classification | 2023-09-07T05:58:49Z | ---
base_model: openai/clip-vit-large-patch14
tags:
- generated_from_trainer
model-index:
- name: clip-vit-l-14-pmc-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-l-14-pmc-finetuned
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the **pmc_oa** dataset (https://huggingface.co/datasets/axiong/pmc_oa).
It achieves the following results on the evaluation set:
- Loss: 1.0125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
### Fine-tune this model using the *run_clip.py* script (https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text)
```shell
python -W ignore run_clip.py --model_name_or_path openai/clip-vit-large-patch14 \
--output_dir ./clip-vit-l-14-pmc-finetuned \
--train_file data/pmc_roco_train.csv \
--validation_file data/pmc_roco_valid.csv \
--image_column image --caption_column caption \
--max_seq_length 77 \
--do_train --do_eval \
--per_device_train_batch_size 16 --per_device_eval_batch_size 8 \
--remove_unused_columns=False \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir \
--num_train_epochs 10 \
--logging_dir ./pmc_vit_logs \
--save_total_limit 2 \
--report_to tensorboard
```
### usage
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("ryanyip7777/pmc_vit-l-14_hf")
processor = CLIPProcessor.from_pretrained("ryanyip7777/pmc_vit-l-14_hf")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
``` |
sminchoi/llama-2-7b-chat-hf_guanaco-llama2_230907 | sminchoi | 2023-09-07T07:01:27Z | 0 | 0 | peft | [
"peft",
"pytorch",
"llama",
"region:us"
]
| null | 2023-09-07T06:02:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
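As an illustration only, a minimal sketch of how the quantization config above maps onto `BitsAndBytesConfig` when loading the base model for QLoRA-style fine-tuning (the base model id is an assumption inferred from the repo name, not stated in this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed base model (gated, requires access approval)
    quantization_config=bnb_config,
    device_map="auto",
)
```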
### Framework versions
- PEFT 0.4.0
|
bigmorning/whisper_4_with_init_sun_char_0040 | bigmorning | 2023-09-07T06:56:02Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T06:55:53Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7846
- Train Accuracy: 0.0454
- Train Wermet: 0.0855
- Validation Loss: 1.8107
- Validation Accuracy: 0.0305
- Validation Wermet: 0.1385
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
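For illustration, a minimal sketch of building an optimizer with these settings using the TensorFlow utilities in `transformers` (this is not the original training script):
```python
from transformers import AdamWeightDecay

# Mirrors the optimizer settings listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    weight_decay_rate=0.01,
)
```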
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
satyam9097/J-GPT | satyam9097 | 2023-09-07T06:51:15Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-09-07T06:42:35Z | ```python
import openai
import gradio as gr

# Note: the API key is hardcoded in the original card; in practice it should be
# read from an environment variable rather than committed to a model card.
openai.api_key = "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"

# System prompt: the assistant acts as a professor and career counsellor
# for engineering diploma students.
messages = [
    {"role": "system", "content": "You are a professor and career counsellor who mainly helps engineering diploma students with their studies, such as clearing doubts related to their branch or their future."}
]

def CustomChatGPT(user_input, engineering_branch, year_of_study):
    # engineering_branch and year_of_study are collected by the UI but are not
    # currently added to the prompt.
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages
    )
    ChatGPT_reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": ChatGPT_reply})
    return ChatGPT_reply

def validate_year_of_study(year_of_study):
    if year_of_study < 1 or year_of_study > 4:
        raise ValueError("Year of study should be between 1 and 4.")

inputs = [
    gr.inputs.Textbox(label="User Input", placeholder="Enter your question or doubt here", lines=2),
    gr.inputs.Dropdown(["Mechanical Engineering", "Electrical Engineering", "Civil Engineering"], label="Engineering Branch"),
    gr.inputs.Number(label="Year of Study")
]
outputs = gr.outputs.Textbox(label="Assistant Reply")

title = "Digital Professor"
description = "Ask questions and get assistance from the Digital Professor, a professor and career counsellor for engineering diploma students."
# Example questions paired with illustrative answers.
examples = [
    ["What are the career prospects for mechanical engineering?", "The career prospects for mechanical engineering are diverse. You can work in industries such as automotive, aerospace, energy, and more."],
    ["Can you help me with my doubt related to electrical circuits?", "Of course! Please provide more details about your doubt related to electrical circuits."],
    ["What are the key skills required for a successful career in civil engineering?", "Some key skills required for a successful career in civil engineering include problem-solving, analytical thinking, and good communication skills."],
]

gr.Interface(fn=CustomChatGPT, inputs=inputs, outputs=outputs, title=title, description=description, examples=examples, theme="compact").launch()
```
|
syoius/q-FrozenLake-v1-4x4-noSlippery | syoius | 2023-09-07T06:48:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T06:48:17Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="syoius/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hw2942/chinese-lert-base-SSEC | hw2942 | 2023-09-07T06:48:11Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:hfl/chinese-lert-base",
"base_model:finetune:hfl/chinese-lert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T06:41:27Z | ---
license: apache-2.0
base_model: hfl/chinese-lert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chinese-lert-base-wallstreetcn-morning-news-market-overview-SSEC-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-lert-base-wallstreetcn-morning-news-market-overview-SSEC-v6
This model is a fine-tuned version of [hfl/chinese-lert-base](https://huggingface.co/hfl/chinese-lert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8201
- Accuracy: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 34 | 0.6572 | 0.6875 |
| No log | 2.0 | 68 | 0.6435 | 0.6875 |
| No log | 3.0 | 102 | 0.6914 | 0.6875 |
| No log | 4.0 | 136 | 0.6636 | 0.6875 |
| No log | 5.0 | 170 | 1.1175 | 0.625 |
| No log | 6.0 | 204 | 1.6301 | 0.625 |
| No log | 7.0 | 238 | 1.8331 | 0.6562 |
| No log | 8.0 | 272 | 1.5317 | 0.7188 |
| No log | 9.0 | 306 | 1.7106 | 0.6875 |
| No log | 10.0 | 340 | 1.8201 | 0.6875 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
aegon-h/mpt-7b | aegon-h | 2023-09-07T06:44:26Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-07T05:16:11Z | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
model_creator: mosaicml
model_link: https://huggingface.co/mosaicml/mpt-7b
model_name: mpt-7b
edited_by: agonh
inference: false
---
# MPT-7B
Model creator: [MosaicML](https://www.mosaicml.com).
Original model: [mpt-7b](https://huggingface.co/mosaicml/mpt-7b).
## Description
This repo contains model files for [mosaicml's mpt-7b](https://huggingface.co/mosaicml/mpt-7b).
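A minimal loading sketch is shown below. MPT ships custom modeling code, so `trust_remote_code=True` is required; the tokenizer choice follows the original MosaicML card, and the rest is illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "aegon-h/mpt-7b"  # this mirror; the original weights are at mosaicml/mpt-7b

# MPT-7B uses the EleutherAI/gpt-neox-20b tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MPT defines its own model classes
)
```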
|
SateeshAmbesange/my_awesome_model | SateeshAmbesange | 2023-09-07T06:43:44Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T03:59:42Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: SateeshAmbesange/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SateeshAmbesange/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0670
- Validation Loss: 0.2178
- Train Accuracy: 0.9323
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
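That said, a minimal TensorFlow inference sketch (the repo id is taken from the card title; the label mapping is not documented here):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "SateeshAmbesange/my_awesome_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This movie was surprisingly good!", return_tensors="tf")
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred_id, pred_id))
```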
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2514 | 0.1844 | 0.9288 | 0 |
| 0.1344 | 0.2147 | 0.9206 | 1 |
| 0.0670 | 0.2178 | 0.9323 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0035 | bigmorning | 2023-09-07T06:40:52Z | 60 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T06:40:43Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8933
- Train Accuracy: 0.0432
- Train Wermet: 0.0682
- Validation Loss: 1.8761
- Validation Accuracy: 0.0295
- Validation Wermet: 0.1173
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_BERT-1 | ThuyNT03 | 2023-09-07T06:35:24Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T21:30:13Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_BERT-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7460
- Accuracy: 0.75
- F1: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0554 | 1.0 | 88 | 0.9377 | 0.5 | 0.4177 |
| 0.8929 | 2.0 | 176 | 0.8133 | 0.64 | 0.5654 |
| 0.7778 | 3.0 | 264 | 0.6756 | 0.73 | 0.7154 |
| 0.6686 | 4.0 | 352 | 0.6923 | 0.75 | 0.7378 |
| 0.5672 | 5.0 | 440 | 0.6880 | 0.77 | 0.7706 |
| 0.5009 | 6.0 | 528 | 0.7243 | 0.77 | 0.7668 |
| 0.3978 | 7.0 | 616 | 0.7148 | 0.76 | 0.7584 |
| 0.3843 | 8.0 | 704 | 0.7460 | 0.75 | 0.7473 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Xisumavoid_RVC | 0x3e9 | 2023-09-07T06:34:50Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:56Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Xisumavoid

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Xisumavoid | 200 | RVC V2 | [Download](https://huggingface.co/0x3e9/Xisumavoid_RVC/resolve/main/xisumavoid.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1130321704141471935) |
|
maxolotl/falcon-wait3-en-es-v2-trainer | maxolotl | 2023-09-07T06:29:33Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T04:40:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_tfidf-1 | ThuyNT03 | 2023-09-07T06:28:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T21:22:29Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_tfidf-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7044
- Accuracy: 0.76
- F1: 0.7519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.083 | 1.0 | 88 | 0.9789 | 0.6 | 0.4933 |
| 0.9576 | 2.0 | 176 | 0.7989 | 0.66 | 0.6019 |
| 0.8381 | 3.0 | 264 | 0.8103 | 0.67 | 0.6320 |
| 0.744 | 4.0 | 352 | 0.6355 | 0.74 | 0.7250 |
| 0.6186 | 5.0 | 440 | 0.6820 | 0.77 | 0.7660 |
| 0.5534 | 6.0 | 528 | 0.6782 | 0.76 | 0.7519 |
| 0.4677 | 7.0 | 616 | 0.6447 | 0.79 | 0.7810 |
| 0.4132 | 8.0 | 704 | 0.7044 | 0.76 | 0.7519 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Two_Minute_Papers_RVC | 0x3e9 | 2023-09-07T06:27:31Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:55Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Two Minute Papers

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Two Minute Papers | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Two_Minute_Papers_RVC/resolve/main/twominutepapers.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1134363420221784145) |
|
severinsimmler/xlm-roberta-longformer-large-16384 | severinsimmler | 2023-09-07T06:26:43Z | 329 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"feature-extraction",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2004.05150",
"license:mit",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-09-06T15:34:37Z | ---
model-index:
- name: xlm-roberta-longformer-base-16384
results: []
license: mit
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# xlm-roberta-longformer-large-16384
⚠️ This is just the PyTorch version of [`hyperonym/xlm-roberta-longformer-large-16384`](https://huggingface.co/hyperonym/xlm-roberta-longformer-large-16384) without any modifications.
**xlm-roberta-longformer** is a multilingual [Longformer](https://arxiv.org/abs/2004.05150) initialized with [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large)'s weights without further pretraining. It is intended to be fine-tuned on a downstream task.
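A minimal sketch of loading the checkpoint for feature extraction on long inputs (the repo id matches this repository; the example text and sequence-length handling are illustrative):
```python
from transformers import AutoTokenizer, AutoModel

model_id = "severinsimmler/xlm-roberta-longformer-large-16384"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "A very long multilingual document ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
```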
|
0x3e9/Trump_RVC | 0x3e9 | 2023-09-07T06:25:46Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:55Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Trump

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Trump | 600 | RVC V2 | [Download](https://huggingface.co/0x3e9/Trump_RVC/resolve/main/trump.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1124550350276407347) |
|
Sonny4Sonnix/test_trainer | Sonny4Sonnix | 2023-09-07T06:18:25Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-05T08:25:18Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6704 | 0.2 | 500 | 0.6205 |
| 0.5688 | 0.4 | 1000 | 0.5265 |
| 0.564 | 0.6 | 1500 | 0.6922 |
| 0.5507 | 0.8 | 2000 | 0.5712 |
| 0.5431 | 1.0 | 2500 | 0.5408 |
| 0.4814 | 1.2 | 3000 | 0.5074 |
| 0.4636 | 1.4 | 3500 | 0.4950 |
| 0.4825 | 1.6 | 4000 | 0.4812 |
| 0.4604 | 1.8 | 4500 | 0.5300 |
| 0.4626 | 2.0 | 5000 | 0.5234 |
| 0.4094 | 2.2 | 5500 | 0.5565 |
| 0.4156 | 2.4 | 6000 | 0.5373 |
| 0.3952 | 2.6 | 6500 | 0.5398 |
| 0.3742 | 2.8 | 7000 | 0.5223 |
| 0.3847 | 3.0 | 7500 | 0.5249 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Bad_mic__Stable_Ronaldo_RVC | 0x3e9 | 2023-09-07T06:14:59Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:53Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Bad mic Stable Ronaldo

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Bad mic Stable Ronaldo | 50 | RVC V2 | [Download](https://huggingface.co/0x3e9/Bad_mic__Stable_Ronaldo_RVC/resolve/main/stableronaldo.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1130310529500586034) |
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_synonym-1 | ThuyNT03 | 2023-09-07T06:12:50Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T21:04:43Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_synonym-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8727
- Accuracy: 0.76
- F1: 0.7592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0219 | 1.0 | 87 | 0.7346 | 0.68 | 0.6276 |
| 0.7518 | 2.0 | 174 | 0.5934 | 0.75 | 0.7425 |
| 0.6023 | 3.0 | 261 | 0.7553 | 0.7 | 0.6975 |
| 0.479 | 4.0 | 348 | 0.7275 | 0.72 | 0.7118 |
| 0.3515 | 5.0 | 435 | 0.9068 | 0.74 | 0.7306 |
| 0.2646 | 6.0 | 522 | 0.7953 | 0.77 | 0.7633 |
| 0.2123 | 7.0 | 609 | 0.8673 | 0.77 | 0.7652 |
| 0.1535 | 8.0 | 696 | 0.8727 | 0.76 | 0.7592 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0025 | bigmorning | 2023-09-07T06:10:46Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-07T06:10:38Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1717
- Train Accuracy: 0.0392
- Train Wermet: 0.0635
- Validation Loss: 1.9791
- Validation Accuracy: 0.0282
- Validation Wermet: 0.0928
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
elkcloner/Reinforce-1 | elkcloner | 2023-09-07T06:09:51Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T06:09:41Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_synonym-1 | ThuyNT03 | 2023-09-07T06:07:32Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:57:06Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_replace_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_replace_synonym-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1150
- Accuracy: 0.7
- F1: 0.7043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9797 | 1.0 | 86 | 0.7643 | 0.69 | 0.6814 |
| 0.7133 | 2.0 | 172 | 0.6942 | 0.72 | 0.7190 |
| 0.5442 | 3.0 | 258 | 0.7180 | 0.7 | 0.6996 |
| 0.4045 | 4.0 | 344 | 0.9280 | 0.72 | 0.7240 |
| 0.2954 | 5.0 | 430 | 0.9419 | 0.68 | 0.6914 |
| 0.2222 | 6.0 | 516 | 1.0002 | 0.71 | 0.7164 |
| 0.1716 | 7.0 | 602 | 1.0722 | 0.71 | 0.7195 |
| 0.1527 | 8.0 | 688 | 1.1150 | 0.7 | 0.7043 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Shancs8876/Apoo | Shancs8876 | 2023-09-07T06:05:02Z | 0 | 1 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2023-09-07T06:05:02Z | ---
license: bigscience-openrail-m
---
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_BERT-1 | ThuyNT03 | 2023-09-07T05:57:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:48:13Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_BERT-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3341
- Accuracy: 0.67
- F1: 0.6755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0115 | 1.0 | 87 | 0.8610 | 0.61 | 0.5566 |
| 0.7417 | 2.0 | 174 | 0.7207 | 0.7 | 0.6969 |
| 0.6031 | 3.0 | 261 | 0.6915 | 0.75 | 0.7488 |
| 0.4368 | 4.0 | 348 | 0.8041 | 0.73 | 0.7358 |
| 0.3308 | 5.0 | 435 | 1.0670 | 0.65 | 0.6541 |
| 0.2463 | 6.0 | 522 | 1.0742 | 0.68 | 0.6881 |
| 0.1811 | 7.0 | 609 | 1.2753 | 0.68 | 0.6865 |
| 0.1364 | 8.0 | 696 | 1.3341 | 0.67 | 0.6755 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Obama_RVC | 0x3e9 | 2023-09-07T05:56:28Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:50Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Obama

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Obama | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Obama_RVC/resolve/main/obama.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1112296612191019008) |
|
randomnumb/ppo-Huggy | randomnumb | 2023-09-07T05:54:39Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-07T05:54:33Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: randomnumb/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
0x3e9/Mystery_Recapped_RVC | 0x3e9 | 2023-09-07T05:52:15Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:50Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Mystery Recapped

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Mystery Recapped | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Mystery_Recapped_RVC/resolve/main/mysteryrecapped.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1135448772227387482) |
|
versae/nb-nst-tts | versae | 2023-09-07T05:50:23Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"speecht5",
"text-to-audio",
"text-to-speech",
"no",
"nb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-08-30T18:43:11Z | ---
license: apache-2.0
language:
- 'no'
- nb
library_name: transformers
pipeline_tag: text-to-speech
--- |
0x3e9/MumboJumbo_RVC | 0x3e9 | 2023-09-07T05:50:13Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:49Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# MumboJumbo

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| MumboJumbo | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/MumboJumbo_RVC/resolve/main/mumbojumbo.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1128212529139695636) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_tfidf-1 | ThuyNT03 | 2023-09-07T05:49:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:39:22Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_tfidf-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1592
- Accuracy: 0.72
- F1: 0.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9839 | 1.0 | 87 | 0.7510 | 0.63 | 0.5632 |
| 0.6788 | 2.0 | 174 | 0.7245 | 0.71 | 0.7109 |
| 0.5471 | 3.0 | 261 | 0.7273 | 0.66 | 0.6683 |
| 0.3945 | 4.0 | 348 | 0.7304 | 0.72 | 0.7261 |
| 0.3062 | 5.0 | 435 | 0.9655 | 0.73 | 0.7360 |
| 0.2197 | 6.0 | 522 | 0.9765 | 0.73 | 0.7357 |
| 0.1692 | 7.0 | 609 | 1.1266 | 0.73 | 0.7357 |
| 0.1331 | 8.0 | 696 | 1.1592 | 0.72 | 0.7269 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Minging_RVC | 0x3e9 | 2023-09-07T05:46:36Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:49Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Minging

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Minging | 1000 | RVC V2 | [Download](https://huggingface.co/0x3e9/Minging_RVC/resolve/main/minging.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1143106184388296835) |
|
0x3e9/Mana_Renewal_RVC | 0x3e9 | 2023-09-07T05:45:43Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:49Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Mana Renewal

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Mana Renewal | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Mana_Renewal_RVC/resolve/main/ManaRenewal.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1131869655867334717) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_w2v-1 | ThuyNT03 | 2023-09-07T05:41:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:29:54Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_w2v-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_w2v-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2040
- Accuracy: 0.72
- F1: 0.7257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9886 | 1.0 | 85 | 0.7499 | 0.65 | 0.5970 |
| 0.6861 | 2.0 | 170 | 0.7312 | 0.7 | 0.7029 |
| 0.5673 | 3.0 | 255 | 0.6732 | 0.73 | 0.7328 |
| 0.4086 | 4.0 | 340 | 0.8771 | 0.73 | 0.7308 |
| 0.2958 | 5.0 | 425 | 0.9051 | 0.74 | 0.7453 |
| 0.2039 | 6.0 | 510 | 1.0350 | 0.73 | 0.7314 |
| 0.1743 | 7.0 | 595 | 1.1745 | 0.7 | 0.7097 |
| 0.1458 | 8.0 | 680 | 1.2040 | 0.72 | 0.7257 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Kafu_RVC | 0x3e9 | 2023-09-07T05:41:20Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:48Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Kafu

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Kafu | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Kafu_RVC/resolve/main/kafu.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1114069662783782972) |
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_synonym-1 | ThuyNT03 | 2023-09-07T05:40:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:31:22Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_synonym-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1442
- Accuracy: 0.74
- F1: 0.7298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0022 | 1.0 | 88 | 0.7751 | 0.7 | 0.6764 |
| 0.7565 | 2.0 | 176 | 0.7416 | 0.67 | 0.6181 |
| 0.6017 | 3.0 | 264 | 0.6780 | 0.72 | 0.7105 |
| 0.4341 | 4.0 | 352 | 0.6895 | 0.77 | 0.7620 |
| 0.3477 | 5.0 | 440 | 0.7465 | 0.76 | 0.7535 |
| 0.2429 | 6.0 | 528 | 0.9202 | 0.73 | 0.7207 |
| 0.1659 | 7.0 | 616 | 1.1246 | 0.74 | 0.7267 |
| 0.1623 | 8.0 | 704 | 1.1442 | 0.74 | 0.7298 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Jerma_Teacher_Noise_RVC | 0x3e9 | 2023-09-07T05:39:20Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:47Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Jerma Teacher Noise

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
It's recommended to visit the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Jerma Teacher Noise | 2000 | RVC V2 | [Download](https://huggingface.co/0x3e9/Jerma_Teacher_Noise_RVC/resolve/main/teachernoise.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1143302989172461598) |
|
0x3e9/H2ODelirious_RVC | 0x3e9 | 2023-09-07T05:38:30Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:47Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# H2ODelirious

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| H2ODelirious | 150 | RVC V2 | [Download](https://huggingface.co/0x3e9/H2ODelirious_RVC/resolve/main/H2ODelirious.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1124967106694365194) |
|
0x3e9/Guitar_RVC | 0x3e9 | 2023-09-07T05:35:36Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:46Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Guitar

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Guitar | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Guitar_RVC/resolve/main/guitar.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1114668642873913414) |
|
0x3e9/Grizzy_RVC | 0x3e9 | 2023-09-07T05:32:23Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:46Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Grizzy

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Grizzy | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Grizzy_RVC/resolve/main/grizzy.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1144125420518785124) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_synonym-1 | ThuyNT03 | 2023-09-07T05:32:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:19:22Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_synonym-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0320
- Accuracy: 0.72
- F1: 0.7220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0097 | 1.0 | 87 | 0.9465 | 0.59 | 0.5204 |
| 0.824 | 2.0 | 174 | 0.7438 | 0.68 | 0.6540 |
| 0.6486 | 3.0 | 261 | 0.7329 | 0.66 | 0.6590 |
| 0.4726 | 4.0 | 348 | 0.7294 | 0.7 | 0.7029 |
| 0.358 | 5.0 | 435 | 0.8954 | 0.69 | 0.6983 |
| 0.2555 | 6.0 | 522 | 0.8258 | 0.73 | 0.7315 |
| 0.2173 | 7.0 | 609 | 1.0117 | 0.73 | 0.7328 |
| 0.173 | 8.0 | 696 | 1.0320 | 0.72 | 0.7220 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-train-1 | ThuyNT03 | 2023-09-07T05:30:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:27:32Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-train-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-train-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5935
- Accuracy: 0.75
- F1: 0.7399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1004 | 1.0 | 44 | 1.0883 | 0.38 | 0.2093 |
| 0.994 | 2.0 | 88 | 0.8795 | 0.62 | 0.5932 |
| 0.895 | 3.0 | 132 | 0.7303 | 0.65 | 0.5849 |
| 0.7332 | 4.0 | 176 | 0.7155 | 0.69 | 0.6766 |
| 0.6186 | 5.0 | 220 | 0.5556 | 0.72 | 0.7040 |
| 0.5392 | 6.0 | 264 | 0.5756 | 0.76 | 0.7501 |
| 0.4558 | 7.0 | 308 | 0.5960 | 0.72 | 0.7050 |
| 0.4164 | 8.0 | 352 | 0.5935 | 0.75 | 0.7399 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Grian_RVC | 0x3e9 | 2023-09-07T05:29:01Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:45Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Grian

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Grian | 150 | RVC V2 | [Download](https://huggingface.co/0x3e9/Grian_RVC/resolve/main/grian.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1127884963774201856) |
|
0x3e9/GoodTimesWithScar_RVC | 0x3e9 | 2023-09-07T05:25:35Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:45Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# GoodTimesWithScar

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| GoodTimesWithScar | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/GoodTimesWithScar_RVC/resolve/main/goodtimeswithscar.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1128406096957145178) |
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_swap-1 | ThuyNT03 | 2023-09-07T05:20:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-04T20:12:55Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_swap-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_swap-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2104
- Accuracy: 0.75
- F1: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0503 | 1.0 | 87 | 0.9473 | 0.62 | 0.5062 |
| 0.7772 | 2.0 | 174 | 0.6460 | 0.74 | 0.7214 |
| 0.5668 | 3.0 | 261 | 0.6739 | 0.76 | 0.7474 |
| 0.3978 | 4.0 | 348 | 0.7077 | 0.78 | 0.7737 |
| 0.2502 | 5.0 | 435 | 1.0460 | 0.75 | 0.7340 |
| 0.1757 | 6.0 | 522 | 1.0285 | 0.74 | 0.7355 |
| 0.1439 | 7.0 | 609 | 1.1870 | 0.75 | 0.7454 |
| 0.1178 | 8.0 | 696 | 1.2104 | 0.75 | 0.7434 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
danorel/poca-SoccerTwos | danorel | 2023-09-07T05:16:06Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-07T05:15:47Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: danorel/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
0x3e9/Darth_Vader_RVC | 0x3e9 | 2023-09-07T05:15:25Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:43Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Darth Vader

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Darth Vader | 150 | RVC V2 | [Download](https://huggingface.co/0x3e9/Darth_Vader_RVC/resolve/main/vader.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1113193444421161100) |
|
0x3e9/CDawgVA_RVC | 0x3e9 | 2023-09-07T05:13:29Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:43Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# CDawgVA

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| CDawgVA | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/CDawgVA_RVC/resolve/main/cdawgva.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1123376846071922728) |
|
tiggerhelloworld/rl_course_vizdoom_health_gathering_supreme | tiggerhelloworld | 2023-09-07T05:07:13Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T05:07:05Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.86 +/- 4.82
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r tiggerhelloworld/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
0x3e9/BdoubleO100_RVC | 0x3e9 | 2023-09-07T05:06:33Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T09:00:42Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# BdoubleO100

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| BdoubleO100 | 200 | RVC V2 | [Download](https://huggingface.co/0x3e9/BdoubleO100_RVC/resolve/main/BdoubleO100.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1129709422260801546) |
|
0x3e9/AbroadInJapan_RVC | 0x3e9 | 2023-09-07T04:59:44Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"region:us"
]
| audio-to-audio | 2023-09-04T08:57:14Z | ---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# AbroadInJapan

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models and samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| AbroadInJapan | 500 | RVC V2 | [Download](https://huggingface.co/0x3e9/AbroadInJapan_RVC/resolve/main/abroadinjapan.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1126434786427888133) |
|
tensor-diffusion/melaura-v1-1 | tensor-diffusion | 2023-09-07T04:50:54Z | 2 | 3 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"DiffusionPipeline",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-05T12:35:01Z | ---
license: openrail++
pipeline_tag: text-to-image
tags:
- stable-diffusion
- text-to-image
- diffusers
- DiffusionPipeline
inference:
parameter:
negative_prompt: >-
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality, jpeg,
artifacts, signature, watermark, username, blurry, ugly, duplicate,
morbid, mutilated, extra fingers, mutated hands, poorly drawn hands,
poorly drawn face, mutation, deformed, blurry, bad anatomy, bad
proportions, cloned face, disfigured, out of frame, extra limbs, bad
anatomy, gross proportions, malformed limbs, missing arms, missing legs,
extra arms, extra legs, mutated hands, fused fingers, too many fingers,
long neck, text, letters, signature, web address, copyright name,
username, error, extra digit, fewer digits, loadscreen, grid, stock image,
a stock photo, promo poster, fat, text, logo, brand, watermark, water
mark, low quality,
widget:
- text: melaura, girl, hd, pink lips, detailed, age 16, Off-shoulder top
example_title: Off-shoulder top
- text: melaura, girl, hd, shiny cheeks
example_title: shiny cheeks
library_name: diffusers
--- |
CyberHarem/matoba_risa_theidolmastercinderellagirlsu149 | CyberHarem | 2023-09-07T04:46:24Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/matoba_risa_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-07T04:29:30Z | ---
license: mit
datasets:
- CyberHarem/matoba_risa_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of matoba_risa_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3080, you need to download `3080/matoba_risa_theidolmastercinderellagirlsu149.pt` as the embedding and `3080/matoba_risa_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3080**, with the score of 0.895. The trigger words are:
1. `matoba_risa_theidolmastercinderellagirlsu149`
2. `black_hair, long_hair, twintails, yellow_eyes, ribbon, hair_ribbon, jewelry, necklace, hair_between_eyes`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6600 | 0.843 | [Download](6600/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6160 | 0.793 | [Download](6160/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| 5720 | 0.850 | [Download](5720/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5280 | 0.834 | [Download](5280/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4840 | 0.784 | [Download](4840/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) |  |  |
| 4400 | 0.873 | [Download](4400/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 3960 | 0.842 | [Download](3960/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3520 | 0.805 | [Download](3520/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) |  |  |
| **3080** | **0.895** | [**Download**](3080/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) |  |  |
| 2640 | 0.785 | [Download](2640/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 2200 | 0.774 | [Download](2200/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) |  |  |
| 1760 | 0.890 | [Download](1760/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) |  |  |
| 1320 | 0.862 | [Download](1320/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 880 | 0.808 | [Download](880/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) |  |  |
| 440 | 0.759 | [Download](440/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) |  |  |
|
newronai/clma2-13b-Chat-Adapter-Plus | newronai | 2023-09-07T04:46:03Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T04:45:56Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
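
For reference, here is a minimal loading sketch consistent with the 8-bit config listed above. The base model id below is an assumption (this card does not name the base checkpoint), and the `BitsAndBytesConfig` simply mirrors the values shown:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumption: the adapter name suggests a Llama-2 13B chat base, but the card does not confirm it.
base_id = "meta-llama/Llama-2-13b-chat-hf"
adapter_id = "newronai/clma2-13b-Chat-Adapter-Plus"

# Mirror the quantization settings listed above (8-bit weights, fp32 compute dtype).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the PEFT adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```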
### Framework versions
- PEFT 0.6.0.dev0
|
yesj1234/mbart_cycle0_ko-ja | yesj1234 | 2023-09-07T04:42:27Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"ko",
"ja",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-07T03:40:57Z | ---
language:
- ko
- ja
base_model: ./ja_reduced_model
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart_cycle0_ko-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_cycle0_ko-ja
This model is a fine-tuned version of [mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on a custom dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0107
- Bleu: 25.8676
- Gen Len: 20.5833
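
This card does not include inference code. Below is a rough sketch following standard mBART-CC25 usage (Korean source, Japanese target via a forced BOS language token); the `ko_KR`/`ja_XX` language codes are an assumption carried over from the base tokenizer, since the card notes a reduced-vocabulary base model:

```python
from transformers import MBartForConditionalGeneration, AutoTokenizer

repo_id = "yesj1234/mbart_cycle0_ko-ja"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = MBartForConditionalGeneration.from_pretrained(repo_id)

# Assumption: the fine-tuned tokenizer still uses the mBART-CC25 language codes.
tokenizer.src_lang = "ko_KR"
inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")

# Force the decoder to start with the Japanese language token.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("ja_XX"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```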
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| No log | 3.57 | 50 | 12.5219 | 0.0216 | 443.0833 |
| No log | 7.14 | 100 | 9.2255 | 0.0315 | 1024.0 |
| No log | 10.71 | 150 | 6.4885 | 0.0151 | 779.0 |
| No log | 14.29 | 200 | 5.3925 | 0.928 | 101.5 |
| No log | 17.86 | 250 | 5.4016 | 13.1472 | 105.6667 |
| No log | 21.43 | 300 | 6.5062 | 11.5401 | 158.3333 |
| No log | 25.0 | 350 | 6.0911 | 20.6997 | 25.1667 |
| No log | 28.57 | 400 | 6.5541 | 18.9521 | 20.6667 |
| No log | 32.14 | 450 | 6.6978 | 21.2662 | 25.1667 |
| 6.3858 | 35.71 | 500 | 6.9643 | 10.1265 | 17.3333 |
| 6.3858 | 39.29 | 550 | 6.6467 | 25.8218 | 19.6667 |
| 6.3858 | 42.86 | 600 | 7.1260 | 13.6948 | 18.75 |
| 6.3858 | 46.43 | 650 | 7.0505 | 19.5121 | 21.0 |
| 6.3858 | 50.0 | 700 | 7.0107 | 25.8676 | 20.5833 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
abymmathew/RoBERTa-large-PM-M3-Voc-hf-finetuned-ner | abymmathew | 2023-09-07T04:34:37Z | 102 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"medical",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-06T22:55:03Z | ---
tags:
- generated_from_trainer
- medical
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa-large-PM-M3-Voc-hf-finetuned-ner
results: []
license: afl-3.0
widget:
- text: "CASE: A 28-year-old previously healthy man presented with a 6-week history of palpitations.
The symptoms occurred during rest, 2–3 times per week, lasted up to 30 minutes at a time and were associated with dyspnea.
Except for a grade 2/6 holosystolic tricuspid regurgitation murmur (best heard at the left sternal border with inspiratory accentuation), physical examination yielded unremarkable findings."
example_title: "example 1"
- text: "A 63-year-old woman with no known cardiac history presented with a sudden onset of dyspnea requiring intubation and ventilatory support out of hospital.
She denied preceding symptoms of chest discomfort, palpitations, syncope or infection.
The patient was afebrile and normotensive, with a sinus tachycardia of 140 beats/min."
example_title: "example 2"
- text: "A 48 year-old female presented with vaginal bleeding and abnormal Pap smears.
Upon diagnosis of invasive non-keratinizing SCC of the cervix, she underwent a radical hysterectomy with salpingo-oophorectomy which demonstrated positive spread to the pelvic lymph nodes and the parametrium.
Pathological examination revealed that the tumour also extensively involved the lower uterine segment."
example_title: "example 3"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-large-PM-M3-Voc-hf-finetuned-ner
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3493
- Precision: 0.6836
- Recall: 0.8494
- F1: 0.7575
- Accuracy: 0.9116
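
The widget examples in the card metadata suggest clinical NER on case-report text. A minimal inference sketch with the `token-classification` pipeline is shown below (the entity label names depend on the checkpoint's `id2label` mapping, which this card does not list):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="abymmathew/RoBERTa-large-PM-M3-Voc-hf-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = (
    "A 63-year-old woman with no known cardiac history presented with "
    "a sudden onset of dyspnea requiring intubation."
)
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```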
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 23 | 1.2145 | 0.1762 | 0.0340 | 0.0569 | 0.7236 |
| No log | 2.0 | 46 | 0.8308 | 0.4435 | 0.3547 | 0.3942 | 0.7735 |
| No log | 3.0 | 69 | 0.7051 | 0.4419 | 0.6091 | 0.5122 | 0.7842 |
| No log | 4.0 | 92 | 0.6051 | 0.4989 | 0.6416 | 0.5613 | 0.8085 |
| No log | 5.0 | 115 | 0.5500 | 0.5501 | 0.6449 | 0.5937 | 0.8243 |
| No log | 6.0 | 138 | 0.5272 | 0.5351 | 0.6892 | 0.6025 | 0.8277 |
| No log | 7.0 | 161 | 0.5256 | 0.5426 | 0.7143 | 0.6167 | 0.8316 |
| No log | 8.0 | 184 | 0.4943 | 0.5583 | 0.7582 | 0.6431 | 0.8479 |
| No log | 9.0 | 207 | 0.4196 | 0.6217 | 0.7475 | 0.6788 | 0.8773 |
| No log | 10.0 | 230 | 0.4065 | 0.6270 | 0.7789 | 0.6948 | 0.8850 |
| No log | 11.0 | 253 | 0.4367 | 0.6012 | 0.8062 | 0.6887 | 0.8776 |
| No log | 12.0 | 276 | 0.3917 | 0.6301 | 0.8125 | 0.7098 | 0.8915 |
| No log | 13.0 | 299 | 0.3563 | 0.6736 | 0.8191 | 0.7393 | 0.9042 |
| No log | 14.0 | 322 | 0.3654 | 0.6653 | 0.8335 | 0.7400 | 0.9040 |
| No log | 15.0 | 345 | 0.3637 | 0.6611 | 0.8439 | 0.7414 | 0.9057 |
| No log | 16.0 | 368 | 0.3522 | 0.6785 | 0.8453 | 0.7528 | 0.9100 |
| No log | 17.0 | 391 | 0.3469 | 0.6841 | 0.8472 | 0.7569 | 0.9115 |
| No log | 18.0 | 414 | 0.3520 | 0.6821 | 0.8490 | 0.7565 | 0.9110 |
| No log | 19.0 | 437 | 0.3485 | 0.6848 | 0.8494 | 0.7583 | 0.9121 |
| No log | 20.0 | 460 | 0.3493 | 0.6836 | 0.8494 | 0.7575 | 0.9116 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
4bit/Qwen-VL-Chat-Int4 | 4bit | 2023-09-07T04:10:08Z | 81 | 16 | transformers | [
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2308.12966",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-09-07T04:05:34Z | ---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
---
# Qwen-VL-Chat-Int4
<br>
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_vl.jpg" width="400"/>
<p>
<br>
<p align="center">
Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖 <a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>  | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖 <a>| <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>  | Qwen-VL-Chat-Int4 <a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
<br>
<a href="assets/wechat.png">WeChat</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   <a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a>  |  <a href="https://arxiv.org/abs/2308.12966">Report</a>
</p>
<br>
**Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型性能强大,具备多语言对话、多图交错对话等能力,并支持中文开放域定位和细粒度图像识别与理解。
**Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts image, text, and bounding box as inputs, and outputs text and bounding box. The Qwen-VL series supports multilingual dialogue, interleaved multi-image conversations, open-domain grounding in Chinese, and fine-grained image recognition and understanding.
目前,我们提供了Qwen-VL和Qwen-VL-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL-Chat的量化模型Qwen-VL-Chat-Int4仓库。
We release Qwen-VL and Qwen-VL-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This repo is the one for Qwen-VL-Chat-Int4.
<br>
## 安装要求 (Requirements)
* python 3.8及以上版本
* pytorch2.0及以上版本
* 建议使用CUDA 11.4及以上
* python 3.8 and above
* pytorch 2.0 and above are recommended
* CUDA 11.4 and above are recommended
<br>
## 快速开始 (Quickstart)
我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用Qwen-VL-Chat-Int4。
在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
Below, we provide simple examples to show how to use Qwen-VL-Chat-Int4 with 🤗 Transformers.
Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
```bash
pip install -r requirements.txt
pip install optimum
git clone https://github.com/JustinLin610/AutoGPTQ.git && cd AutoGPTQ
pip install -v .
```
接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。
Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).
#### 🤗 Transformers
To use Qwen-VL-Chat-Int4 for inference, all you need to do is input a few lines of code as demonstrated below. However, **please make sure that you are using the latest code.**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(1234)
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat-Int4", trust_remote_code=True)
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat-Int4", device_map="cuda", trust_remote_code=True).eval()
# 1st dialogue turn
query = tokenizer.from_list_format([
{'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
{'text': '这是什么'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
# 2nd dialogue turn
response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
print(response)
# <ref>击掌</ref><box>(517,508),(589,611)</box>
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image:
image.save('1.jpg')
else:
print("no box")
```
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
<p>
<br>
## 量化 (Quantization)
### 效果评测 (Performance)
我们列出不同精度下模型在评测基准 **[TouchStone](https://github.com/OFA-Sys/TouchStone)** 上的表现,并发现量化模型并没有显著性能损失。结果如下所示:
We illustrate the model performance of both BF16 and Int4 models on the benchmark **[TouchStone](https://github.com/OFA-Sys/TouchStone)**, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
| Quantization | ZH. | EN |
| ------------ | :--------: | :-----------: |
| BF16 | 401.2 | 645.2 |
| Int4 | 386.6 | 651.4 |
### 推理速度 (Inference Speed)
我们测算了在输入一张图片(即258个token)的条件下BF16和Int4的模型生成1792 (2048-258) 和 7934 (8192-258) 个token的平均速度。
We measured the average inference speed (tokens/s) of generating 1792 (2048-258) and 7934 (8192-258) tokens with the context of an image (which takes 258 tokens) under BF16 precision and Int4 quantization, respectively.
| Quantization | Speed (2048 tokens) | Speed (8192 tokens) |
| ------------ | :-----------------: | :-----------------: |
| BF16 | 28.87 | 24.32 |
| Int4 | 37.79 | 34.34 |
推理速度测算是在单卡 A100-SXM4-80G GPU上运行,使用PyTorch 2.0.1及CUDA 11.4。
The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4.
### GPU显存占用 (GPU Memory Usage)
我们还测算了在一张图片输入的条件下BF16和Int4模型生成1792 (2048-258) 和 7934 (8192-258) 个token所需显存。结果如下所示:
We also profile the peak GPU memory usage for encoding 1792 (2048-258) tokens (including an image) as context (and generating single token) and generating 7934 (8192-258) tokens (with an image as context) under BF16 or Int4 quantization level, respectively. The results are shown below.
| Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 22.60GB | 28.01GB |
| Int4 | 11.82GB | 17.23GB |
上述速度和显存测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py)完成。
The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py).
<br>
## 评测
我们从两个角度评测了两个模型的能力:
1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
- Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
- General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
- Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
- Referring Expression Compression:评测模型给定物体描述画检测框的能力;
2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
- 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作,商品比较、图片解题等**尽可能广泛的类别**。
- 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
- 评测同时包含英文版本和中文版本。
评测结果如下:
We evaluated the model's ability from two perspectives:
1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
- Zero-shot Caption: Evaluate model's zero-shot image captioning ability on unseen datasets;
- General VQA: Evaluate the general question-answering ability of pictures, such as the judgment, color, number, category, etc;
- Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
- Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
- The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories. Such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc;
- In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
- The benchmark includes both English and Chinese versions.
The results of the evaluation are as follows:
Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has a more comprehensive coverage in terms of capability range.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
<p>
### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="2">Zero-shot Captioning</th>
<th colspan="5">General VQA</th>
</tr>
<tr>
<th>NoCaps</th>
<th>Flickr30K</th>
<th>VQAv2<sup>dev</sup></th>
<th>OK-VQA</th>
<th>GQA</th>
<th>SciQA-Img<br>(0-shot)</th>
<th>VizWiz<br>(0-shot)</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="10">Generalist<br>Models</td>
<td>Flamingo-9B</td>
<td>-</td>
<td>61.5</td>
<td>51.8</td>
<td>44.7</td>
<td>-</td>
<td>-</td>
<td>28.8</td>
</tr>
<tr>
<td>Flamingo-80B</td>
<td>-</td>
<td>67.2</td>
<td>56.3</td>
<td>50.6</td>
<td>-</td>
<td>-</td>
<td>31.6</td>
</tr>
<tr>
<td>Unified-IO-XL</td>
<td>100.0</td>
<td>-</td>
<td>77.9</td>
<td>54.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Kosmos-1</td>
<td>-</td>
<td>67.1</td>
<td>51.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>29.2</td>
</tr>
<tr>
<td>Kosmos-2</td>
<td>-</td>
<td>66.7</td>
<td>45.6</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>BLIP-2 (Vicuna-13B)</td>
<td>103.9</td>
<td>71.6</td>
<td>65.0</td>
<td>45.9</td>
<td>32.3</td>
<td>61.0</td>
<td>19.6</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td><strong>121.9</strong></td>
<td>82.8</td>
<td>-</td>
<td>-</td>
<td>49.5</td>
<td>63.1</td>
<td>33.4</td>
</tr>
<tr>
<td>Shikra (Vicuna-13B)</td>
<td>-</td>
<td>73.9</td>
<td>77.36</td>
<td>47.16</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td><strong>Qwen-VL (Qwen-7B)</strong></td>
<td>121.4</td>
<td><b>85.8</b></td>
<td><b>78.8</b></td>
<td><b>58.6</b></td>
<td><b>59.3</b></td>
<td>67.1</td>
<td>35.2</td>
</tr>
<!-- <tr>
<td>Qwen-VL (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>63.6</td>
<td>-</td>
<td>-</td>
<td>39.1</td>
</tr> -->
<tr>
<td>Qwen-VL-Chat</td>
<td>120.2</td>
<td>81.0</td>
<td>78.2</td>
<td>56.6</td>
<td>57.5</td>
<td><b>68.2</b></td>
<td><b>38.9</b></td>
</tr>
<!-- <tr>
<td>Qwen-VL-Chat (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>60.6</td>
<td>-</td>
<td>-</td>
<td>44.45</td>
</tr> -->
<tr>
<td>Previous SOTA<br>(Per Task Fine-tuning)</td>
<td>-</td>
<td>127.0<br>(PALI-17B)</td>
<td>84.5<br>(InstructBLIP<br>-FlanT5-XL)</td>
<td>86.1<br>(PALI-X<br>-55B)</td>
<td>66.1<br>(PALI-X<br>-55B)</td>
<td>72.1<br>(CFR)</td>
<td>92.53<br>(LLaVa+<br>GPT-4)</td>
<td>70.9<br>(PALI-X<br>-55B)</td>
</tr>
</tbody>
</table>
- 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
- 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
- For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and competitive results on Nocaps with InstructBlip.
- For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
### 文本导向的视觉问答 (Text-oriented VQA)
<table>
<thead>
<tr>
<th>Model type</th>
<th>Model</th>
<th>TextVQA</th>
<th>DocVQA</th>
<th>ChartQA</th>
<th>AI2D</th>
<th>OCR-VQA</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="5">Generalist Models</td>
<td>BLIP-2 (Vicuna-13B)</td>
<td>42.4</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td>50.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>mPLUG-DocOwl (LLaMA-7B)</td>
<td>52.6</td>
<td>62.2</td>
<td>57.4</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Pic2Struct-Large (1.3B)</td>
<td>-</td>
<td><b>76.6</b></td>
<td>58.6</td>
<td>42.1</td>
<td>71.3</td>
</tr>
<tr>
<td>Qwen-VL (Qwen-7B)</td>
<td><b>63.8</b></td>
<td>65.1</td>
<td><b>65.7</b></td>
<td><b>62.3</b></td>
<td><b>75.7</b></td>
</tr>
<tr>
<td>Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>PALI-X-55B (Single-task FT)<br>(Without OCR Pipeline)</td>
<td>71.44</td>
<td>80.0</td>
<td>70.0</td>
<td>81.2</td>
<td>75.0</td>
</tr>
</tbody>
</table>
- 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
- 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
- In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
- Resolution is important for several above evaluations. While most open-source LVLM models with 224 resolution are incapable of these evaluations or can only solve these by cutting images, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
### 细粒度视觉定位 (Referring Expression Comprehension)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="3">RefCOCO</th>
<th colspan="3">RefCOCO+</th>
<th colspan="2">RefCOCOg</th>
<th>GRIT</th>
</tr>
<tr>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val-u</th>
<th>test-u</th>
<th>refexp</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="8">Generalist Models</td>
<td>GPV-2</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>51.50</td>
</tr>
<tr>
<td>OFA-L*</td>
<td>79.96</td>
<td>83.67</td>
<td>76.39</td>
<td>68.29</td>
<td>76.00</td>
<td>61.75</td>
<td>67.57</td>
<td>67.58</td>
<td>61.70</td>
</tr>
<tr>
<td>Unified-IO</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td><b>78.61</b></td>
</tr>
<tr>
<td>VisionLLM-H</td>
<td></td>
<td>86.70</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Shikra-7B</td>
<td>87.01</td>
<td>90.61</td>
<td>80.24 </td>
<td>81.60</td>
<td>87.36</td>
<td>72.12</td>
<td>82.27</td>
<td>82.19</td>
<td>69.34</td>
</tr>
<tr>
<td>Shikra-13B</td>
<td>87.83 </td>
<td>91.11</td>
<td>81.81</td>
<td>82.89</td>
<td>87.79</td>
<td>74.41</td>
<td>82.64</td>
<td>83.16</td>
<td>69.03</td>
</tr>
<tr>
<td>Qwen-VL-7B</td>
<td><b>89.36</b></td>
<td>92.26</td>
<td><b>85.34</b></td>
<td><b>83.12</b></td>
<td>88.25</td>
<td><b>77.21</b></td>
<td>85.58</td>
<td>85.48</td>
<td>78.22</td>
</tr>
<tr>
<td>Qwen-VL-7B-Chat</td>
<td>88.55</td>
<td><b>92.27</b></td>
<td>84.51</td>
<td>82.82</td>
<td><b>88.59</b></td>
<td>76.79</td>
<td><b>85.96</b></td>
<td><b>86.32</b></td>
<td>-</td>
<tr>
<td rowspan="3">Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>G-DINO-L</td>
<td>90.56 </td>
<td>93.19</td>
<td>88.24</td>
<td>82.75</td>
<td>88.95</td>
<td>75.92</td>
<td>86.13</td>
<td>87.02</td>
<td>-</td>
</tr>
<tr>
<td>UNINEXT-H</td>
<td>92.64 </td>
<td>94.33</td>
<td>91.46</td>
<td>85.24</td>
<td>89.63</td>
<td>79.79</td>
<td>88.73</td>
<td>89.37</td>
<td>-</td>
</tr>
<tr>
<td>ONE-PEACE</td>
<td>92.58 </td>
<td>94.18</td>
<td>89.26</td>
<td>88.77</td>
<td>92.21</td>
<td>83.23</td>
<td>89.22</td>
<td>89.27</td>
<td>-</td>
</tr>
</tbody>
</table>
- 在定位任务上,Qwen-VL 全面超过 Shikra-13B,取得了目前 Generalist LVLM 模型上在 Refcoco 上的 **SOTA**。
- Qwen-VL 并没有在任何中文定位数据上训练过,但通过中文 Caption 数据和 英文 Grounding 数据的训练,可以 Zero-shot 泛化出中文 Grounding 能力。
我们提供了以上**所有**评测脚本以供复现我们的实验结果。请阅读 [eval/EVALUATION.md](eval/EVALUATION.md) 了解更多信息。
- Qwen-VL achieves the **SOTA** in all above referring expression comprehension benchmarks.
- Qwen-VL has not been trained on any Chinese grounding data, but it can still generalize to the Chinese Grounding tasks in a zero-shot way by training Chinese Caption data and English Grounding data.
We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
### 闲聊能力测评 (Chat Evaluation)
TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README_CN.md](touchstone/README.md) for more information.
#### 英语 (English)
| Model | Score |
|---------------|-------|
| PandaGPT | 488.5 |
| MiniGPT4 | 531.7 |
| InstructBLIP | 552.4 |
| LLaMA-AdapterV2 | 590.1 |
| mPLUG-Owl | 605.4 |
| LLaVA | 602.7 |
| Qwen-VL-Chat | 645.2 |
#### 中文 (Chinese)
| Model | Score |
|---------------|-------|
| VisualGLM | 247.1 |
| Qwen-VL-Chat | 401.2 |
Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluation.
<br>
## 常见问题 (FAQ)
如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
If you run into problems, please consult the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and existing issues to search for a solution before opening a new issue.
<br>
## 使用协议 (License Agreement)
研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
Researchers and developers are free to use the codes and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details.
<br>
## 引用 (Citation)
如果你觉得我们的论文和代码对你的研究有帮助,请考虑:star: 和引用 :pencil: :)
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen-VL,
title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
<br>
## 联系我们 (Contact Us)
如果你想给我们的研发团队和产品团队留言,请通过邮件([email protected])联系我们。
If you are interested in leaving a message to either our research team or product team, feel free to send an email to [email protected].
|
ngoan/Llama-2-7b-vietnamese-20k | ngoan | 2023-09-07T03:59:05Z | 143 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"llama-2-7B",
"llama2-vietnamese",
"vietnamese",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-24T06:54:42Z | ---
tags:
- text-generation
- llama-2
- llama-2-7B
- llama2-vietnamese
- vietnamese
---
# Model Card for Llama 2 Fine-Tuned on Vietnamese Instructions
## Model Details
- Model Name: Llama-2-7b-vietnamese-20k
- Architecture: Llama 2 7B
- Fine-tuning Data Size: 20,000 instruction samples
- Purpose: To demonstrate the performance of the Llama 2 model on Vietnamese and gather initial insights. A more comprehensive model and evaluation will be released soon.
- Availability: The model checkpoint can be accessed on Hugging Face: ngoantech/Llama-2-7b-vietnamese-20k
## Intended Use
This model is intended for researchers, developers, and enthusiasts who are interested in understanding the performance of the Llama 2 model on Vietnamese. It can be used for generating Vietnamese text based on given instructions or for any other task that requires a Vietnamese language model.
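
A rough usage sketch with 🤗 Transformers is shown below; the plain prompt format is an assumption, since the card does not specify a prompt template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ngoantech/Llama-2-7b-vietnamese-20k"  # checkpoint id as given in this card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Hãy giới thiệu ngắn gọn về thành phố Hà Nội."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```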
## Example Output

## Limitations
- Data Size: The model was fine-tuned on a relatively small dataset of 20,000 instruction samples, which might not capture the full complexity and nuances of the Vietnamese language.
- Preliminary Model: This is an initial experiment with the Llama 2 architecture on Vietnamese. More refined versions and evaluations will be available soon.
- Performance: Specific performance metrics on this fine-tuned model will be provided in the upcoming comprehensive evaluation.
## Ethical Considerations
- Bias and Fairness: Like any other machine learning model, there is a possibility that this model might reproduce or amplify biases present in the training data.
- Use in Critical Systems: As this is a preliminary model, it is recommended not to use it for mission-critical applications without proper validation.
- Fine-tuning Data: The model was fine-tuned on a custom dataset of 20,000 instruction samples in Vietnamese. More details about the composition and source of this dataset will be provided in the detailed evaluation report.
## Credits
We would like to express our gratitude to the creators of the Llama 2 architecture and the Hugging Face community for their tools and resources.
## Contact
[email protected]
https://github.com/ngoanpv |