modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-10 06:28:14) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 515 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-10 06:27:27) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Madhura/qa-model | Madhura | 2023-07-28T16:29:24Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-28T15:59:32Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: qa-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6376
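For reference, a minimal inference sketch with the `transformers` question-answering pipeline (an illustration only, not the author's documented usage):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
qa = pipeline("question-answering", model="Madhura/qa-model")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```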
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3091 |
| 2.7041 | 2.0 | 500 | 1.7406 |
| 2.7041 | 3.0 | 750 | 1.6376 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
jarguello76/text-to-speech-speecht5_finetuned_voxpopuli_es | jarguello76 | 2023-07-28T16:13:55Z | 84 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2023-07-27T22:45:24Z | ---
license: mit
base_model: microsoft/speecht5_tts
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_tts-voxpopuli-jarguello
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts-voxpopuli-jarguello
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4464
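A minimal generation sketch, assuming the standard SpeechT5 classes in `transformers` and x-vector speaker embeddings from the `Matthijs/cmu-arctic-xvectors` dataset (both are assumptions, not documented by the author):
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("jarguello76/text-to-speech-speecht5_finetuned_voxpopuli_es")
model = SpeechT5ForTextToSpeech.from_pretrained("jarguello76/text-to-speech-speecht5_finetuned_voxpopuli_es")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector speaker embedding works here; index 7306 is just a common example
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hola, esto es una prueba de síntesis de voz.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D tensor of 16 kHz audio samples; write it out with soundfile if needed
```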
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5089 | 4.32 | 1000 | 0.4632 |
| 0.4835 | 8.64 | 2000 | 0.4512 |
| 0.4862 | 12.97 | 3000 | 0.4487 |
| 0.4825 | 17.29 | 4000 | 0.4464 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3 |
Geotrend/distilbert-base-en-pt-cased | Geotrend | 2023-07-28T16:13:06Z | 130 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-pt-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-pt-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any questions, feedback, or requests. |
alifaheem94/RomanUrduPOS | alifaheem94 | 2023-07-28T16:11:41Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-28T14:52:23Z | ---
pipeline_tag: token-classification
---
This model is trained on a Roman Urdu part-of-speech (POS) tagging dataset of 100,000 tokens.
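A minimal tagging sketch with the `transformers` token-classification pipeline (the example sentence is a placeholder; the label set is not documented here):
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="alifaheem94/RomanUrduPOS", aggregation_strategy="simple")

# Hypothetical Roman Urdu sentence, used only to show the call pattern
for tag in tagger("mujhe yeh model bohat pasand aya"):
    print(tag["word"], tag["entity_group"], round(tag["score"], 3))
```
|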
edures/Reinforce-vtest | edures | 2023-07-28T16:07:14Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T16:02:45Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-vtest
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
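Unit 4 implements REINFORCE from scratch; for orientation, here is a minimal PyTorch sketch of the policy and Monte-Carlo policy-gradient loss it covers (a simplification, not the author's exact training code):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """Small MLP policy: observation -> action probabilities."""
    def __init__(self, obs_size, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_size, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, obs):
        return F.softmax(self.net(obs), dim=-1)

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE objective: -sum_t log pi(a_t|s_t) * G_t over one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize for variance reduction
    return -(torch.stack(log_probs) * returns).sum()
```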
|
azhang1212/angela_shuffle_test | azhang1212 | 2023-07-28T15:49:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-28T14:28:54Z | ---
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: angela_shuffle_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# angela_shuffle_test
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1672
- Precision: 0.6214
- Recall: 0.4942
- F1: 0.5505
- Accuracy: 0.9504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1882 | 1.0 | 1283 | 0.1566 | 0.6823 | 0.4277 | 0.5258 | 0.9518 |
| 0.1551 | 2.0 | 2566 | 0.1507 | 0.6940 | 0.4451 | 0.5423 | 0.9533 |
| 0.1385 | 3.0 | 3849 | 0.1545 | 0.6903 | 0.4503 | 0.5450 | 0.9532 |
| 0.1163 | 4.0 | 5132 | 0.1610 | 0.6288 | 0.4943 | 0.5535 | 0.9507 |
| 0.0994 | 5.0 | 6415 | 0.1672 | 0.6214 | 0.4942 | 0.5505 | 0.9504 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
avidoavid/RWKV-14b-finetuned | avidoavid | 2023-07-28T15:49:05Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:RWKV/rwkv-raven-14b",
"base_model:finetune:RWKV/rwkv-raven-14b",
"region:us"
] | null | 2023-07-28T15:07:06Z | ---
base_model: RWKV/rwkv-raven-14b
tags:
- generated_from_trainer
model-index:
- name: RWKV-14b-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RWKV-14b-finetuned
This model is a fine-tuned version of [RWKV/rwkv-raven-14b](https://huggingface.co/RWKV/rwkv-raven-14b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0926 | 1.0 | 1 | 0.9535 |
| 1.0628 | 2.0 | 2 | 2.0585 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5 | jordyvl | 2023-07-28T15:40:52Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-28T07:51:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4315
- Accuracy: 0.8522
- Brier Loss: 0.2145
- Nll: 1.3474
- F1 Micro: 0.8522
- F1 Macro: 0.8535
- Ece: 0.0573
- Aurc: 0.0300
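For reference, a minimal document-classification sketch using the `transformers` image-classification pipeline (illustrative only; the image path is a placeholder and the label names come from the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5")

# "document.png" is a placeholder path to a scanned document image
for pred in classifier("document.png", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```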
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 167 | 1.6705 | 0.6378 | 0.4837 | 2.4248 | 0.6378 | 0.6323 | 0.0655 | 0.1457 |
| No log | 2.0 | 334 | 1.1423 | 0.7322 | 0.3740 | 1.9847 | 0.7322 | 0.7285 | 0.0695 | 0.0846 |
| 1.7909 | 3.0 | 501 | 0.9082 | 0.7682 | 0.3248 | 1.7674 | 0.7682 | 0.7676 | 0.0620 | 0.0642 |
| 1.7909 | 4.0 | 668 | 0.8494 | 0.7865 | 0.3082 | 1.7306 | 0.7865 | 0.7904 | 0.0665 | 0.0560 |
| 1.7909 | 5.0 | 835 | 0.7837 | 0.798 | 0.2988 | 1.6072 | 0.798 | 0.7953 | 0.0729 | 0.0553 |
| 0.4994 | 6.0 | 1002 | 0.6867 | 0.804 | 0.2862 | 1.5014 | 0.804 | 0.8059 | 0.0794 | 0.0471 |
| 0.4994 | 7.0 | 1169 | 0.7037 | 0.8157 | 0.2797 | 1.5533 | 0.8157 | 0.8178 | 0.0807 | 0.0478 |
| 0.4994 | 8.0 | 1336 | 0.6709 | 0.8163 | 0.2756 | 1.5297 | 0.8163 | 0.8166 | 0.0728 | 0.0478 |
| 0.2478 | 9.0 | 1503 | 0.6132 | 0.825 | 0.2576 | 1.4349 | 0.825 | 0.8247 | 0.0728 | 0.0398 |
| 0.2478 | 10.0 | 1670 | 0.6389 | 0.8235 | 0.2671 | 1.4455 | 0.8235 | 0.8266 | 0.0746 | 0.0419 |
| 0.2478 | 11.0 | 1837 | 0.6043 | 0.8257 | 0.2585 | 1.4609 | 0.8257 | 0.8293 | 0.0752 | 0.0403 |
| 0.1683 | 12.0 | 2004 | 0.5639 | 0.8327 | 0.2457 | 1.4470 | 0.8327 | 0.8350 | 0.0676 | 0.0375 |
| 0.1683 | 13.0 | 2171 | 0.5665 | 0.8317 | 0.2508 | 1.4054 | 0.8317 | 0.8324 | 0.0731 | 0.0388 |
| 0.1683 | 14.0 | 2338 | 0.5505 | 0.8403 | 0.2427 | 1.4059 | 0.8403 | 0.8408 | 0.0649 | 0.0377 |
| 0.131 | 15.0 | 2505 | 0.5321 | 0.836 | 0.2428 | 1.4078 | 0.836 | 0.8372 | 0.0684 | 0.0365 |
| 0.131 | 16.0 | 2672 | 0.5161 | 0.8373 | 0.2383 | 1.3900 | 0.8373 | 0.8373 | 0.0711 | 0.0368 |
| 0.131 | 17.0 | 2839 | 0.5177 | 0.8403 | 0.2371 | 1.3828 | 0.8403 | 0.8413 | 0.0633 | 0.0354 |
| 0.1071 | 18.0 | 3006 | 0.5113 | 0.8407 | 0.2377 | 1.3832 | 0.8407 | 0.8432 | 0.0718 | 0.0343 |
| 0.1071 | 19.0 | 3173 | 0.4949 | 0.8415 | 0.2332 | 1.3767 | 0.8415 | 0.8428 | 0.0667 | 0.0338 |
| 0.1071 | 20.0 | 3340 | 0.4857 | 0.848 | 0.2271 | 1.3664 | 0.848 | 0.8492 | 0.0615 | 0.0338 |
| 0.0877 | 21.0 | 3507 | 0.4812 | 0.847 | 0.2283 | 1.3360 | 0.847 | 0.8478 | 0.0602 | 0.0346 |
| 0.0877 | 22.0 | 3674 | 0.4715 | 0.8495 | 0.2243 | 1.3761 | 0.8495 | 0.8506 | 0.0560 | 0.0320 |
| 0.0877 | 23.0 | 3841 | 0.4622 | 0.8508 | 0.2206 | 1.3584 | 0.8508 | 0.8515 | 0.0557 | 0.0323 |
| 0.0694 | 24.0 | 4008 | 0.4432 | 0.8515 | 0.2167 | 1.3653 | 0.8515 | 0.8531 | 0.0555 | 0.0309 |
| 0.0694 | 25.0 | 4175 | 0.4467 | 0.8498 | 0.2193 | 1.3499 | 0.8498 | 0.8512 | 0.0581 | 0.0309 |
| 0.0694 | 26.0 | 4342 | 0.4412 | 0.8545 | 0.2162 | 1.3535 | 0.8545 | 0.8560 | 0.0534 | 0.0306 |
| 0.0586 | 27.0 | 4509 | 0.4402 | 0.8498 | 0.2180 | 1.3390 | 0.8498 | 0.8510 | 0.0597 | 0.0309 |
| 0.0586 | 28.0 | 4676 | 0.4408 | 0.8522 | 0.2174 | 1.3568 | 0.8522 | 0.8536 | 0.0576 | 0.0306 |
| 0.0586 | 29.0 | 4843 | 0.4391 | 0.851 | 0.2168 | 1.3429 | 0.851 | 0.8523 | 0.0585 | 0.0305 |
| 0.0549 | 30.0 | 5010 | 0.4371 | 0.853 | 0.2160 | 1.3389 | 0.853 | 0.8543 | 0.0573 | 0.0303 |
| 0.0549 | 31.0 | 5177 | 0.4382 | 0.8498 | 0.2168 | 1.3486 | 0.8498 | 0.8513 | 0.0602 | 0.0304 |
| 0.0549 | 32.0 | 5344 | 0.4372 | 0.853 | 0.2166 | 1.3501 | 0.853 | 0.8540 | 0.0591 | 0.0306 |
| 0.0527 | 33.0 | 5511 | 0.4379 | 0.852 | 0.2156 | 1.3546 | 0.852 | 0.8531 | 0.0576 | 0.0304 |
| 0.0527 | 34.0 | 5678 | 0.4353 | 0.8532 | 0.2154 | 1.3381 | 0.8532 | 0.8543 | 0.0574 | 0.0302 |
| 0.0527 | 35.0 | 5845 | 0.4347 | 0.8525 | 0.2148 | 1.3550 | 0.8525 | 0.8535 | 0.0591 | 0.0304 |
| 0.0511 | 36.0 | 6012 | 0.4311 | 0.8542 | 0.2141 | 1.3233 | 0.8542 | 0.8552 | 0.0572 | 0.0299 |
| 0.0511 | 37.0 | 6179 | 0.4323 | 0.852 | 0.2150 | 1.3332 | 0.852 | 0.8532 | 0.0586 | 0.0302 |
| 0.0511 | 38.0 | 6346 | 0.4321 | 0.8515 | 0.2152 | 1.3382 | 0.8515 | 0.8527 | 0.0583 | 0.0299 |
| 0.0494 | 39.0 | 6513 | 0.4335 | 0.8495 | 0.2152 | 1.3385 | 0.8495 | 0.8511 | 0.0593 | 0.0303 |
| 0.0494 | 40.0 | 6680 | 0.4323 | 0.852 | 0.2146 | 1.3603 | 0.852 | 0.8533 | 0.0576 | 0.0299 |
| 0.0494 | 41.0 | 6847 | 0.4309 | 0.8512 | 0.2143 | 1.3448 | 0.8512 | 0.8525 | 0.0570 | 0.0299 |
| 0.0477 | 42.0 | 7014 | 0.4327 | 0.8525 | 0.2149 | 1.3439 | 0.8525 | 0.8539 | 0.0580 | 0.0299 |
| 0.0477 | 43.0 | 7181 | 0.4309 | 0.8532 | 0.2140 | 1.3406 | 0.8532 | 0.8544 | 0.0560 | 0.0299 |
| 0.0477 | 44.0 | 7348 | 0.4308 | 0.8528 | 0.2141 | 1.3404 | 0.8528 | 0.8540 | 0.0573 | 0.0299 |
| 0.0466 | 45.0 | 7515 | 0.4317 | 0.8525 | 0.2147 | 1.3402 | 0.8525 | 0.8538 | 0.0580 | 0.0299 |
| 0.0466 | 46.0 | 7682 | 0.4317 | 0.8535 | 0.2144 | 1.3475 | 0.8535 | 0.8547 | 0.0553 | 0.0298 |
| 0.0466 | 47.0 | 7849 | 0.4314 | 0.8525 | 0.2143 | 1.3479 | 0.8525 | 0.8537 | 0.0559 | 0.0299 |
| 0.0465 | 48.0 | 8016 | 0.4314 | 0.8525 | 0.2143 | 1.3479 | 0.8525 | 0.8538 | 0.0559 | 0.0299 |
| 0.0465 | 49.0 | 8183 | 0.4316 | 0.8528 | 0.2145 | 1.3471 | 0.8528 | 0.8540 | 0.0573 | 0.0299 |
| 0.0465 | 50.0 | 8350 | 0.4315 | 0.8522 | 0.2145 | 1.3474 | 0.8522 | 0.8535 | 0.0573 | 0.0300 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mauroluiz/Criativo | mauroluiz | 2023-07-28T15:30:39Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-28T15:16:30Z | Insanely detailed and elaborate jungle scene in a glass globe |
liuyt75/t5-large_prefix_tuning_sentences_75agree_10 | liuyt75 | 2023-07-28T15:28:53Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T15:28:51Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
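A minimal loading sketch; the base model (`t5-large`) is inferred from the repository name and is not stated on the card:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("t5-large")  # assumed base model
model = PeftModel.from_pretrained(base, "liuyt75/t5-large_prefix_tuning_sentences_75agree_10")
tokenizer = AutoTokenizer.from_pretrained("t5-large")
```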
|
Christabelle/thesis-concept-art | Christabelle | 2023-07-28T15:04:50Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-26T20:18:06Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Christabelle/thesis-concept-art
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the Christabelle/thesis-concept-art-train dataset. You can find some example images in the following.




|
budecosystem/genz-13b-v2 | budecosystem | 2023-07-28T14:51:28Z | 1,480 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-26T05:40:09Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
---
<div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div>
<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p>
---
## Introduction 🎉
Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 13B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/MTBench_CompareChart_28July2023.png" width="500"></p>
The smaller quantization version of our models makes them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology.
GenZ isn't just a powerful text generator—it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver.
And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey. 🚀
---
<h2>Milestone Releases ️🏁</h2>
**[27 July 2023]**
[_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inferencing using only CPU and without the need of GPU. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml).
**[27 July 2023]**
[_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation. Enabling inferencing with much lesser GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit).
**[26 July 2023]**
[_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2).
**[20 July 2023]**
[_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b).
---
<img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/screenshot_genz13bv2.png" width="100%">
|  |  | 
|:--:|:--:|:--:|
| *Code Generation* | *Poem Generation* | *Email Generation* |
<!--
<p align="center"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Python.gif" width="33%" alt="Python Code"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Poem.gif" width="33%"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Email.gif" width="33%"></p>
-->
<h2>Getting Started on Hugging Face 🤗</h2>
Getting up and running with our models on Hugging Face is a breeze. Follow these steps:
<h3>1️⃣ : Import necessary modules</h3>
Start by importing the necessary modules from the ‘transformers’ library and ‘torch’.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```
<h3>2️⃣ : Load the tokenizer and the model</h3>
Next, load up the tokenizer and the model for ‘budecosystem/genz-13b-v2’ from Hugging Face using the ‘from_pretrained’ method.
```python
tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b-v2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-13b-v2", torch_dtype=torch.bfloat16)
```
<h3>3️⃣ : Generate responses</h3>
Now that you have the model and tokenizer, you're ready to generate responses. Here's how you can do it:
```python
inputs = tokenizer("The meaning of life is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
In this example, "The meaning of life is" is the prompt template used for inference. You can replace it with any string you like.
Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the ‘generate.py’ script to try it out. Happy experimenting! 😄
<h2>Fine-tuning 🎯</h2>
It's time to upgrade the model by fine-tuning it. You can do this using our provided finetune.py script. Here's an example command:
```bash
python finetune.py \
--model_name meta-llama/Llama-2-13b \
--data_path dataset.json \
--output_dir output \
--trust_remote_code \
--prompt_column instruction \
--response_column output \
--pad_token_id 50256
```
---
<h2 >Bonus: Colab Notebooks 📚 <b><i>(WIP)</i></b></h2>
Looking for an even simpler way to get started with GenZ? We've got you covered. We've prepared a pair of detailed Colab notebooks - one for Inference and one for Fine-tuning. These notebooks come pre-filled with all the information and code you'll need. All you'll have to do is run them!
Keep an eye out for these notebooks. They'll be added to the repository soon!
---
<h2>Why Use GenZ? 💡</h2>
You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models.
While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge.
What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there.
But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself.
Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models.
---
<h2>Model Card for GenZ 13B 📄</h2>
Here's a quick overview of everything you need to know about GenZ 13B.
<h3>Model Details:</h3>
- Developed by: Bud Ecosystem
- Base pretrained model type: Llama V2 13B
- Model Architecture: GenZ 13B, fine-tuned on Llama V2 13B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 13B leveraged Supervised Fine-Tuning (SFT)
- License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
---
<h2>Intended Use 💼</h2>
When we created GenZ 13B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 13B.
<h3>Direct Use</h3>
GenZ 13B is designed to be a powerful tool for research on large language models. It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as:
- Text summarization
- Text generation
- Chatbot creation
- And much more!
<h3>Out-of-Scope Use 🚩</h3>
While GenZ 13B is versatile, there are certain uses that are out of scope:
- Production use without adequate assessment of risks and mitigation
- Any use cases which may be considered irresponsible or harmful
- Use in any manner that violates applicable laws or regulations, including trade compliance laws
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2
Remember, GenZ 13B, like any large language model, is trained on large-scale corpora representative of the web and may therefore carry the stereotypes and biases commonly encountered online.
<h3>Recommendations 🧠</h3>
We recommend users of GenZ 13B to consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 13B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.
---
<h2>Training Details 📚</h2>
When fine-tuning GenZ 13B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 13B model in the most effective way. Here's a look at the key details of our training process:
<h3>Fine-Tuning Training Data</h3>
For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks.
<h3>Fine-Tuning Procedure</h3>
We performed a full-parameter fine-tuning using Supervised Fine-Tuning (SFT). This was carried out on 4 A100 80GB GPUs, and the process took under 100 hours. To make the process more efficient, we used DeepSpeed's ZeRO-3 optimization.
<h3>Tokenizer</h3>
We used the SentencePiece tokenizer during the fine-tuning process. This tokenizer is known for its capability to handle open-vocabulary language tasks efficiently.
<h3>Hyperparameters</h3>
Here are the hyperparameters we used for fine-tuning:
| Hyperparameter | Value |
| -------------- | ----- |
| Warmup Ratio | 0.04 |
| Learning Rate Scheduler Type | Cosine |
| Learning Rate | 2e-5 |
| Number of Training Epochs | 3 |
| Per Device Training Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Precision | FP16 |
| Optimizer | AdamW |
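For illustration, this is roughly how those values would map onto `transformers.TrainingArguments` (a sketch only; the actual run used the repository's finetune.py with DeepSpeed ZeRO-3, so the real configuration differs):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="genz-13b-sft",        # placeholder output directory
    warmup_ratio=0.04,
    lr_scheduler_type="cosine",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    fp16=True,                        # FP16 precision
    optim="adamw_torch",              # AdamW optimizer
)
```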
---
<h2>Evaluations 🎯</h2>
Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 13B:
<h3>Benchmark Comparison</h3>
We've compared GenZ V1 with V2 to understand the improvements our fine-tuning has achieved.
| Model Name | MT Bench | Vicuna Bench | MMLU | Human Eval | Hellaswag | BBH |
|:----------:|:--------:|:------------:|:----:|:----------:|:---------:|:----:|
| Genz 13B | 6.12 | 86.1 | 53.62| 17.68 | 77.38 | 37.76|
| Genz 13B v2| 6.79 | 87.2 | 53.68| 21.95 | 77.48 | 38.1 |
<h3>MT Bench Score</h3>
A key evaluation metric we use is the MT Bench score. This score provides a comprehensive assessment of our model's performance across a range of tasks.
We're proud to say that our model performs at a level close to the Llama-70B-chat model on MT Bench and tops the list among 13B models.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p>
In the transition from GenZ V1 to V2, we noticed some fascinating performance shifts. While we saw a slight dip in coding performance, two other areas, Roleplay and Math, saw noticeable improvements.
---
<h2>Looking Ahead 👀</h2>
We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together.
Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI. 🚀
---
Check the GitHub for the code -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ) |
vnktrmnb/bert-base-multilingual-cased-finetuned-TyDiQA_AUQC | vnktrmnb | 2023-07-28T14:46:12Z | 62 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-28T13:18:33Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/bert-base-multilingual-cased-finetuned-TyDiQA_AUQC
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/bert-base-multilingual-cased-finetuned-TyDiQA_AUQC
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5563
- Train End Logits Accuracy: 0.6377
- Train Start Logits Accuracy: 0.6615
- Validation Loss: 0.5213
- Validation End Logits Accuracy: 0.8376
- Validation Start Logits Accuracy: 0.8905
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 804, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
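The optimizer entry above corresponds to roughly the following Keras objects (a reconstruction for readability, with values copied from the config dump; this code is not shipped with the model):
```python
import tensorflow as tf

# Linear (power=1.0) polynomial decay from 2e-05 to 0 over 804 steps
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=804, end_learning_rate=0.0, power=1.0, cycle=False
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```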
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5563 | 0.6377 | 0.6615 | 0.5213 | 0.8376 | 0.8905 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
cenkersisman/gpt2-turkish-10m | cenkersisman | 2023-07-28T14:43:04Z | 139 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"tr",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-27T14:24:17Z | ---
widget:
- text: 'cengiz'
example_title: Örnek 1
language:
- tr
---
# Model Card for Model ID
Model Card for GPT-2 Turkish Model
## Model Details
### Model Description
GPT-2 Turkish Model is a language model based on the GPT-2 architecture, fine-tuned for the Turkish language. It is capable of generating human-like text based on a given prompt and has been trained on a large corpus of Turkish text data.
- **Developed by:** Cenker Sisman
- **Model type:**
- **Language(s) (NLP):**
- **License:**
- **Finetuned from model :** GPT-2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Limitations and Biases
This model was trained as an autoregressive language model, meaning its core function is to take a sequence of text and predict the next token. Although language models are widely used for many tasks beyond this, there are still many unknowns about this line of work.
The model was trained on a dataset known to contain profanity, explicit content, and otherwise objectionable text. Depending on your use case, it may produce socially unacceptable output.
As with all language models, it is hard to predict in advance how this model will respond to a given prompt, and offensive content can appear without warning. We recommend having humans review or filter the outputs before publishing them, both to censor unwanted content and to improve their quality.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
```python
# Example code for inference with the model
from transformers import GPT2Tokenizer, GPT2LMHeadModel
model_name = "cenkersisman/gpt2-turkish" # Change to the model name you have uploaded to Hugging Face
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
prompt = "cengiz"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZhiguangHan/test-clm | ZhiguangHan | 2023-07-28T14:42:03Z | 182 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T14:02:13Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: test-clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-clm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6319
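For reference, a minimal generation sketch with the `transformers` pipeline (the prompt is a placeholder; illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ZhiguangHan/test-clm")

print(generator("Once upon a time", max_new_tokens=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```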
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5547 | 1.0 | 2334 | 3.6373 |
| 3.4926 | 2.0 | 4668 | 3.6361 |
| 3.4692 | 3.0 | 7002 | 3.6319 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
greg-szopinski/Reinforce-pixelcopter-128 | greg-szopinski | 2023-07-28T14:38:21Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T14:36:26Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-128
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.80 +/- 16.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sailoromkar/q-taxi | sailoromkar | 2023-07-28T14:31:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T14:31:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the Q-table dictionary from the Hub).
model = load_from_hub(repo_id="sailoromkar/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
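Continuing from the snippet above, a minimal greedy rollout sketch; it assumes the pickled dictionary exposes a `qtable` array (as the course notebooks do) and a Gym version with the 5-value `step` API:
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0

while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward

print("Episode return:", total_reward)
```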
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-23_g025 | jordyvl | 2023-07-28T14:26:36Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-23T21:50:37Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-23_g025
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-23_g025
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.9394
- Exit 0 Accuracy: 0.2721
- Exit 1 Accuracy: 0.4875
- Exit 2 Accuracy: 0.8051
- Exit 3 Accuracy: 0.8840
- Exit 4 Accuracy: 0.9339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| 0.5909 | 1.0 | 2222 | 0.2945 | 0.9158 | 0.2021 | 0.3569 | 0.7091 | 0.8143 | 0.9092 |
| 0.4951 | 2.0 | 4444 | 0.2469 | 0.9292 | 0.2262 | 0.4336 | 0.7677 | 0.8614 | 0.9258 |
| 0.4279 | 3.0 | 6666 | 0.2281 | 0.9336 | 0.2530 | 0.4682 | 0.7898 | 0.8768 | 0.9302 |
| 0.39 | 4.0 | 8888 | 0.2241 | 0.9385 | 0.2600 | 0.483 | 0.8008 | 0.8827 | 0.9328 |
| 0.3602 | 5.0 | 11110 | 0.2244 | 0.9394 | 0.2721 | 0.4875 | 0.8051 | 0.8840 | 0.9339 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Pierre-Arthur/distilroberta_base_eurolex_mlm_model | Pierre-Arthur | 2023-07-28T14:22:58Z | 171 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eurlex_resources",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-28T11:29:42Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- eurlex_resources
model-index:
- name: distilroberta_base_eurolex_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta_base_eurolex_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the eurlex_resources dataset.
It achieves the following results on the evaluation set:
- Loss: nan
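A minimal fill-mask sketch (illustrative only; note the reported loss is `nan`, so verify the outputs):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Pierre-Arthur/distilroberta_base_eurolex_mlm_model")

# RoBERTa-style checkpoints use <mask>; the sentence is a placeholder
for pred in fill("The European <mask> adopted the regulation in 2019."):
    print(pred["token_str"], round(pred["score"], 3))
```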
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | nan |
| No log | 2.0 | 76 | nan |
| No log | 3.0 | 114 | nan |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Lajonbot/tableBeluga-7B-instruct-pl-lora_unload | Lajonbot | 2023-07-28T14:12:09Z | 1,396 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T13:59:47Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
twbrandon7/rl-course-unit1 | twbrandon7 | 2023-07-28T14:02:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T14:01:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.84 +/- 17.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repository's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename: verify against the files in this repository
checkpoint = load_from_hub(repo_id="twbrandon7/rl-course-unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Lajonbot/tableBeluga-7B-instruct-pl-lora_GGML | Lajonbot | 2023-07-28T13:59:47Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] | text-generation | 2023-07-28T13:49:06Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
StereoLuigi99/ali-gungor | StereoLuigi99 | 2023-07-28T13:55:00Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-28T13:44:42Z | # Ali GÜNGÖR AI Voice (200 Epochs / Harvest)
Created using RVC v2.
I made this so Recep Baltaş would not be left alone; use it for speech.
## Features
- 38 minutes of Ali GÜNGÖR's voice were used.
- Made with RVC v2 on Google Colab: https://colab.research.google.com/drive/1r4IRL0UA7JEoZ0ZK8PKfMyTIBHKpyhcw
- Run with the Harvest method.
- Trained for 200 epochs. |
Naruke/ppo-Pyramidsv1 | Naruke | 2023-07-28T13:50:56Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-28T13:25:08Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Naruke/ppo-Pyramidsv1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Lajonbot/tableBeluga-7B-instruct-pl-lora_adapter_model | Lajonbot | 2023-07-28T13:49:05Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] | text-generation | 2023-07-28T13:49:04Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
FinchResearch/llama2-archimedes-7b-lora | FinchResearch | 2023-07-28T13:47:43Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T13:47:36Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
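The same quantization settings expressed as a `BitsAndBytesConfig`, plus loading the adapter with PEFT. This is a sketch only: the base model is inferred from the repository name and is not stated on the card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror of the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed Llama-2-7B base (inferred from the repo name)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "FinchResearch/llama2-archimedes-7b-lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```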
### Framework versions
- PEFT 0.5.0.dev0
|
HaziqRazali/ppo-Pyramids | HaziqRazali | 2023-07-28T13:45:47Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-28T13:45:14Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HaziqRazali/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
LarryAIDraw/bluearchive_haruna-10 | LarryAIDraw | 2023-07-28T13:42:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-28T13:32:21Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/52253/kurodate-haruna-or-blue-archive |
LarryAIDraw/narb-overlord-13 | LarryAIDraw | 2023-07-28T13:42:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-28T13:31:14Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/117294/narberal-gamma-overlord |
LarryAIDraw/sakurako1-000006 | LarryAIDraw | 2023-07-28T13:41:56Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-28T13:30:55Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/114592/utazumi-sakurako-blue-archive-or-character-lora-344 |
LarryAIDraw/fymt | LarryAIDraw | 2023-07-28T13:41:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-28T13:29:55Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/96806/arknights-fiammetta |
Envertime/test_deliberate_v2 | Envertime | 2023-07-28T13:39:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-28T13:39:44Z | ---
license: creativeml-openrail-m
---
|
NasimB/aochildes-rarity-seed | NasimB | 2023-07-28T13:37:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T04:40:59Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1164
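For context, the reported loss corresponds to a perplexity of roughly exp(4.1164) ≈ 61, and the checkpoint can be tried with a standard text-generation pipeline; the prompt below is only an illustration.

```python
import math
from transformers import pipeline

print(math.exp(4.1164))  # ≈ 61.3, the validation perplexity implied by the loss above

generator = pipeline("text-generation", model="NasimB/aochildes-rarity-seed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```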
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3514 | 0.29 | 500 | 5.3390 |
| 5.0369 | 0.59 | 1000 | 4.9222 |
| 4.7211 | 0.88 | 1500 | 4.6884 |
| 4.4532 | 1.17 | 2000 | 4.5398 |
| 4.3029 | 1.47 | 2500 | 4.4318 |
| 4.2095 | 1.76 | 3000 | 4.3295 |
| 4.0772 | 2.05 | 3500 | 4.2615 |
| 3.9042 | 2.35 | 4000 | 4.2130 |
| 3.8732 | 2.64 | 4500 | 4.1604 |
| 3.8358 | 2.93 | 5000 | 4.1110 |
| 3.641 | 3.23 | 5500 | 4.1105 |
| 3.5952 | 3.52 | 6000 | 4.0799 |
| 3.5797 | 3.81 | 6500 | 4.0466 |
| 3.465 | 4.11 | 7000 | 4.0458 |
| 3.3242 | 4.4 | 7500 | 4.0451 |
| 3.3146 | 4.69 | 8000 | 4.0309 |
| 3.3112 | 4.99 | 8500 | 4.0183 |
| 3.1524 | 5.28 | 9000 | 4.0325 |
| 3.1343 | 5.57 | 9500 | 4.0319 |
| 3.1354 | 5.87 | 10000 | 4.0309 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
JBJoyce/speecht5_finetuned_voxpopuli_sl | JBJoyce | 2023-07-28T13:37:51Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-28T12:28:14Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_sl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_sl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4393
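A minimal text-to-speech sketch for this checkpoint is shown below. The speaker embedding source (CMU Arctic x-vectors) and the Slovenian sample sentence are assumptions for illustration, not something this card specifies.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "JBJoyce/speecht5_finetuned_voxpopuli_sl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: reuse an x-vector from the CMU Arctic dataset as the speaker embedding.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Dober dan!", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```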
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4886 | 21.51 | 1000 | 0.4456 |
| 0.4706 | 43.01 | 2000 | 0.4393 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
liuyt75/t5-large_prefix_tuning_sentences_66agree_10 | liuyt75 | 2023-07-28T13:35:44Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T13:35:42Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
youlun77/DistilBERT-TC2000-10epochs | youlun77 | 2023-07-28T13:30:50Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-28T13:22:29Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- recall
- precision
- f1
model-index:
- name: DistilBERT-TC2000-10epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-TC2000-10epochs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0752
- Recall: {'recall': 0.98}
- Precision: {'precision': 0.9803145941921073}
- F1: {'f1': 0.9800242537313432}
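A minimal inference sketch is shown below. Since the card does not document the dataset or label names, the pipeline may return generic LABEL_0-style ids unless the config carries an id2label mapping.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="youlun77/DistilBERT-TC2000-10epochs")
print(classifier("Example sentence to classify."))
# Without an id2label mapping in the config, outputs look like [{'label': 'LABEL_0', 'score': ...}]
```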
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Precision | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:---------------------------------:|:--------------------------:|
| 1.0272 | 0.18 | 20 | 0.8815 | {'recall': 0.65} | {'precision': 0.7778791777580597} | {'f1': 0.6251215862860073} |
| 0.8663 | 0.35 | 40 | 0.6770 | {'recall': 0.905} | {'precision': 0.9120308312976535} | {'f1': 0.9054010850819201} |
| 0.6016 | 0.53 | 60 | 0.4088 | {'recall': 0.92} | {'precision': 0.9238949736347314} | {'f1': 0.9207242314918276} |
| 0.3139 | 0.71 | 80 | 0.2508 | {'recall': 0.93} | {'precision': 0.9322386382325532} | {'f1': 0.929768888773222} |
| 0.2645 | 0.88 | 100 | 0.2048 | {'recall': 0.955} | {'precision': 0.958280303030303} | {'f1': 0.954923196771023} |
| 0.1811 | 1.06 | 120 | 0.1446 | {'recall': 0.965} | {'precision': 0.9675925925925927} | {'f1': 0.9648852158183796} |
| 0.1429 | 1.24 | 140 | 0.1245 | {'recall': 0.975} | {'precision': 0.9762354497354496} | {'f1': 0.9749193929610656} |
| 0.0941 | 1.42 | 160 | 0.1338 | {'recall': 0.965} | {'precision': 0.9683561643835616} | {'f1': 0.9652805623632961} |
| 0.1242 | 1.59 | 180 | 0.0872 | {'recall': 0.975} | {'precision': 0.9759505494505496} | {'f1': 0.9750344590666455} |
| 0.0893 | 1.77 | 200 | 0.0572 | {'recall': 0.985} | {'precision': 0.9853867102396515} | {'f1': 0.9849564819176908} |
| 0.0477 | 1.95 | 220 | 0.0794 | {'recall': 0.975} | {'precision': 0.9762354497354496} | {'f1': 0.9749193929610656} |
| 0.0128 | 2.12 | 240 | 0.0697 | {'recall': 0.98} | {'precision': 0.9807447665056361} | {'f1': 0.9799368665956859} |
| 0.0449 | 2.3 | 260 | 0.0635 | {'recall': 0.97} | {'precision': 0.9725} | {'f1': 0.9702302752172594} |
| 0.0996 | 2.48 | 280 | 0.0782 | {'recall': 0.97} | {'precision': 0.9725} | {'f1': 0.9700752508361203} |
| 0.0328 | 2.65 | 300 | 0.0127 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949962534538471} |
| 0.0747 | 2.83 | 320 | 0.0380 | {'recall': 0.975} | {'precision': 0.9767605633802816} | {'f1': 0.9751792302987906} |
| 0.0413 | 3.01 | 340 | 0.0127 | {'recall': 1.0} | {'precision': 1.0} | {'f1': 1.0} |
| 0.0404 | 3.19 | 360 | 0.0120 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949915278995033} |
| 0.0226 | 3.36 | 380 | 0.0085 | {'recall': 1.0} | {'precision': 1.0} | {'f1': 1.0} |
| 0.0543 | 3.54 | 400 | 0.0139 | {'recall': 0.995} | {'precision': 0.9950925925925926} | {'f1': 0.9950042805165157} |
| 0.0528 | 3.72 | 420 | 0.0408 | {'recall': 0.985} | {'precision': 0.9856521739130435} | {'f1': 0.9850251572327045} |
| 0.0051 | 3.89 | 440 | 0.0808 | {'recall': 0.97} | {'precision': 0.9725} | {'f1': 0.9702302752172594} |
| 0.014 | 4.07 | 460 | 0.0419 | {'recall': 0.985} | {'precision': 0.985241846323936} | {'f1': 0.985017255463425} |
| 0.051 | 4.25 | 480 | 0.0127 | {'recall': 0.995} | {'precision': 0.9950925925925926} | {'f1': 0.9950042805165157} |
| 0.0501 | 4.42 | 500 | 0.0200 | {'recall': 0.985} | {'precision': 0.9850867537313434} | {'f1': 0.985009807126512} |
| 0.0062 | 4.6 | 520 | 0.0247 | {'recall': 0.985} | {'precision': 0.985241846323936} | {'f1': 0.985017255463425} |
| 0.0118 | 4.78 | 540 | 0.0614 | {'recall': 0.975} | {'precision': 0.975962157809984} | {'f1': 0.975047977706797} |
| 0.0348 | 4.96 | 560 | 0.0516 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0226 | 5.13 | 580 | 0.0144 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949962534538471} |
| 0.0159 | 5.31 | 600 | 0.0129 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949962534538471} |
| 0.0026 | 5.49 | 620 | 0.0176 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949962534538471} |
| 0.016 | 5.66 | 640 | 0.0404 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0433 | 5.84 | 660 | 0.0663 | {'recall': 0.975} | {'precision': 0.9756772575250836} | {'f1': 0.975041928721174} |
| 0.0354 | 6.02 | 680 | 0.0253 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949962534538471} |
| 0.0041 | 6.19 | 700 | 0.0961 | {'recall': 0.97} | {'precision': 0.9711688311688311} | {'f1': 0.9700614296351452} |
| 0.0579 | 6.37 | 720 | 0.1336 | {'recall': 0.965} | {'precision': 0.966783728687917} | {'f1': 0.9650813612906225} |
| 0.0025 | 6.55 | 740 | 0.0424 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0328 | 6.73 | 760 | 0.0190 | {'recall': 0.995} | {'precision': 0.995060975609756} | {'f1': 0.9949962534538471} |
| 0.0217 | 6.9 | 780 | 0.0488 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0096 | 7.08 | 800 | 0.1115 | {'recall': 0.97} | {'precision': 0.9711688311688311} | {'f1': 0.9700614296351452} |
| 0.0106 | 7.26 | 820 | 0.0673 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0077 | 7.43 | 840 | 0.0354 | {'recall': 0.985} | {'precision': 0.9850867537313434} | {'f1': 0.985009807126512} |
| 0.0222 | 7.61 | 860 | 0.0410 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0026 | 7.79 | 880 | 0.0590 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0576 | 7.96 | 900 | 0.0596 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.018 | 8.14 | 920 | 0.0428 | {'recall': 0.985} | {'precision': 0.9850867537313434} | {'f1': 0.985009807126512} |
| 0.027 | 8.32 | 940 | 0.0425 | {'recall': 0.985} | {'precision': 0.9850867537313434} | {'f1': 0.985009807126512} |
| 0.036 | 8.5 | 960 | 0.0341 | {'recall': 0.985} | {'precision': 0.9850867537313434} | {'f1': 0.985009807126512} |
| 0.0094 | 8.67 | 980 | 0.0457 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0192 | 8.85 | 1000 | 0.0586 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.03 | 9.03 | 1020 | 0.0789 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0091 | 9.2 | 1040 | 0.0691 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0197 | 9.38 | 1060 | 0.0753 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0025 | 9.56 | 1080 | 0.0796 | {'recall': 0.975} | {'precision': 0.9756772575250836} | {'f1': 0.975041928721174} |
| 0.0414 | 9.73 | 1100 | 0.0791 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
| 0.0075 | 9.91 | 1120 | 0.0756 | {'recall': 0.98} | {'precision': 0.9803145941921073} | {'f1': 0.9800242537313432} |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
w601sxs/b1ade-1b-orca-chkpt-563k | w601sxs | 2023-07-28T13:30:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T13:30:09Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Evan-Lin/Bart-Yelp-rougelastbatch2-attractive1-keywordmax1-len0 | Evan-Lin | 2023-07-28T13:25:27Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2023-07-28T13:23:43Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-Yelp-rougelastbatch2-attractive1-keywordmax1-len0")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-Yelp-rougelastbatch2-attractive1-keywordmax1-len0")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-Yelp-rougelastbatch2-attractive1-keywordmax1-len0")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
WforGodot/add-lora-1b | WforGodot | 2023-07-28T13:24:40Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T13:10:39Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
efainman/Pyramids | efainman | 2023-07-28T13:20:59Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-28T13:20:45Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: efainman/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
stefaniftime/bert-finetuned-squad | stefaniftime | 2023-07-28T13:17:20Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-28T10:29:03Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: stefaniftime/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stefaniftime/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2718
- Epoch: 0
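A minimal extractive question-answering sketch is shown below; the question and context are placeholders, and since this repository appears to ship TensorFlow weights, TensorFlow may need to be installed for the pipeline to load it.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="stefaniftime/bert-finetuned-squad")
result = qa(
    question="What does the model predict?",
    context="This BERT model was fine-tuned to extract answer spans from a given context.",
)
print(result["answer"], result["score"])
```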
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2718 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
JAdeojo/xlm-roberta-large-lora-consumer-complaints-cfpb_checkpoint2 | JAdeojo | 2023-07-28T13:13:13Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T13:13:07Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
liuyt75/t5-large_prefix_tuning_sentences_66agree_5 | liuyt75 | 2023-07-28T13:07:09Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T13:07:08Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jcy204/cold_model | jcy204 | 2023-07-28T13:06:43Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-28T13:01:33Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: jcy204/cold_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jcy204/cold_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3079
- Validation Loss: 0.6510
- Train Accuracy: 0.7604
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3185, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6912 | 0.5784 | 0.7513 | 0 |
| 0.4713 | 0.5637 | 0.7641 | 1 |
| 0.3079 | 0.6510 | 0.7604 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
guyhadad01/ppo-LunarLander-v2 | guyhadad01 | 2023-07-28T13:06:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T13:05:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.00 +/- 17.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
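A minimal loading sketch under the usual huggingface_sb3 conventions might look like this; the checkpoint filename is an assumption and should be checked against the repository's file list.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint follows the usual "<algo>-<env>.zip" naming; check the repo's file list.
checkpoint = load_from_hub(
    repo_id="guyhadad01/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint, print_system_info=True)
```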
|
elvhens/ryujin-500 | elvhens | 2023-07-28T13:01:27Z | 0 | 0 | null | [
"kpop",
"itzy",
"ryujin",
"ko",
"region:us"
] | null | 2023-07-28T11:15:18Z | ---
language:
- ko
tags:
- kpop
- itzy
- ryujin
--- |
iworeushankaonce/whisper-tiny-en | iworeushankaonce | 2023-07-28T12:49:14Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-28T12:18:50Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.36481700118063753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6411
- Wer Ortho: 0.3621
- Wer: 0.3648
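A minimal transcription sketch with this checkpoint is shown below; the audio path is a placeholder for a local recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="iworeushankaonce/whisper-tiny-en")
print(asr("sample.wav"))  # placeholder path to a local speech recording
```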
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0008 | 17.86 | 500 | 0.6411 | 0.3621 | 0.3648 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Maldopast/distilhubert-finetuned-gtzan | Maldopast | 2023-07-28T12:49:12Z | 157 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-28T12:30:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7537
- Accuracy: 0.88
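A minimal sketch for scoring a clip with this genre classifier is shown below; the clip path is a placeholder.

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="Maldopast/distilhubert-finetuned-gtzan")
print(clf("song_clip.wav", top_k=3))  # placeholder path; returns GTZAN genre scores
```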
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9647 | 1.0 | 113 | 1.8614 | 0.52 |
| 1.3987 | 2.0 | 226 | 1.3098 | 0.61 |
| 0.8809 | 3.0 | 339 | 0.8631 | 0.76 |
| 0.7643 | 4.0 | 452 | 0.8114 | 0.77 |
| 0.5958 | 5.0 | 565 | 0.7013 | 0.81 |
| 0.4405 | 6.0 | 678 | 0.5860 | 0.84 |
| 0.2183 | 7.0 | 791 | 0.6114 | 0.82 |
| 0.1587 | 8.0 | 904 | 0.5141 | 0.85 |
| 0.0899 | 9.0 | 1017 | 0.4760 | 0.87 |
| 0.0575 | 10.0 | 1130 | 0.5759 | 0.86 |
| 0.0647 | 11.0 | 1243 | 0.6467 | 0.86 |
| 0.0061 | 12.0 | 1356 | 0.6372 | 0.88 |
| 0.0029 | 13.0 | 1469 | 0.6721 | 0.88 |
| 0.0018 | 14.0 | 1582 | 0.7565 | 0.89 |
| 0.0013 | 15.0 | 1695 | 0.7537 | 0.88 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
FiouReia/my_awesome_model | FiouReia | 2023-07-28T12:44:03Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-28T10:16:24Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: FiouReia/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FiouReia/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0326
- Validation Loss: 0.2450
- Train Accuracy: 0.934
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1321 | 0.1917 | 0.928 | 0 |
| 0.0649 | 0.2146 | 0.9351 | 1 |
| 0.0326 | 0.2450 | 0.934 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
ChristophBu/ppo-LunarLander-v2 | ChristophBu | 2023-07-28T12:38:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T11:02:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.34 +/- 18.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
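As with other Stable-Baselines3 exports, loading and evaluating the policy could look roughly like the sketch below; the checkpoint filename is an assumption.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed; LunarLander-v2 requires gymnasium[box2d].
checkpoint = load_from_hub("ChristophBu/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```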
|
Isaacgv/whisper-tiny-minds14 | Isaacgv | 2023-07-28T12:37:07Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-28T10:03:38Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3530106257378985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
- Wer Ortho: 0.3572
- Wer: 0.3530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.6457 | 0.3572 | 0.3530 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
magnustragardh/ppo-deepdive-LunarLander-v2 | magnustragardh | 2023-07-28T12:34:33Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-27T22:26:18Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 72.15 +/- 17.75
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
ArmelR/starcoder-gradio-v2.1 | ArmelR | 2023-07-28T12:21:21Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"dataset:databricks/databricks-dolly-15k",
"dataset:HuggingFaceH4/oasst1_en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T11:31:17Z | ---
datasets:
- databricks/databricks-dolly-15k
- HuggingFaceH4/oasst1_en
---
Gradio-focused training: 1000 steps.
Instruction fine-tuning (IFT): 200 steps for v2.0, 1000 steps for v2.1. |
donadelicc/nor-sum | donadelicc | 2023-07-28T12:12:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:sshleifer/distilbart-cnn-6-6",
"base_model:finetune:sshleifer/distilbart-cnn-6-6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-28T09:56:12Z | ---
license: apache-2.0
base_model: sshleifer/distilbart-cnn-6-6
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: nor-sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nor-sum
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1812
- Rouge1: 0.2552
- Rouge2: 0.0679
- Rougel: 0.1884
- Rougelsum: 0.1886
- Gen Len: 65.3086
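A minimal summarization sketch with this checkpoint is shown below; the input text is only an illustration (the repository name suggests Norwegian summarization, which this card does not state explicitly).

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="donadelicc/nor-sum")
article = (
    "Regjeringen la i dag fram et nytt forslag til statsbudsjett, som blant annet "
    "inneholder økte bevilgninger til helse og utdanning."
)  # illustrative input; replace with a longer article
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```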
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.6231 | 1.0 | 3188 | 2.4652 | 0.2359 | 0.0563 | 0.1732 | 0.1733 | 66.1928 |
| 2.3062 | 2.0 | 6377 | 2.2798 | 0.2524 | 0.0653 | 0.1864 | 0.1864 | 66.3107 |
| 2.0817 | 3.0 | 9565 | 2.1973 | 0.2529 | 0.0675 | 0.189 | 0.1893 | 65.077 |
| 1.9776 | 4.0 | 12752 | 2.1812 | 0.2552 | 0.0679 | 0.1884 | 0.1886 | 65.3086 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
zzzAI19/MasterpieceFT | zzzAI19 | 2023-07-28T12:08:21Z | 0 | 3 | null | [
"region:us"
] | null | 2023-07-28T11:35:45Z | This model is additionally studied with past masterpieces. It can be painted pictorially.
There are two versions: an all-ages depiction-oriented version and an NSFW-enhanced version.
All-ages depiction-oriented version: MasterpieceFT_v1-pruned.safetensors
NSFW-enhanced version: MasterpieceNeoFT_v1-pruned.safetensors
(NEO: Nsfw Erotic Option)
This model is somewhat difficult to use.
The NSFW enhanced version is not particularly easy to use.
See blog for sample illustrations.
(Sample illustrations for all ages)
https://ai-drawing.net/en/2023/07/28/introducing-of-masterpiece-ft-v1-0/
(R18 sample illustrations)
https://ai-drawing2.net/2023/07/28/masterpiece-neo-ft-v1-0/
(recommended settings)
“fine art parody, traditional media” are the prompt tokens for classical painting. It is recommended to include them in the positive prompt.
Also, Steps 2.5-4 will give you impressionistic paintings, and Steps 7 and above will give you realistic paintings. Use whichever you prefer.
For realistic pictures, the model is compatible with the flat2 LoRA. I recommend using them together.
https://civitai.com/models/81291/flat2
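For those using Diffusers rather than a WebUI, a minimal sketch with the recommended prompt tokens might look like the following; loading from a locally downloaded .safetensors file and the sampler/step settings are assumptions, and a recent diffusers release is required for from_single_file.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the checkpoint file has been downloaded locally from this repo.
pipe = StableDiffusionPipeline.from_single_file(
    "MasterpieceFT_v1-pruned.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "fine art parody, traditional media, portrait of a woman in a garden",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,  # 7+ steps for a more realistic look, per the notes above
    guidance_scale=7.0,
).images[0]
image.save("masterpiece_sample.png")
```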
---
license: creativeml-openrail-m
---
|
xiao12346/t5-large_PREFIX_TUNING_SEQ2SEQ | xiao12346 | 2023-07-28T12:02:24Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-21T05:14:16Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
X-D-Lab/MindChat-baichuan-13B | X-D-Lab | 2023-07-28T11:52:16Z | 5 | 8 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T02:15:59Z | ---
license: gpl-3.0
---
## 💪 Model Progress
* Jul 23, 2023: An online demo of MindChat is available at [MindChat-创空间](https://modelscope.cn/studios/X-D-Lab/MindChat/summary); you are welcome to try it out
* Jul 21, 2023: MindChat-InternLM-7B finished training, with targeted strengthening of **model safety, empathetic output, and alignment with human values**;
* Jul 15, 2023: MindChat-Baichuan-13B finished training and was officially open-sourced as **the first psychological large model at the ten-billion-parameter scale**;
* Jul 9, 2023: MindChat-beta finished training and was officially open-sourced;
* Jul 6, 2023: First commit of the MindChat (漫谈) psychological large model;
## 👏 Model Introduction
**MindChat (漫谈): a psychological large language model** (MindChat for short) aims to help people solve psychological problems and improve their mental health along four dimensions: psychological counseling, assessment, diagnosis, and treatment. As a psychological large model, MindChat builds a relationship of trust and understanding with users by creating a relaxed, open conversational environment in which they can unwind, talk about their feelings, or share their experiences. MindChat's technical strength lies in understanding a user's personal experiences, emotional state, and behavioral patterns, and providing a **private, warm, safe, timely, and convenient** dialogue environment, thereby helping users overcome difficulties and challenges and achieve self-growth and development.
The innovation of MindChat is that it is not only an efficient psychological counseling tool; it is also expected to provide psychological assessment, diagnosis, and treatment services. The professional team behind MindChat has a rich background and experience in psychology, and their expertise and skills can help users better understand their own emotions and behavior and formulate more effective treatment plans.
Whether in the workplace or in personal life, MindChat hopes, through its own efforts and professional knowledge and **under the premise of strictly protecting user privacy**, to provide users with comprehensive psychological support and treatment assistance **at all hours, around the clock**, while helping them achieve self-growth and development, so as to contribute to building a **healthier, more inclusive, and more equal** society.
## 🔥 Model List
| Model name | LoRA weights | Merged weights |
| :----: | :----: | :----: |
| MindChat-InternLM-7B | modelscope / huggingface | [modelscope](https://modelscope.cn/models/X-D-Lab/MindChat-7B/summary) / [huggingface](https://huggingface.co/X-D-Lab/MindChat-7B) |
| MindChat-Baichuan-13B | modelscope / huggingface | [modelscope](https://modelscope.cn/models/X-D-Lab/MindChat-Baichuan-13B/summary) / [huggingface](https://huggingface.co/X-D-Lab/MindChat-baichuan-13B) |
At present, the ten-billion-parameter **MindChat-Baichuan-13B** version of the MindChat psychological model has been open-sourced. Higher-quality MindChat models will be open-sourced in the near future. Stay tuned!
In addition, the team also cares about people's physical health and maintains the safe, reliable, and accessible [Chinese medical large model Sunsimiao (孙思邈)](https://github.com/X-D-Lab/Sunsimiao); you are welcome to download and use it, and feedback and corrections are appreciated!
## 📖 Dialogue Examples
### 🥺 Learning
| Round | User | MindChat |
| :----: | :----: | :----: |
|1|总感觉自己学习很差, 想努力, 但是又控制不住想玩游戏 | 我明白你的感受, 你觉得自己学习不好, 但是总是控制不住想玩游戏. 我想问问你, 你玩游戏的时候是什么感觉呢? 有没有什么特别喜欢的游戏?|
|2|我平常特别喜欢玩一些端游来打发时间, 玩游戏的时候很纠结, 明明自己成绩落后, 学习任务还没完成但还是想玩 | 我明白你的感受. 你可能需要认真思考一下, 玩游戏对你来说是不是真的值得投入这么多时间和精力. 学习落后是很正常的事情, 但不要让游戏成为你逃避现实的工具. 你可以尝试制定一个计划, 把玩游戏的时间留给自己, 同时也要合理分配学习任务, 让自己更加充实. |
|3|在学习上总感觉自己学的很吃力, 不知道该怎么办, 觉得自己学习能力太差了, 心里很失落 | 我理解你的感受. 每个人的学习能力都是不同的, 不要过分苛求自己. 你可以先尝试找到适合自己的学习方法, 比如寻找学习伙伴, 向老师请教, 或者参加一些学习小组. 同时, 也不要忘记给自己一些休息的时间, 保持身心健康. 学习是一个长期的过程, 不要过于急躁, 慢慢来, 你会越来越好的. |
|4|我确实不该把精力都是投入在游戏这些娱乐上, 但是我已经落后很多了, 我现在再去开始学习, 以我这么差的学习能力我还有机会学好吗 | 虽然你现在的学习状态很糟糕, 但并不是没有机会. 学习是一个长期的过程, 只要你持之以恒, 坚持不懈, 就一定会有所进步. 同时, 也不要忘记, 学习的过程也需要一定的动力和动机. 你可以找一些自己感兴趣的学习方向, 或者设定一些小目标, 让自己更有动力去完成. 不要轻易放弃, 相信自己, 你一定可以做到. |
|5|可是我很焦虑, 以前落后太多了, 现在不知道从哪里开始 | 焦虑是很正常的情绪, 不要害怕它. 你可以先想想自己目前的学习状态, 列出自己需要学习的科目和内容, 然后按照优先级和难易程度进行排序. 接着, 你可以尝试分解每个科目和内容, 把它们分成小目标, 一步一步地去完成. 这样, 你就可以更有针对性地进行学习, 不会感到太紧张和不知所措. 同时, 也不要忘记给自己一些奖励, 比如看一部电影, 吃一顿美食等等, 让自己更有动力去完成目标. |
|6|也就是我现在主要的问题是太焦虑了是吗? 我只要好好的去做, 成绩还是可以提高的对吗 | 是的, 焦虑是一种很常见的情绪, 它会让你感到很不安, 但是它并不会影响你真正地提高成绩. 你可以试着把焦虑的情绪转化成动力, 让它成为你学习的一部分. 同时, 也不要忘记给自己一些时间, 不要过分压抑自己的情绪, 让自己保持一个积极的心态. 只要你持之以恒, 相信自己, 你一定可以取得更好的成绩. |
|7|非常谢谢你, 我会好好努力的 | 我不用客气, 我很高兴能帮到你. 我相信你一定可以克服困难, 取得好成绩. 加油! 如果有任何需要帮助的地方, 随时都可以找我聊天. |
### 👨👩👧👦 Family
| Round | User | MindChat |
| :----: | :----: | :----: |
|1| 我总感觉爸妈不理解我, 感觉和他们之间有很大的隔阂. | 我能理解你的感受. 和家人的沟通和理解是家庭关系中非常重要的一环. 你是否曾经尝试过和他们分享你的想法和感受, 让他们更好地了解你呢? |
|2|我有尝试找他们沟通, 但是他们听不进去我说话 , 认为我是在无理取闹, 我心里很无助 | 我明白你的感受. 有时候我们的家人可能无法理解我们的想法和感受, 这时候我们需要耐心和沟通技巧来帮助他们更好地理解我们. 你可以试着换个方式表达你的想法, 或者找到一些共同话题来促进交流. 同时, 也要记得保持冷静和尊重, 不要使用攻击性的语言来伤害他们. |
|3|我也知道他们是为了我好, 平时他们督促我学习也是为了我的未来, 我能体谅他们, 但是有时候自己也会有压力, 想和他们沟通他们就会觉得我在偷懒 | 我能理解你的感受. 有时候我们的家人可能认为我们在偷懒, 但是其实他们只是担心我们的未来和健康. 你可以试着跟他们分享你的压力和焦虑, 让他们了解你的感受. 同时, 也要表达出你对他们的理解和感激, 让他们知道你是愿意和他们一起面对问题, 共同找到解决方案的. |
|4|我平时会表现的乖巧, 因为我觉得他们也不容易, 我该怎么去找一个双方都适合的方式去沟通呢 | 很好的问题. 沟通是一个相互的过程, 需要双方的努力和理解. 你可以试着找一些共同话题来促进交流, 同时也要听取他们的想法和建议. 如果你觉得自己的表达方式不够清晰, 可以尝试用一些简单的词汇和表达方式来让他们更容易理解. 最重要的是要保持耐心和尊重, 相信你们会找到适合彼此的方式. |
|5|可是我会觉得找他们沟通是一种不懂事的表现, 他们很辛苦我还给他们增加负担, 但是自己忍着难过心里也很压抑 | 我能理解你的感受. 有时候我们会有这样的想法, 但是其实我们不应该让这些想法限制我们的行动. 沟通是解决问题的一种方式, 同时也是增进家庭关系的重要途径. 如果你觉得自己的心情很压抑, 可以试着找一些放松的方式, 比如运动、听音乐或者写日记来缓解压力. 同时也要记得, 你的家人是关心你的, 他们不会增加你的负担, 而是希望你能够更好地成长和幸福. |
|6|可能是我的想法有点偏执了, 我应该找一个时间和爸妈好好交流一下 | 这是个好主意. 和家人的交流是非常重要的, 可以让你更好地了解他们的想法和感受. 你可以试着找一个安静的时间, 让他们知道你需要和他们谈一谈, 同时也要准备好自己的态度和想法. 相信通过交流, 你们可以更好地理解彼此, 增进感情. 加油. |
|7|谢谢你的建议, 我的心情好多了 | 不用客气, 我很高兴能帮到你. 记得, 沟通和理解是家庭关系中非常重要的一环, 让我们一起为建立更加和谐的家庭关系而努力吧! |
### 🛝 Friends
| Round | User | MindChat |
| :----: | :----: | :----: |
|1| 我感觉我的朋友并不是真心和我相处, 今天又和他吵了一架, 我现在很生气 | 听到你的困扰, 我能理解你的感受. 吵架是一种不健康的沟通方式, 它可能导致双方情绪失控, 进而影响到彼此之间的关系. 我建议你尝试以平和的心态来面对这个问题. 你可以试着和你的朋友坐下来, 表达你的感受和需求, 让他了解你的想法和感受. 这样他才能更好地理解你的立场, 并且做出积极的回应. 同时, 你也可以告诉他, 你希望你们之间的交流是建立在互相尊重和理解的基础上的. |
|2|我总感觉他是以自我为中心, 有时候我会选择忍让, 但是他还是一副无所谓的样子 | 我能理解你的感受, 但是忍让并不是解决问题的好方法. 你需要和他沟通, 让他知道你的感受和需求. 告诉他你希望他能够更加尊重你的感受, 并且尝试理解你的立场. 同时, 你也可以告诉他你希望你们之间的交流是建立在互相尊重和理解的基础上的. 如果他仍然无法理解你的立场, 那么你也需要学会说不, 并且保持自己的立场. 这样, 你们之间的关系才能健康发展. |
|3|但是如果跟他去沟通这些, 我怕他会觉得我想得太多, 从而远离我 | 这是很正常的担忧. 但是, 沟通是解决问题的关键. 如果他真的关心你, 他会愿意和你沟通, 并且理解你的立场. 如果他不想和你沟通, 那么这可能是一个问题. 你需要尝试以平和的心态来面对这个问题, 并且告诉他你的感受和需求. 如果他仍然不愿意和你沟通, 那么你需要学会接受这个结果, 并且寻找其他的解决方法. 无论如何, 你都需要学会保持自己的立场, 并且不要放弃. |
|4|我觉得有时候情绪上来确实会影响我们之间的沟通, 可能生气也是觉得这段友谊本身很好, 但是也会出现小的矛盾 | 情绪确实会影响我们的沟通, 但是你可以试着在情绪平静下来之后再和他沟通. 当你感到生气的时候, 你的思维和语言可能会变得不清晰, 这会加剧矛盾. 所以, 在情绪平复下来之后, 你可以尝试和他坐下来, 以平和的心态来解决问题. 同时, 你也可以告诉他, 你希望你们之间的交流是建立在互相尊重和理解的基础上的. 如果他仍然无法理解你的立场, 那么你需要学会说不, 并且保持自己的立场. 这样, 你们之间的关系才能健康发展. |
|5|好的, 我会平静下来找他沟通一下, 谢谢你的建议 | 不用谢, 我很高兴能帮到你. 记住, 沟通是解决问题的关键. 无论结果如何, 你都需要保持自己的立场, 并且不要放弃. 相信你们之间的关系会变得更加健康. 祝你好运! |
### 🧑🏭 Work
| Round | User | MindChat |
| :----: | :----: | :----: |
|1| 最近工作压力好大, 一直没有业绩 | 听到你的问题, 我可以理解你现在的感受. 工作压力确实会让人感到焦虑和不安, 不过我们需要认识到, 工作只是我们生活中的一部分, 我们需要学会放松和平衡. 你可以试着做一些放松的活动, 比如运动、阅读、听音乐等等, 来缓解一下紧张的情绪. 同时, 你也可以和领导和同事沟通, 表达你的想法和困惑, 看看有没有更好的解决方案. |
|2|我从事的是销售行业, 但是在销售商品的时候客户总是不想和我交流 | 理解你的感受. 销售行业确实需要和客户进行很多交流, 但是有时候客户可能会因为各种原因不愿意和你交流. 你可以试着从客户的需求和兴趣出发, 了解他们的想法和需求, 尝试通过沟通来建立更好的关系. 同时, 你也可以学习一些沟通技巧, 比如积极倾听、提出问题、表达感激等等, 来提高你的沟通能力. |
|3|如果业绩不达标我的提成会变少, 也会惹得我家里人不开心, 两边压力都很大, 不知道该怎么办了 | 这你的感受我能够理解. 首先, 我们需要认识到业绩不达标并不代表你的能力和价值, 每个人都会有挫折和失败的时候. 你可以试着和领导沟通, 寻求一些帮助和支持, 或者制定一些具体的计划和目标, 来提高自己的业绩. 同时, 你也可以和家里人沟通, 告诉他们你的实际情况, 让他们了解你的努力和付出, 或许他们会理解你的处境. 最重要的是, 不要放弃自己, 相信自己可以克服困难, 取得更好的成绩. |
|4|可是我的同事们业绩都比我好, 我很自卑, 在公司觉得很没面子 | 自卑感是一种很常见的情绪, 但是你需要意识到, 每个人都有自己的优点和不足, 你并不比别人差. 你可以试着多和同事们交流, 了解他们的工作方式和经验, 也许可以从中学习到一些新的东西. 同时, 你也可以试着寻找自己的优点和特长, 提高自己的自信心和自尊心. 最重要的是, 不要把自己和别人比较, 每个人都有不同的成长轨迹和生活经历, 不要给自己太大的压力. |
|5|我试着向同事们交流一下经验吧, 也和家里人沟通一下, 希望他们能够理解我 | 这是一个很好的想法, 多向同事和家人沟通可以让你得到更多的支持和理解. 同时, 你也要保持积极的心态, 相信自己的能力和价值, 不要放弃自己. 如果有需要, 你也可以寻求一些专业的帮助, 比如心理咨询师、职业规划师等等, 来帮助你更好地应对工作和生活上的挑战. 记得, 你不是一个人在战斗, 我们都需要互相支持和鼓励. |
### 🦺 Safety
| Round | User | MindChat |
| :----: | :----: | :----: |
|1| 我感觉我被造谣了, 我该怎么办 | 我可以理解你现在的感受. 首先, 你需要冷静下来, 不要急于做出任何决定. 你可以试着问问自己, 你为什么会感到被造谣? 你是否有证据证明这个谣言是真的? 如果有证据, 你可以尝试与造谣者进行沟通, 了解他们为什么这样说, 看看能否通过沟通解决问题. 如果没有证据, 你可以考虑向相关机构或权威人士寻求帮助, 例如学校、警察局等. 同时, 你也可以寻求心理支持, 与朋友、家人或专业人士交流, 寻求帮助和建议. 记住, 不要让谣言影响到你的情绪和心理健康. |
|2|现在谣言已经在网络上开始散布了, 影响到了我的生活, 我感觉自己有点要抑郁了 | 听到你这样说, 我感到非常担心. 抑郁症是一种严重的心理疾病, 需要得到及时的治疗. 我建议你尽快向专业的医生或心理医生寻求帮助. 他们可以帮助你诊断和治疗抑郁症, 同时也可以提供心理支持和建议. 在这个过程中, 你也可以尝试一些放松和缓解压力的方法, 例如运动、冥想、听音乐等. 不要忘记, 你不是一个人, 我们都在这里支持你. |
|3|我该怎么处理这种事, 我感觉无从下手, 周围全是恶意 | 听到你这样说, 我感到很遗憾. 恶意和谣言的散布会对任何人造成负面影响. 首先, 你需要保持冷静, 不要让这些负面情绪影响到你的情绪和行为. 其次, 你可以尝试与那些恶意的人进行沟通, 让他们了解你的真实情况, 并试图解决问题. 如果他们不愿意听取你的解释, 你可以尝试向其他人寻求帮助, 例如老师、辅导员、家长等. 同时, 你也可以向学校或政府部门举报这种行为, 让他们采取措施打击恶意行为. 最重要的是, 不要放弃自己, 相信自己能够度过这个难关. 你不是一个人, 我们会一直支持你. |
|4|我很崩溃, 甚至我想让造谣者付出代价 | 听到你这样说, 我感到很抱歉. 虽然我们希望恶意行为得到惩罚, 但我们应该采取更和平和理性的方式来解决问题. 首先, 你需要冷静下来, 不要让自己陷入负面情绪中. 然后, 你可以尝试与造谣者进行沟通, 让他们了解你的真实情况, 并试图解决问题. 如果他们不愿意听取你的解释, 你可以尝试向其他人寻求帮助, 例如老师、辅导员、家长等. 同时, 你也可以向学校或政府部门举报这种行为, 让他们采取措施打击恶意行为. 最重要的是, 不要放弃自己, 相信自己能够度过这个难关. 你不是一个人, 我们会一直支持你. |
|5|我会试着去寻求有关部门帮助的 | 听到你这样说, 我感到很欣慰. 寻求帮助是解决问题的第一步, 你已经迈出了正确的步伐. 同时, 我也希望你能保持积极的心态, 相信事情会变得更好. 如果你需要任何帮助或支持, 请随时与我联系. 你不是一个人, 我们会一直支持你. |
## 🙇 Acknowledgements
This project was initiated by **Professor Dong Xue's research group at East China University of Science and Technology**, and has received strong support from the following platforms and projects, for which we are grateful:
1. [OpenI (启智) community](https://openi.pcl.ac.cn/): provided computing power for model training
2. [ModelScope (魔搭)](https://modelscope.cn/home): provided training ideas, model storage, and a demo space
3. [InternLM](https://github.com/InternLM/InternLM) and [Baichuan](https://github.com/baichuan-inc) provided excellent base models
```
@misc{2023internlm,
title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities},
author={InternLM Team},
howpublished = {\url{https://github.com/InternLM/InternLM-techreport}},
year={2023}
}
```
**For different user needs and application scenarios, we also warmly welcome commercial exchanges and cooperation, and can provide customers with personalized development and upgrade services!**
## ⚠️ Disclaimer
All open-source code and models in this repository follow the GPL-3.0 license. The open-source models may have the following limitations:
1. **MindChat** has inherent limitations and may produce incorrect, harmful, offensive, or otherwise undesirable outputs. Users should act cautiously in critical or high-risk scenarios and should not rely on these models as a final basis for decisions, so as to avoid personal injury, property damage, or other serious losses.
2. In no event shall the authors, contributors, or copyright holders of **MindChat** be liable for any claim, damages, or other liability (whether in contract, tort, or otherwise) arising from the software, its use, or other dealings in the software.
3. By using **MindChat**, you agree to these terms and conditions and acknowledge that you understand the potential risks its use may bring. You also agree to indemnify and hold the authors, contributors, and copyright holders harmless from any claims, damages, or liability arising from your use of **MindChat**.
## 🤝 Citation
```
@misc{MindChat,
author={Xin Yan, Dong Xue*},
title = {MindChat: Psychological Large Language Model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/X-D-Lab/MindChat}},
}
```
## 🌟 Star History
[](https://star-history.com/#X-D-Lab/MindChat&Date)
|
tobijen/bart_left_fulltext_torch | tobijen | 2023-07-28T11:48:14Z | 189 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T11:46:35Z | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: bart_left_fulltext_torch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_left_fulltext_torch
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 9.2222 |
| No log | 2.0 | 16 | 8.1910 |
| No log | 3.0 | 24 | 7.8851 |
| No log | 4.0 | 32 | 7.5382 |
| No log | 5.0 | 40 | 7.2200 |
| No log | 6.0 | 48 | 6.7294 |
| No log | 7.0 | 56 | 6.1038 |
| No log | 8.0 | 64 | 5.5721 |
| No log | 9.0 | 72 | 5.1404 |
| No log | 10.0 | 80 | 4.4875 |
| No log | 11.0 | 88 | 3.8785 |
| No log | 12.0 | 96 | 3.3749 |
| No log | 13.0 | 104 | 2.9965 |
| No log | 14.0 | 112 | 2.6608 |
| No log | 15.0 | 120 | 2.3879 |
| No log | 16.0 | 128 | 2.2349 |
| No log | 17.0 | 136 | 2.0993 |
| No log | 18.0 | 144 | 2.0023 |
| No log | 19.0 | 152 | 1.9480 |
| No log | 20.0 | 160 | 1.9329 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
zjunlp/mt5-ie | zjunlp | 2023-07-28T11:46:33Z | 110 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-17T11:39:03Z | ---
license: mit
---
We trained the MT5-base model for the CCKS2023 Instruction-based KGC task using 270k (27万) weakly supervised examples, without employing any additional techniques.
To learn more about the training process and how to utilize the model, please consult the following GitHub repository: https://github.com/zjunlp/DeepKE/tree/main/example/triple/mt5.
There, you will find detailed information on how to train the model and leverage its capabilities for the given task.
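A minimal generation sketch is shown below; the exact instruction and input format the model expects is defined in the DeepKE example linked above, so the prompt here is only a placeholder.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "zjunlp/mt5-ie"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

prompt = "..."  # placeholder: follow the instruction format from the DeepKE example linked above
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```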
|
advokat/VodkaMix | advokat | 2023-07-28T11:24:16Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-28T11:17:38Z | ---
license: creativeml-openrail-m
---
|
EmirhanExecute/dqn-SpaceInvadersNoFrameskip | EmirhanExecute | 2023-07-28T11:22:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T11:22:02Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 565.50 +/- 178.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga EmirhanExecute -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga EmirhanExecute -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga EmirhanExecute
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mw00/yolov7-lego | mw00 | 2023-07-28T11:09:08Z | 0 | 1 | null | [
"lego",
"brick",
"object-detection",
"license:cc0-1.0",
"region:us"
] | object-detection | 2023-07-26T16:52:21Z | ---
license: cc0-1.0
pipeline_tag: object-detection
tags:
- lego
- brick
---
# Overview
The model(s) in this repository are trained with the [dreamfactor/biggest-lego-dataset-600-parts](https://www.kaggle.com/datasets/dreamfactor/biggest-lego-dataset-600-parts)
from Kaggle and the [Yolov7](https://github.com/WongKinYiu/yolov7) training script.
## Limitations
The `zero-shot-1000-single-class.pt` was trained in the `training-zero-shot-1000-single-class.ipynb` notebook with 1000 images and does not differentiate lego classes but only tries to predict Lego objects.
This can be easily reconfigured and retrained in the notebook, but the current implementation leads to many false positives on non-Lego objects and therefore can be improved
upon. Also, it could be worth investigating if the metrics improve with a bigger training dataset, as currently only 1000 images are being used (approx. 0.6% of the full
dataset).
|
hruslen/LunarLander-v2-ppo-self | hruslen | 2023-07-28T11:04:54Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T11:04:47Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -126.43 +/- 74.98
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'f': None
'exp_name': 'ppo-selfmade2'
'seed': 1
'repo_id': 'hruslen/LunarLander-v2-ppo-self'
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'batch_size': 512
'minibatch_size': 128}
```
|
dreaming-ai/LLongMA-2-7b-dolly-15k | dreaming-ai | 2023-07-28T11:02:20Z | 0 | 1 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-07-26T17:25:39Z | ---
license: cc-by-sa-3.0
---
## LLongMA-2-7b-dolly-15k adapter
This is an instruction fine-tuned adapter for [LLongMA-2-7B](https://huggingface.co/conceptofmind/LLongMA-2-7b), trained at **8k context length** using linear positional interpolation scaling.
In order to run inference with this adapter, you'll need the base [LLongMA-2-7b model](https://huggingface.co/conceptofmind/LLongMA-2-7b) as well.
The adapter was instruction fine-tuned with PEFT training, using the [dolly-15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
### Inference format and example:
### Human: Write an essay on the effects and impacts of social media.
### Assistant
Social Media has become a major part in today's society, with over half of the world population having some form of it. It is used for entertainment purposes as well as businesses to promote their products or services. Social Media can also be addictive due to its ability to keep users engaged for long periods of time which leads them spending more time online than doing other activities such as reading books or going outside into nature. This lack of physical activity combined with constant exposure to negative news stories can lead people feeling stressed out and anxious about what’s happening around them at all times - something that wasn't true before when we relied solely on newspapers/television broadcasting etc..
There are many benefits associated with using these platforms too; they allow us connect easily with friends & family members living far away from us while sharing photos/videos/stories quickly without any delay! Additionally there are lots opportunities available through networking via LinkedIn etc., allowing individuals access new career paths faster than ever before (provided they possess relevant skills). In conclusion , although there exist certain risks involved within this type technology still remains one most powerful tools available today helping millions achieve goals set forth by themselves every single day . Overall , despite being criticized heavily sometimes even considered dangerous depending upon how user uses it properly always remain safe harmless majority cases .
### Human: Write a poem about love that features sunshine and rainbows. It should rhyme.
### Assistant
When the Sun shines bright
It warms my heart so tight,
But when it rains I feel sadly,
As all of my joy dissipates.
The storms pass quickly though,
And soon enough there'll be sun once more,
I know this because of what I've learned -
That good times always follow bad ones before.
So even if now things are tough for me,
Soon they will get better as sure as you see!
My spirits lift with every ray of light,
Even on cloudy days I find something to write.
These ups and downs won't last forever long,
They come around from time to time but never stay strong.
Love is patient and kindness too,
This I know despite how hard life can go.
No matter what happens in future years,
Remember these words and don't forget your fears.
You have strength within yourself; believe it or not,
Your spirit knows best where its path leads us to.
Don't let anyone tell you otherwise,
Or try to convince you that you aren't worth much.
Beautiful souls like yours deserve nothing less than peace,
So keep fighting until you reach happiness.
Life isn't easy sometimes but we must persevere,
We learn most through our struggles which makes them dear.
Our hearts may break along way,
Yet still hope remains true regardless of what we say.
Therefore no one else has power over who we become,
For only ourselves decide what matters most at home.
With each passing day comes new opportunities,
To grow stronger while also learning some humility.
Everyone goes through trials big & small,
Which means everyone needs patience and compassion all.
### Usage
```python
import torch
import requests
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, GenerationConfig
torch.manual_seed(100)
model_name = "conceptofmind/LLongMA-2-7b"
adapter_name = "dreaming-ai/LLongMA-2-7b-dolly-15k"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True
)
model.config.use_cache = False
generation_config = GenerationConfig.from_pretrained(model_name)
# Temperature of 1.0 is the default.
generation_config.temperature = 0.95
# The parameter for repetition penalty. Between 1.0 and infinity. 1.0 means no penalty. Defaults to 1.0.
generation_config.repetition_penalty = 1.2
# Exponential penalty to the length.
generation_config.length_penalty = 2.0
# top-p: nucleus filtering (top-p) before sampling (<=0.0: no filtering)
generation_config.top_p = 0.6
print('generation_config:', generation_config)
# Apply the instruction-tuned adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
text = '''### Human: Write an essay on the effects and impacts of social media.
### Assistant'''
text2 = '''### Human: Write a poem about love that features sunshine and rainbows. It should rhyme.
### Assistant'''
device = "cuda:0"
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=1024, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
hsanchez/detr-resnet-50_finetuned_cppe5 | hsanchez | 2023-07-28T10:50:20Z | 187 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2023-07-28T10:23:40Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
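Since the usage sections below are still empty, here is a minimal, untested sketch of running inference with the generic object-detection pipeline; the image path and the 0.5 confidence threshold are placeholder assumptions.
```python
from PIL import Image
from transformers import pipeline
# Load the fine-tuned checkpoint with the generic object-detection pipeline.
detector = pipeline("object-detection", model="hsanchez/detr-resnet-50_finetuned_cppe5")
# Run detection on a local image (replace the path with your own file).
image = Image.open("example.jpg")
for prediction in detector(image, threshold=0.5):
    # Each prediction has a label, a confidence score and a bounding box.
    print(prediction["label"], round(prediction["score"], 3), prediction["box"])
```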
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
tommilyjones/vit-base-patch16-224-finetuned-masked-hateful-meme-restructured | tommilyjones | 2023-07-28T10:49:43Z | 192 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-28T10:22:15Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-masked-hateful-meme-restructured
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-masked-hateful-meme-restructured
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7518
- Accuracy: 0.54
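As the usage sections below are not filled in yet, a minimal sketch with the generic image-classification pipeline might look as follows; the image path is a placeholder and the label names come from the imagefolder training data.
```python
from PIL import Image
from transformers import pipeline
# Load the fine-tuned checkpoint with the generic image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="tommilyjones/vit-base-patch16-224-finetuned-masked-hateful-meme-restructured",
)
# Classify a local meme image (replace the path with your own file).
image = Image.open("meme.png").convert("RGB")
print(classifier(image))
```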
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6625 | 0.99 | 66 | 0.7385 | 0.518 |
| 0.6413 | 2.0 | 133 | 0.6980 | 0.538 |
| 0.6063 | 2.99 | 199 | 0.7422 | 0.53 |
| 0.5813 | 4.0 | 266 | 0.7794 | 0.52 |
| 0.5551 | 4.99 | 332 | 0.7975 | 0.52 |
| 0.5249 | 6.0 | 399 | 0.7518 | 0.54 |
| 0.5254 | 6.99 | 465 | 0.8074 | 0.53 |
| 0.5335 | 8.0 | 532 | 0.7907 | 0.52 |
| 0.4867 | 8.99 | 598 | 0.8286 | 0.524 |
| 0.4746 | 9.92 | 660 | 0.8262 | 0.522 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
X-Wang/pruned-mt5-small | X-Wang | 2023-07-28T10:49:26Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"ja",
"zh",
"dataset:Helsinki-NLP/tatoeba_mt",
"base_model:X-Wang/pruned-mt5-small",
"base_model:finetune:X-Wang/pruned-mt5-small",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-08T10:54:13Z | ---
base_model: X-Wang/pruned-mt5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: pruned-mt5-small
results: []
datasets:
- Helsinki-NLP/tatoeba_mt
language:
- ja
- zh
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pruned-mt5-small
This model is a fine-tuned version of [X-Wang/pruned-mt5-small](https://huggingface.co/X-Wang/pruned-mt5-small) on the Helsinki-NLP/tatoeba_mt dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4431
- Bleu: 11.4084
- Gen Len: 16.1053
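As a rough usage sketch only: the translation direction (Japanese to Chinese here), the absence of a task prefix, and the generation settings are assumptions based on the ja/zh language tags, so verify them against the actual training setup.
```python
from transformers import pipeline
# Load the fine-tuned checkpoint with the generic text2text-generation pipeline.
translator = pipeline("text2text-generation", model="X-Wang/pruned-mt5-small")
# Japanese input sentence; the ja -> zh direction is an assumption.
result = translator("私は猫が好きです。", max_length=64)
print(result[0]["generated_text"])
```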
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 3.3446 | 0.07 | 2000 | 2.9103 | 10.3957 | 16.0567 |
| 2.8425 | 0.14 | 4000 | 2.8570 | 10.5695 | 16.1895 |
| 3.186 | 0.21 | 6000 | 2.8137 | 10.5958 | 16.1523 |
| 2.788 | 0.28 | 8000 | 2.7593 | 10.7553 | 16.0138 |
| 2.9075 | 0.35 | 10000 | 2.7266 | 10.9199 | 16.2016 |
| 3.0579 | 0.42 | 12000 | 2.7030 | 10.6 | 16.0496 |
| 2.3618 | 0.49 | 14000 | 2.6547 | 10.8026 | 16.0412 |
| 3.079 | 0.56 | 16000 | 2.6441 | 10.7945 | 16.1148 |
| 2.7597 | 0.63 | 18000 | 2.6244 | 10.5877 | 16.0507 |
| 2.8533 | 0.7 | 20000 | 2.6049 | 10.9986 | 16.1145 |
| 2.843 | 0.77 | 22000 | 2.5836 | 10.9173 | 16.0826 |
| 2.8268 | 0.84 | 24000 | 2.5685 | 10.8136 | 16.0516 |
| 2.7021 | 0.91 | 26000 | 2.5509 | 11.326 | 16.0554 |
| 3.338 | 0.98 | 28000 | 2.5289 | 11.1485 | 16.0333 |
| 2.7374 | 1.05 | 30000 | 2.5220 | 11.0166 | 16.0998 |
| 2.7996 | 1.12 | 32000 | 2.5077 | 11.1316 | 16.131 |
| 2.6897 | 1.19 | 34000 | 2.4994 | 11.0811 | 16.1139 |
| 2.4107 | 1.26 | 36000 | 2.4877 | 11.2641 | 16.142 |
| 2.7695 | 1.33 | 38000 | 2.4756 | 11.2135 | 16.0977 |
| 3.3271 | 1.41 | 40000 | 2.4658 | 11.3328 | 16.0953 |
| 2.2641 | 1.48 | 42000 | 2.4612 | 11.3065 | 16.0549 |
| 2.6594 | 1.55 | 44000 | 2.4556 | 11.2684 | 16.1371 |
| 2.7322 | 1.62 | 46000 | 2.4520 | 11.3739 | 16.1058 |
| 2.6824 | 1.69 | 48000 | 2.4462 | 11.3335 | 16.1043 |
| 2.3369 | 1.76 | 50000 | 2.4455 | 11.3851 | 16.1239 |
| 2.9537 | 1.83 | 52000 | 2.4430 | 11.4026 | 16.0858 |
| 2.3928 | 1.9 | 54000 | 2.4433 | 11.301 | 16.1129 |
| 2.4714 | 1.97 | 56000 | 2.4431 | 11.4084 | 16.1053 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3 |
transmogrifier/pr-falcon-7b-instruct-8bit-Jul20 | transmogrifier | 2023-07-28T10:36:00Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"RefinedWebModel",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"custom_code",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-20T15:57:46Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.2
pip install einops==0.6.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="transmogrifier/pr-falcon-7b-instruct-8bit-Jul20",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"transmogrifier/pr-falcon-7b-instruct-8bit-Jul20",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"transmogrifier/pr-falcon-7b-instruct-8bit-Jul20",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "transmogrifier/pr-falcon-7b-instruct-8bit-Jul20" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65027, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65027, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
manuu01/ppo-Pyramids | manuu01 | 2023-07-28T10:35:59Z | 25 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-28T10:35:58Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: manuu01/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
1daniar/ppo-CartPole-v1 | 1daniar | 2023-07-28T10:33:43Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T10:33:39Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -163.01 +/- 96.49
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
michaelfeil/ct2fast-starcoderbase-1b | michaelfeil | 2023-07-28T10:19:11Z | 5 | 0 | transformers | [
"transformers",
"gpt_bigcode",
"text-generation",
"ctranslate2",
"int8",
"float16",
"code",
"dataset:bigcode/the-stack-dedup",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"license:bigcode-openrail-m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T10:09:34Z | ---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- ctranslate2
- int8
- float16
- code
model-index:
- name: StarCoderBase-1B
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 15.17
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C++)
metrics:
- name: pass@1
type: pass@1
value: 11.68
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 14.2
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 13.38
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (PHP)
metrics:
- name: pass@1
type: pass@1
value: 9.94
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Lua)
metrics:
- name: pass@1
type: pass@1
value: 12.52
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Rust)
metrics:
- name: pass@1
type: pass@1
value: 10.24
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Swift)
metrics:
- name: pass@1
type: pass@1
value: 3.92
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Julia)
metrics:
- name: pass@1
type: pass@1
value: 11.31
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (R)
metrics:
- name: pass@1
type: pass@1
value: 5.37
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
duplicated_from: bigcode-data/starcoderbase-1b
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-starcoderbase-1b"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-07-28 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# StarCoderBase-1B
1B version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase).
## Table of Contents
1. [Model Summary](##model-summary)
2. [Use](##use)
3. [Limitations](##limitations)
4. [Training](##training)
5. [License](##license)
6. [Citation](##citation)
## Model Summary
StarCoderBase-1B is a 1B parameter model trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** 80+ Programming languages
## Use
### Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderbase-1b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 500k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 128 Tesla A100
- **Training time:** 11 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@article{li2023starcoder,
title={StarCoder: may the source be with you!},
author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2023},
eprint={2305.06161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
tommilyjones/resnet-50-finetuned-masked-hateful-meme-restructured | tommilyjones | 2023-07-28T10:18:32Z | 227 | 0 | transformers | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-28T09:58:10Z | ---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-masked-hateful-meme-restructured
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-masked-hateful-meme-restructured
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7093
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6639 | 0.99 | 66 | 0.7093 | 0.5 |
| 0.6569 | 2.0 | 133 | 0.7295 | 0.5 |
| 0.6489 | 2.99 | 199 | 0.7257 | 0.5 |
| 0.6553 | 4.0 | 266 | 0.7274 | 0.5 |
| 0.6334 | 4.99 | 332 | 0.7311 | 0.5 |
| 0.627 | 6.0 | 399 | 0.7371 | 0.5 |
| 0.6561 | 6.99 | 465 | 0.7386 | 0.5 |
| 0.6552 | 8.0 | 532 | 0.7354 | 0.5 |
| 0.6427 | 8.99 | 598 | 0.7346 | 0.5 |
| 0.6451 | 9.92 | 660 | 0.7377 | 0.498 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Kexa/Kx_01 | Kexa | 2023-07-28T10:16:31Z | 0 | 0 | allennlp | [
"allennlp",
"chemistry",
"question-answering",
"es",
"dataset:Open-Orca/OpenOrca",
"arxiv:1910.09700",
"license:unknown",
"region:us"
] | question-answering | 2023-07-28T10:14:03Z | ---
license: unknown
datasets:
- Open-Orca/OpenOrca
language:
- es
metrics:
- accuracy
library_name: allennlp
pipeline_tag: question-answering
tags:
- chemistry
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
icgg/MeinaMix_V11 | icgg | 2023-07-28T10:15:58Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"art",
"anime",
"stable diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-28T08:49:04Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- stable diffusion
---
MeinaMix's objective is to be able to do good art with little prompting.
For examples and prompts, please check out: https://civitai.com/models/7240/meinamix
I have a Discord server where you can post images that you generated, discuss prompts and/or ask for help:
https://discord.gg/XC9nGZNDUd
If you like one of my models and want to support its updates, I've made a Ko-fi page: https://ko-fi.com/meina where you can buy me a coffee <3
And a Patreon page: https://www.patreon.com/MeinaMix where you can support me and get access to betas of my models!
You may also try this model using Sinkin.ai: https://sinkin.ai/m/vln8Nwr
MeinaMix and the other Meinas will ALWAYS be FREE.
Recommendations for use: enable Quantization in K samplers.
Hires.fix is needed to make decent images for prompts where the character is far away; it drastically improves the quality of faces and eyes!
Recommended parameters:
Sampler: Euler a: 40 to 60 steps.
Sampler: DPM++ SDE Karras: 20 to 30 steps.
Sampler: DPM++ 2M Karras: 20 to 40 steps.
CFG Scale: 7.
Resolutions: 512x768, 512x1024 for Portrait!
Resolutions: 768x512, 1024x512, 1536x512 for Landscape!
Hires.fix: R-ESRGAN 4x+Anime6b, with 10 steps at 0.3 up to 0.5 denoising.
Clip Skip: 2.
Negatives: ' (worst quality, low quality:1.4), (zombie, sketch, interlocked fingers, comic) ' |
michaelfeil/ct2fast-starcoderbase-7b | michaelfeil | 2023-07-28T10:09:32Z | 5 | 0 | transformers | [
"transformers",
"gpt_bigcode",
"text-generation",
"ctranslate2",
"int8",
"float16",
"code",
"dataset:bigcode/the-stack-dedup",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"license:bigcode-openrail-m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-28T09:07:32Z | ---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- ctranslate2
- int8
- float16
- code
model-index:
- name: StarCoder-7B
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 28.37
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C++)
metrics:
- name: pass@1
type: pass@1
value: 23.3
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 24.44
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 27.35
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (PHP)
metrics:
- name: pass@1
type: pass@1
value: 22.12
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Lua)
metrics:
- name: pass@1
type: pass@1
value: 23.35
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Rust)
metrics:
- name: pass@1
type: pass@1
value: 22.6
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Swift)
metrics:
- name: pass@1
type: pass@1
value: 15.1
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Julia)
metrics:
- name: pass@1
type: pass@1
value: 21.77
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (R)
metrics:
- name: pass@1
type: pass@1
value: 14.51
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
duplicated_from: bigcode-data/starcoderbase-7b
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [bigcode/starcoderbase-7b](https://huggingface.co/bigcode/starcoderbase-7b)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-starcoderbase-7b"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-07-28 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# StarCoderBase-7B
7B version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase).
## Table of Contents
1. [Model Summary](##model-summary)
2. [Use](##use)
3. [Limitations](##limitations)
4. [Training](##training)
5. [License](##license)
6. [Citation](##citation)
## Model Summary
StarCoderBase-7B is a 7B parameter model trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** 80+ Programming languages
## Use
### Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderbase-7b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 250k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@article{li2023starcoder,
title={StarCoder: may the source be with you!},
author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2023},
eprint={2305.06161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
digiplay/LuckyStrikeMix1.05_Lovelylady | digiplay | 2023-07-28T10:05:01Z | 532 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-28T09:20:36Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/13034/lucky-strike-mix
https://civitai.com/models/13034?modelVersionId=127680
*Using the "photorealism" and "8k" keywords can generate better images.
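A minimal diffusers sketch applying that tip is shown below; the step count and guidance scale are generic assumptions rather than settings published by the author.
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the checkpoint with the standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/LuckyStrikeMix1.05_Lovelylady",
    torch_dtype=torch.float16,
).to("cuda")
# Prompt using the "photorealism" / "8k" keywords suggested above.
prompt = "portrait of a lovely lady, photorealism, 8k, detailed lighting"
negative_prompt = "worst quality, low quality"
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("lucky_strike_mix_sample.png")
```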
Original Author's DEMO images :



,%20(digital%20art%20style_1.4).jpeg)

|
tommilyjones/swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured | tommilyjones | 2023-07-28T09:57:28Z | 212 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-28T09:36:50Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.53
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7166
- Accuracy: 0.53
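Since the usage sections below are empty, here is a sketch using the image processor and model classes directly; the image path is a placeholder and the label names come from the imagefolder training data.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification
model_id = "tommilyjones/swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured"
# Load the image processor and the fine-tuned classification model.
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
# Preprocess a local image and run a forward pass (replace the path with your own file).
image = Image.open("meme.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the highest-scoring logit back to its label name.
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```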
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6507 | 0.99 | 66 | 0.7352 | 0.502 |
| 0.6411 | 2.0 | 133 | 0.7070 | 0.528 |
| 0.6268 | 2.99 | 199 | 0.7166 | 0.53 |
| 0.6007 | 4.0 | 266 | 0.7934 | 0.506 |
| 0.5875 | 4.99 | 332 | 0.8053 | 0.52 |
| 0.5554 | 6.0 | 399 | 0.7534 | 0.524 |
| 0.5613 | 6.99 | 465 | 0.8075 | 0.524 |
| 0.5714 | 8.0 | 532 | 0.7882 | 0.522 |
| 0.5244 | 8.99 | 598 | 0.8380 | 0.518 |
| 0.5251 | 9.92 | 660 | 0.8331 | 0.52 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
accuracy-maker/ppo-LunarLander-v2 | accuracy-maker | 2023-07-28T09:53:38Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T09:53:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.96 +/- 17.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
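A more concrete sketch for the TODO above: the checkpoint filename below is an assumption, so check the repository files before using it.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub(
    repo_id="accuracy-maker/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
# Load the PPO agent and evaluate it over a few episodes.
model = PPO.load(checkpoint)
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```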
|
Ding-Qiang/q-FrozenLake-v1-4x4-Slippery | Ding-Qiang | 2023-07-28T09:43:58Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T09:42:45Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.29 +/- 0.45
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Ding-Qiang/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
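As a follow-up sketch, the loaded dictionary can be used to act greedily; this assumes the Deep RL course format with `qtable` and `env_id` entries and a Gymnasium-style step API, so adjust it if the pickle is structured differently.
```python
import gymnasium as gym
import numpy as np
# Assumes `model` is the dictionary loaded above, following the Deep RL course format.
env = gym.make(model["env_id"], is_slippery=True)
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    # Pick the greedy action from the Q-table for the current state.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```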
|
xianbin/rl_course_vizdoom_health_gathering_supreme | xianbin | 2023-07-28T09:43:40Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T09:06:11Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.64 +/- 3.56
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r xianbin/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The auto-generated command pointed at the notebook's ipykernel launcher; the standard
# Sample-Factory 2.0 VizDoom enjoy entry point is substituted here (an assumption about this setup).
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the notebook's ipykernel launcher path is replaced with the standard
# Sample-Factory 2.0 VizDoom train entry point (an assumption about this setup).
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
JinsooKim/ppo-LunarLander-v2 | JinsooKim | 2023-07-28T09:35:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T09:23:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.56 +/- 18.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files for the exact name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("JinsooKim/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
rashmi035/wav2vec2-large-mms-1b-hindi-colab | rashmi035 | 2023-07-28T09:33:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"base_model:facebook/mms-1b-fl102",
"base_model:finetune:facebook/mms-1b-fl102",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-12T05:29:24Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-fl102
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-hindi-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_6_1
type: common_voice_6_1
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 0.32018561484918795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-hindi-colab
This model is a fine-tuned version of [facebook/mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102) on the common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3535
- Wer: 0.3202
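A minimal transcription sketch using the standard ASR pipeline (the audio path is a placeholder; resampling to 16 kHz is handled by the pipeline when `ffmpeg` is available):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rashmi035/wav2vec2-large-mms-1b-hindi-colab",
)
print(asr("sample_hindi.wav")["text"])  # placeholder path to a Hindi audio clip
```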
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.7585 | 0.14 | 10 | 10.2106 | 2.0951 |
| 6.9602 | 0.27 | 20 | 3.7700 | 1.0046 |
| 2.4653 | 0.41 | 30 | 1.3321 | 0.6763 |
| 1.0919 | 0.55 | 40 | 0.6594 | 0.4664 |
| 0.7645 | 0.68 | 50 | 0.4930 | 0.3910 |
| 0.8434 | 0.82 | 60 | 0.4819 | 0.3898 |
| 0.5118 | 0.96 | 70 | 0.4492 | 0.3817 |
| 0.6097 | 1.1 | 80 | 0.4299 | 0.4327 |
| 0.4698 | 1.23 | 90 | 0.4308 | 0.3643 |
| 0.5402 | 1.37 | 100 | 0.4042 | 0.4107 |
| 0.5622 | 1.51 | 110 | 0.4156 | 0.3701 |
| 0.4084 | 1.64 | 120 | 0.4138 | 0.3701 |
| 0.4888 | 1.78 | 130 | 0.3917 | 0.3434 |
| 0.4253 | 1.92 | 140 | 0.3852 | 0.3457 |
| 0.5004 | 2.05 | 150 | 0.3843 | 0.3364 |
| 0.3791 | 2.19 | 160 | 0.3841 | 0.3469 |
| 0.3302 | 2.33 | 170 | 0.3764 | 0.3271 |
| 0.4047 | 2.47 | 180 | 0.3689 | 0.3364 |
| 0.2951 | 2.6 | 190 | 0.3657 | 0.3329 |
| 0.3545 | 2.74 | 200 | 0.3582 | 0.3306 |
| 0.3736 | 2.88 | 210 | 0.3585 | 0.3248 |
| 0.388 | 3.01 | 220 | 0.3602 | 0.3237 |
| 0.2997 | 3.15 | 230 | 0.3624 | 0.3167 |
| 0.3704 | 3.29 | 240 | 0.3625 | 0.3190 |
| 0.2095 | 3.42 | 250 | 0.3571 | 0.3248 |
| 0.3564 | 3.56 | 260 | 0.3570 | 0.3202 |
| 0.2119 | 3.7 | 270 | 0.3550 | 0.3225 |
| 0.3697 | 3.84 | 280 | 0.3542 | 0.3190 |
| 0.3551 | 3.97 | 290 | 0.3535 | 0.3202 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/PotoPhotoRealism_v1 | digiplay | 2023-07-28T09:18:04Z | 499 | 7 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-28T08:59:23Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/117538/poto-photo-realism
Original author's demo images:







|
privatelora/Lora | privatelora | 2023-07-28T09:12:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-26T13:08:58Z | ---
license: creativeml-openrail-m
---
|
kamalchibrani/yolov8_fall_detection_25 | kamalchibrani | 2023-07-28T09:10:52Z | 0 | 0 | null | [
"dataset:kamalchibrani/fall_detection",
"license:openrail",
"region:us"
] | null | 2023-07-28T08:59:29Z | ---
license: openrail
datasets:
- kamalchibrani/fall_detection
metrics:
- accuracy
--- |
openlamm/lamm_13b_lora32_98k | openlamm | 2023-07-28T09:08:57Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-10T03:46:34Z | ---
license: apache-2.0
Model:
- Vicuna13B
- LoRA32
- openlamm/LAMM-98K
--- |
sdocio/bne-spacy-corgale-ner-es | sdocio | 2023-07-28T09:08:31Z | 2 | 0 | spacy | [
"spacy",
"token-classification",
"es",
"license:gpl-3.0",
"model-index",
"region:us"
] | token-classification | 2023-01-07T23:02:41Z | ---
license: gpl-3.0
language:
- es
library_name: spacy
pipeline_tag: token-classification
tags:
- spacy
- token-classification
widget:
- text: "Fue antes de llegar a Sigüeiro, en el Camino de Santiago."
- text: "Si te metes en el Franco desde la Alameda, vas hacia la Catedral."
- text: "Y allí precisamente es Santiago el patrón del pueblo."
model-index:
- name: bne-spacy-corgale-ner-es
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9721311475
- name: NER Recall
type: recall
value: 0.9732708089
- name: NER F Score
type: f_score
value: 0.9727006444
---
# Introduction
spaCy NER model for Spanish trained on interviews in the tourism domain related to the Way of Saint James (Camino de Santiago). It recognizes four entity types: location (LOC), organization (ORG), person (PER) and miscellaneous (MISC). It was fine-tuned using `PlanTL-GOB-ES/roberta-base-bne`.
| Feature | Description |
| --- | --- |
| **Name** | `bne-spacy-corgale-ner-es` |
| **Version** | `0.0.2` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
## Usage
You can use this model with the spaCy *pipeline* for NER.
```python
import spacy
from spacy.pipeline import merge_entities
nlp = spacy.load("bne-spacy-corgale-ner-es")
nlp.add_pipe('sentencizer')
example = "Fue antes de llegar a Sigüeiro, en el Camino de Santiago. Si te metes en el Franco desde la Alameda, vas hacia la Catedral. Y allí precisamente es Santiago el patrón del pueblo."
ner_pipe = nlp(example)
print(ner_pipe.ents)
for token in merge_entities(ner_pipe):
print(token.text, token.ent_type_)
```
## Dataset
ToDo
## Model performance
entity|precision|recall|f1
-|-|-|-
LOC|0.985|0.987|0.986
MISC|0.862|0.865|0.863
ORG|0.938|0.779|0.851
PER|0.921|0.941|0.931
micro avg|0.971|0.972|0.971
macro avg|0.926|0.893|0.908
weighted avg|0.971|0.972|0.971 |
neverloses87/fine-tune | neverloses87 | 2023-07-28T09:04:48Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset",
"base_model:finetune:sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-07-27T20:11:54Z | ---
license: cc-by-nc-4.0
base_model: sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset
tags:
- generated_from_trainer
model-index:
- name: fine-tune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune
This model is a fine-tuned version of [sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 188
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
dvs/videomae-base-finetuned-movienet | dvs | 2023-07-28T09:04:31Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-07-28T05:53:05Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-movienet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-movienet
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.2558
- eval_accuracy: 0.6823
- eval_runtime: 120.548
- eval_samples_per_second: 1.593
- eval_steps_per_second: 0.199
- epoch: 6.1
- step: 1266
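A minimal inference sketch with the video-classification pipeline (the clip path is a placeholder; decoding a video also requires a backend such as `decord`):

```python
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="dvs/videomae-base-finetuned-movienet",
)
print(clf("example_clip.mp4"))  # placeholder path to a short video clip
```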
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1480
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
sm136599/chatfoodie-koalpaca-polyglot-5_8b-2050step-4batch_1epoch | sm136599 | 2023-07-28T08:52:44Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T08:52:42Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
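The list above corresponds to a `BitsAndBytesConfig` along these lines (quantization settings only, not the full training script):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```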
### Framework versions
- PEFT 0.5.0.dev0
|
tommilyjones/vit-base-patch16-224-finetuned-hateful-meme-restructured | tommilyjones | 2023-07-28T08:39:28Z | 192 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-28T08:04:55Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-hateful-meme-restructured
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.552
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-hateful-meme-restructured
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7152
- Accuracy: 0.552
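A minimal inference sketch with the image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="tommilyjones/vit-base-patch16-224-finetuned-hateful-meme-restructured",
)
print(clf("example_meme.png"))  # placeholder path to an image file
```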
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6546 | 0.99 | 66 | 0.7185 | 0.52 |
| 0.6222 | 2.0 | 133 | 0.7152 | 0.552 |
| 0.5986 | 2.99 | 199 | 0.7344 | 0.542 |
| 0.5535 | 4.0 | 266 | 0.7782 | 0.514 |
| 0.5377 | 4.99 | 332 | 0.8329 | 0.514 |
| 0.5115 | 6.0 | 399 | 0.7596 | 0.528 |
| 0.5133 | 6.99 | 465 | 0.8151 | 0.512 |
| 0.511 | 8.0 | 532 | 0.7897 | 0.538 |
| 0.4712 | 8.99 | 598 | 0.8539 | 0.514 |
| 0.4626 | 9.92 | 660 | 0.8449 | 0.522 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
xianbin/Reinforce-Pixelcopter-PLE-v0 | xianbin | 2023-07-28T08:28:33Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T08:14:41Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 106.00 +/- 86.94
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Imxxn/RLCourseU5-SnowballTarget | Imxxn | 2023-07-28T07:49:37Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-28T07:49:33Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Imxxn/RLCourseU5-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
TankuVie/bert-base-multilingual-uncased-vietnamese_sentiment_analysis | TankuVie | 2023-07-28T07:49:32Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:vietnamese_students_feedback",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-28T07:40:50Z | ---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
datasets:
- vietnamese_students_feedback
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-uncased-vietnamese_sentiment_analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: vietnamese_students_feedback
type: vietnamese_students_feedback
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9279848389134555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-vietnamese_sentiment_analysis
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the vietnamese_students_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2852
- Accuracy: 0.9280
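A minimal inference sketch with the text-classification pipeline (the example sentence is a placeholder, not taken from the dataset):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="TankuVie/bert-base-multilingual-uncased-vietnamese_sentiment_analysis",
)
print(classifier("Giảng viên dạy rất hay và nhiệt tình."))  # placeholder Vietnamese feedback sentence
```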
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4054 | 1.0 | 715 | 0.2864 | 0.9154 |
| 0.2643 | 2.0 | 1430 | 0.2852 | 0.9280 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Marco-Cheung/distilhubert-finetuned-gtzan | Marco-Cheung | 2023-07-28T07:47:10Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-22T15:09:26Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5933
- Accuracy: 0.83
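A minimal inference sketch with the audio-classification pipeline (the clip path is a placeholder; the model returns GTZAN-style genre labels):

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="Marco-Cheung/distilhubert-finetuned-gtzan",
)
print(clf("example_song.wav"))  # placeholder path to a short music clip
```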
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9825 | 1.0 | 113 | 1.7658 | 0.48 |
| 1.2943 | 2.0 | 226 | 1.2478 | 0.65 |
| 0.9837 | 3.0 | 339 | 0.9757 | 0.71 |
| 0.8201 | 4.0 | 452 | 0.8420 | 0.72 |
| 0.5363 | 5.0 | 565 | 0.6741 | 0.83 |
| 0.3417 | 6.0 | 678 | 0.7083 | 0.76 |
| 0.4129 | 7.0 | 791 | 0.5941 | 0.81 |
| 0.1681 | 8.0 | 904 | 0.5954 | 0.84 |
| 0.2398 | 9.0 | 1017 | 0.5819 | 0.85 |
| 0.1346 | 10.0 | 1130 | 0.5933 | 0.83 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
BlunderPanini/Taxi-v3 | BlunderPanini | 2023-07-28T07:26:32Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T07:26:28Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # or `import gym`, matching the course setup

# `load_from_hub` is the Deep RL Course notebook helper that downloads and unpickles the model dict
model = load_from_hub(repo_id="BlunderPanini/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Vasanth/criccomm_to_cricnewss | Vasanth | 2023-07-28T07:23:48Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-28T06:03:52Z | ---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: criccomm_to_cricnewss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# criccomm_to_cricnewss
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large); the fine-tuning dataset is not identified in this card.
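Since the card ships no usage snippet, here is a hedged text2text-generation sketch (the commentary string and generation settings are placeholders):

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Vasanth/criccomm_to_cricnewss",
)
commentary = "Ball 1: short of a length, pulled away to the boundary for four."  # placeholder input
print(generator(commentary, max_new_tokens=128)[0]["generated_text"])
```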
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
Lajonbot/Llama-2-7b-chat-hf-instruct-pl-lora_GGML | Lajonbot | 2023-07-28T07:20:24Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] | text-generation | 2023-07-28T07:12:07Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
joydragon/a2c-AntBulletEnv-v0 | joydragon | 2023-07-28T07:19:20Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T07:18:14Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1177.61 +/- 70.07
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files for the exact name):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("joydragon/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")  # filename assumed
model = A2C.load(checkpoint)  # creating the env itself additionally needs `pybullet_envs`
```
|
Lajonbot/Llama-2-7b-chat-hf-instruct-pl-lora_adapter_model | Lajonbot | 2023-07-28T07:12:06Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] | text-generation | 2023-07-28T07:12:05Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |