Dataset schema (column, type, and observed range of values):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-14 06:27:53 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 519 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-14 06:27:45 |
| card | string | lengths 11 to 1.01M |
wangguan/ppo-LunarLander-v2 | wangguan | 2023-02-09T09:16:30Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T09:15:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 28.48 +/- 127.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed here.
checkpoint = load_from_hub("wangguan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
swl-models/bailocat | swl-models | 2023-02-09T09:08:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T08:43:09Z | ---
license: creativeml-openrail-m
---
|
reemalyami/AraRoBERTa-EGY | reemalyami | 2023-02-09T08:58:21Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language:
- ar
---
The **AraRoBERTa** models are mono-dialectal Arabic language models, each trained on a single country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/). A minimal usage sketch follows the list of variants below.
The following are the AraRoBERTa seven dialectal variations:
* [AraRoBERTa-SA](https://huggingface.co/reemalyami/AraRoBERTa-SA): Saudi Arabia (SA) dialect.
* [AraRoBERTa-EGY](https://huggingface.co/reemalyami/AraRoBERTa-EGY): Egypt (EGY) dialect.
* [AraRoBERTa-KU](https://huggingface.co/reemalyami/AraRoBERTa-KU): Kuwait (KU) dialect.
* [AraRoBERTa-OM](https://huggingface.co/reemalyami/AraRoBERTa-OM): Oman (OM) dialect.
* [AraRoBERTa-LB](https://huggingface.co/reemalyami/AraRoBERTa-LB): Lebanon (LB) dialect.
* [AraRoBERTa-JO](https://huggingface.co/reemalyami/AraRoBERTa-JO): Jordan (JO) dialect.
* [AraRoBERTa-DZ](https://huggingface.co/reemalyami/AraRoBERTa-DZ): Algeria (DZ) dialect.
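A minimal fill-mask sketch with the Transformers pipeline (the example sentence is illustrative only; the mask token is taken from the tokenizer rather than hard-coded):
```python
from transformers import pipeline

# Fill-mask sketch for the Egyptian-dialect checkpoint; swap the model id for any other variant.
fill_mask = pipeline("fill-mask", model="reemalyami/AraRoBERTa-EGY")
sentence = f"القاهرة هي {fill_mask.tokenizer.mask_token} مصر"
print(fill_mask(sentence))
```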
# When using the model, please cite our paper:
```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
author = "AlYami, Reem and Al-Zaidy, Rabah",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.24",
pages = "260--272",
}
```
# Contact
**Reem AlYami**: [Linkedin](https://www.linkedin.com/in/reem-alyami/) | <[email protected]> | <[email protected]>
|
reemalyami/AraRoBERTa-OM | reemalyami | 2023-02-09T08:58:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language:
- ar
---
The **AraRoBERTa** models are mono-dialectal Arabic language models, each trained on a single country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/).
The following are the AraRoBERTa seven dialectal variations:
* AraRoBERTa-SA: Saudi Arabia (SA) dialect.
* AraRoBERTa-EGY: Egypt (EGY) dialect.
* AraRoBERTa-KU: Kuwait (KU) dialect.
* AraRoBERTa-OM: Oman (OM) dialect.
* AraRoBERTa-LB: Lebanon (LB) dialect.
* AraRoBERTa-JO: Jordan (JO) dialect.
* AraRoBERTa-DZ: Algeria (DZ) dialect.
# When using the model, please cite our paper:
```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
author = "AlYami, Reem and Al-Zaidy, Rabah",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.24",
pages = "260--272",
}
```
# Contact
**Reem AlYami**: [Linkedin](https://www.linkedin.com/in/reem-alyami/) | <[email protected]> | <[email protected]>
|
reemalyami/AraRoBERTa-KU | reemalyami | 2023-02-09T08:57:45Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language:
- ar
---
The **AraRoBERTa** models are mono-dialectal Arabic language models, each trained on a single country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/).
The following are the AraRoBERTa seven dialectal variations:
* [AraRoBERTa-SA](https://huggingface.co/reemalyami/AraRoBERTa-SA): Saudi Arabia (SA) dialect.
* [AraRoBERTa-EGY](https://huggingface.co/reemalyami/AraRoBERTa-EGY): Egypt (EGY) dialect.
* [AraRoBERTa-KU](https://huggingface.co/reemalyami/AraRoBERTa-KU): Kuwait (KU) dialect.
* [AraRoBERTa-OM](https://huggingface.co/reemalyami/AraRoBERTa-OM): Oman (OM) dialect.
* [AraRoBERTa-LB](https://huggingface.co/reemalyami/AraRoBERTa-LB): Lebanon (LB) dialect.
* [AraRoBERTa-JO](https://huggingface.co/reemalyami/AraRoBERTa-JO): Jordan (JO) dialect.
* [AraRoBERTa-DZ](https://huggingface.co/reemalyami/AraRoBERTa-DZ): Algeria (DZ) dialect.
# When using the model, please cite our paper:
```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
author = "AlYami, Reem and Al-Zaidy, Rabah",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.24",
pages = "260--272",
}
```
# Contact
**Reem AlYami**: [Linkedin](https://www.linkedin.com/in/reem-alyami/) | <[email protected]> | <[email protected]>
|
reemalyami/AraRoBERTa-SA | reemalyami | 2023-02-09T08:57:22Z | 38 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language:
- ar
---
The **AraRoBERTa** models are mono-dialectal Arabic language models, each trained on a single country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/).
The following are the AraRoBERTa seven dialectal variations:
* [AraRoBERTa-SA](https://huggingface.co/reemalyami/AraRoBERTa-SA): Saudi Arabia (SA) dialect.
* [AraRoBERTa-EGY](https://huggingface.co/reemalyami/AraRoBERTa-EGY): Egypt (EGY) dialect.
* [AraRoBERTa-KU](https://huggingface.co/reemalyami/AraRoBERTa-KU): Kuwait (KU) dialect.
* [AraRoBERTa-OM](https://huggingface.co/reemalyami/AraRoBERTa-OM): Oman (OM) dialect.
* [AraRoBERTa-LB](https://huggingface.co/reemalyami/AraRoBERTa-LB): Lebanon (LB) dialect.
* [AraRoBERTa-JO](https://huggingface.co/reemalyami/AraRoBERTa-JO): Jordan (JO) dialect.
* [AraRoBERTa-DZ](https://huggingface.co/reemalyami/AraRoBERTa-DZ): Algeria (DZ) dialect.
# When using the model, please cite our paper:
```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
author = "AlYami, Reem and Al-Zaidy, Rabah",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.24",
pages = "260--272",
}
```
# Contact
**Reem AlYami**: [Linkedin](https://www.linkedin.com/in/reem-alyami/) | <[email protected]> | <[email protected]>
|
Anjoe/poetry-gpt2-large-no-hoel_2 | Anjoe | 2023-02-09T08:56:20Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-06T20:25:45Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poetry-gpt2-large-no-hoel_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poetry-gpt2-large-no-hoel_2
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
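A hedged sketch of these settings as Hugging Face `TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; "out" is a placeholder output directory.
args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
)
```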
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6683 | 1.0 | 19927 | 3.7260 |
| 3.3474 | 2.0 | 39854 | 3.7067 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Anjoe/poetry-gpt2-large-no_schiller_3 | Anjoe | 2023-02-09T08:56:06Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-08T20:37:21Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poetry-gpt2-large-no_schiller_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poetry-gpt2-large-no_schiller_3
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6925 | 1.0 | 20041 | 3.7494 |
| 3.3496 | 2.0 | 40082 | 3.7301 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Anjoe/poetry-gpt2-large-complete_3 | Anjoe | 2023-02-09T08:55:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-08T20:37:36Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poetry-gpt2-large-complete_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poetry-gpt2-large-complete_3
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6838 | 1.0 | 20371 | 3.7627 |
| 3.3382 | 2.0 | 40742 | 3.7483 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
muhtasham/santacoder-finetuned-the-stack-assembly | muhtasham | 2023-02-09T08:49:00Z | 49 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"code",
"codegen",
"assembly",
"custom_code",
"dataset:bigcode/the-stack-dedup",
"arxiv:1911.02150",
"arxiv:2207.14255",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-05T21:56:44Z | ---
license: openrail
tags:
- generated_from_trainer
- code
- codegen
- assembly
model-index:
- name: santacoder-finetuned-the-stack-assembly
results: []
datasets:
- bigcode/the-stack-dedup
language:
- code
pipeline_tag: text-generation
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-assembly
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on the [assembly](https://huggingface.co/datasets/bigcode/the-stack-dedup) subset of The Stack.
It achieves the following results on the evaluation set:
- eval_loss: 0.7423
- eval_runtime: 14042.2321
- eval_samples_per_second: 6.116
- eval_steps_per_second: 3.058
- epoch: 0.3
- step: 1500
## Model description
The [SantaCoder](https://huggingface.co/bigcode/santacoder) models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
In addition, there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
## Intended uses & limitations
The predominant natural language in the source code is English, although other languages are also present. The model can generate code snippets given some context, but the generated code is not guaranteed to work as intended: it can be inefficient and contain bugs or exploits.
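A minimal generation sketch with the Transformers library (the assembly prompt is just an illustration; `trust_remote_code=True` is assumed to be needed here, as it is for the base SantaCoder checkpoint, because the model ships custom modeling code):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "muhtasham/santacoder-finetuned-the-stack-assembly"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code is assumed to be required, as for the base SantaCoder model.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("; add two 32-bit integers\n", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```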
## Training and evaluation data
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets. **This is the near-deduplicated version with 3TB data.**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2 |
marcosgg/bert-small-gl-cased | marcosgg | 2023-02-09T08:33:05Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"gl",
"pt",
"arxiv:2106.13553",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language:
- gl
- pt
widget:
- text: A mesa estaba feita de [MASK].
license: agpl-3.0
---
# BERT for Galician (Small)
This is a small pre-trained BERT model (6 layers, cased) for Galician (ILG/RAG spelling). It was evaluated on lexical semantics tasks, using a [dataset to identify homonymy and synonymy in context](https://github.com/marcospln/homonymy_acl21), and presented at ACL 2021.
There is also a base version (12 layers, cased): `marcosgg/bert-base-gl-cased`
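A minimal fill-mask sketch with the Transformers pipeline, using the widget sentence above:
```python
from transformers import pipeline

# Predict the masked word in the example sentence from the widget above.
fill_mask = pipeline("fill-mask", model="marcosgg/bert-small-gl-cased")
print(fill_mask("A mesa estaba feita de [MASK]."))
```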
## Citation
If you use this model, please cite the following [paper](https://arxiv.org/abs/2106.13553):
```
@inproceedings{garcia-2021-exploring,
title = "Exploring the Representation of Word Meanings in Context: {A} Case Study on Homonymy and Synonymy",
author = "Garcia, Marcos",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.281",
doi = "10.18653/v1/2021.acl-long.281",
pages = "3625--3640"
}
``` |
Asiri123/spotter | Asiri123 | 2023-02-09T08:19:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T08:19:47Z | ---
license: creativeml-openrail-m
---
|
swl-models/zoirun-plus | swl-models | 2023-02-09T08:15:32Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T07:03:26Z | ---
license: creativeml-openrail-m
---
|
espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp | espnet | 2023-02-09T08:01:30Z | 2 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librimix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2023-02-09T08:00:31Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librimix
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp`
This model was trained by Pengcheng Guo using librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fe824770250485b77c68e8ca041922b8779b5c94
pip install -e .
cd egs2/librimix/sot_asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp
```
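Alternatively, a hedged Python sketch using the ESPnet2 inference API (this assumes the `espnet_model_zoo` package is installed so `from_pretrained` can fetch the packed model; the WAV path is a placeholder):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Fetch and unpack the packed model from the Hub (needs espnet_model_zoo installed).
speech2text = Speech2Text.from_pretrained(
    "espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp"
)

speech, rate = soundfile.read("mixture.wav")  # placeholder path to a 16 kHz two-speaker mixture
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```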
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 6 12:15:26 CST 2023`
- python version: `3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: ``
- Commit date: ``
## asr_train_sot_conformer_raw_en_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|123853|78.3|19.1|2.6|3.0|24.7|99.3|
|decode_sot_asr_model_valid.acc.ave/test|3000|111243|79.6|17.7|2.6|3.0|23.3|98.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|670222|90.1|6.3|3.6|3.5|13.4|99.3|
|decode_sot_asr_model_valid.acc.ave/test|3000|605408|90.7|5.7|3.6|3.3|12.6|98.7|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_sot_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_sot_asr_conformer_raw_en_char_sp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38867
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 8000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0005
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- <sc>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- C
- W
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: multi
preprocessor_conf:
speaker_change_symbol:
- <sc>
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
imjunaidafzal/saqib-t600-u3000-photoreal-9-feb | imjunaidafzal | 2023-02-09T07:58:53Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-09T07:55:29Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Fine-tuned concept
### concept name: saqib-t600-u3000-photoreal-9-FEB
### Training steps: 1500
### Text encoder steps: 350% of Training steps
Sample pictures of this concept:
|
kkh4162/xlm-roberta-base-finetuned-panx-de | kkh4162 | 2023-02-09T07:52:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-09T06:50:32Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8638300289723342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sunwooooong/klue-bert-finetuned-klue-ner | sunwooooong | 2023-02-09T07:47:31Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-09T07:19:36Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: klue-bert-finetuned-klue-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-bert-finetuned-klue-ner
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- F1: 0.3930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5313 | 1.0 | 876 | 0.5225 | 0.2331 |
| 0.3884 | 2.0 | 1752 | 0.4197 | 0.3350 |
| 0.3136 | 3.0 | 2628 | 0.3741 | 0.3930 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
atorre/poca-SoccerTwos-10M | atorre | 2023-02-09T07:47:23Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-09T07:47:15Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: atorre/poca-SoccerTwos-10M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ZoeScralet/Zoe_LOL_LoraModel | ZoeScralet | 2023-02-09T06:55:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T06:47:48Z | ---
license: creativeml-openrail-m
---
|
kellyxuanlin/bert-finetuned-squad | kellyxuanlin | 2023-02-09T06:52:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-09T00:03:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jannikskytt/poca-SoccerTwos | jannikskytt | 2023-02-09T06:50:28Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-09T06:50:13Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: jannikskytt/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
thanat/mt5-small-finetuned-amazon-en-es | thanat | 2023-02-09T06:42:12Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-09T05:12:02Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: thanat/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# thanat/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0061
- Validation Loss: 3.3257
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of rebuilding this optimizer follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
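A hedged sketch of recreating this optimizer with `transformers.create_optimizer` (the step count and rates come directly from the schedule above; this is the AdamWeightDecay optimizer with a polynomial decay of power 1.0, i.e. linear decay to zero):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear decay to 0 over 9672 steps, as described above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-5,
    num_train_steps=9672,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```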
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6013 | 4.2024 | 0 |
| 5.8556 | 3.7335 | 1 |
| 5.0930 | 3.5494 | 2 |
| 4.6610 | 3.4502 | 3 |
| 4.3874 | 3.4030 | 4 |
| 4.2103 | 3.3568 | 5 |
| 4.0930 | 3.3311 | 6 |
| 4.0061 | 3.3257 | 7 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
TkskKurumi/KurumiMix | TkskKurumi | 2023-02-09T06:12:07Z | 2 | 1 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-08T02:26:32Z | # KurumiMix
## Composition
### unet weights
The model weights are interpolated with the same composition in all UNet blocks; a sketch of this kind of weighted-sum merge follows the table below.
|Model|Contribution|
|-|-|
|[PastelMix](https://huggingface.co/andite/pastel-mix)|40%|
|[Counterfeit V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5)|20%|
|Counterfeit V2.2|20%|
|[EimisAnimeDiffusion](https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v)|10%|
|[BasilMix](https://huggingface.co/nuigurumi/basil_mix)|5%|
|[AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs)|5%|
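A hedged sketch of this kind of weighted-sum merge (the checkpoint file names are placeholders, and real Stable Diffusion checkpoints may nest their weights under a `state_dict` key):
```python
import torch

# Placeholder checkpoint paths mapped to the interpolation weights from the table above.
recipe = {
    "pastel-mix.ckpt": 0.40,
    "counterfeit-v2.5.ckpt": 0.20,
    "counterfeit-v2.2.ckpt": 0.20,
    "eimis-anime-diffusion.ckpt": 0.10,
    "basil-mix.ckpt": 0.05,
    "abyss-orange-mix2.ckpt": 0.05,
}

merged = {}
for path, weight in recipe.items():
    state = torch.load(path, map_location="cpu")
    state = state.get("state_dict", state)  # unwrap if the checkpoint nests its weights
    for key, tensor in state.items():
        merged[key] = merged.get(key, 0.0) + weight * tensor.float()

torch.save({"state_dict": merged}, "kurumimix.ckpt")
```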
### vae weights
Pastel Mix's VAE is colorful and beautiful, but a bit over-saturated in my view, so a small amount of another VAE is mixed in.
|Model|Contribution|
|-|-|
|[orangemix.vae.pt](https://huggingface.co/WarriorMama777/OrangeMixs)|10%|
|[pastel-waifu-diffusion.vae.pt](https://huggingface.co/andite/pastel-mix)|90%|
## samples



 |
Toying/distilbert-base-uncased-finetuned-emotion | Toying | 2023-02-09T06:06:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-09T05:44:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264887378942147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2107
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.811 | 1.0 | 250 | 0.3073 | 0.905 | 0.9023 |
| 0.2402 | 2.0 | 500 | 0.2107 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
shaoyu17/my_awesome_model | shaoyu17 | 2023-02-09T06:03:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T05:49:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8597
- F1: 0.5171
- Precision: 0.5205
- Recall: 0.52
- Accuracy: 0.52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.6451 | 1.0 | 752 | 0.7708 | 0.4699 | 0.5047 | 0.5035 | 0.5035 |
| 0.5828 | 2.0 | 1504 | 0.7702 | 0.5101 | 0.5106 | 0.5106 | 0.5106 |
| 0.5139 | 3.0 | 2256 | 0.8597 | 0.5171 | 0.5205 | 0.52 | 0.52 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dfm794/poca-SoccerTwos-2-l | dfm794 | 2023-02-09T05:50:03Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-09T05:49:55Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dfm794/poca-SoccerTwos-2-l
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SandyML/ddpm-celebahq-finetuned-butterflies-2epochs | SandyML | 2023-02-09T05:35:07Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-02-09T05:34:23Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('SandyML/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
Dainong2/bert-finetuned-squad | Dainong2 | 2023-02-09T05:13:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-09T03:53:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_base_to_pf10h | rishabhjain16 | 2023-02-09T05:03:28Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-08T15:16:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1929
- Wer: 4.3549
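A minimal transcription sketch with the Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe a local audio file; "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="rishabhjain16/whisper_base_to_pf10h")
print(asr("sample.wav")["text"])
```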
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0326 | 10.0 | 500 | 0.1670 | 5.0398 |
| 0.0019 | 20.0 | 1000 | 0.1728 | 4.5113 |
| 0.0008 | 30.01 | 1500 | 0.1820 | 4.4071 |
| 0.0005 | 40.01 | 2000 | 0.1847 | 4.3773 |
| 0.0004 | 51.0 | 2500 | 0.1886 | 4.3549 |
| 0.0003 | 61.0 | 3000 | 0.1910 | 4.3475 |
| 0.0003 | 71.01 | 3500 | 0.1925 | 4.3549 |
| 0.0002 | 81.01 | 4000 | 0.1929 | 4.3549 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
rishabhjain16/whisper_base_en_to_pf10h | rishabhjain16 | 2023-02-09T05:00:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-08T15:18:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-base.en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base.en
This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1913
- Wer: 3.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0489 | 10.0 | 500 | 0.1624 | 8.5536 |
| 0.0019 | 20.0 | 1000 | 0.1682 | 4.0051 |
| 0.0007 | 30.01 | 1500 | 0.1782 | 4.1167 |
| 0.0004 | 40.01 | 2000 | 0.1823 | 4.0497 |
| 0.0003 | 51.0 | 2500 | 0.1861 | 3.9827 |
| 0.0002 | 61.0 | 3000 | 0.1888 | 3.9753 |
| 0.0002 | 71.01 | 3500 | 0.1907 | 3.9678 |
| 0.0002 | 81.01 | 4000 | 0.1913 | 3.9530 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
PecanPi/q-taxi-v3-v2 | PecanPi | 2023-02-09T04:43:15Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T04:41:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (it downloads and unpickles the model).
model = load_from_hub(repo_id="PecanPi/q-taxi-v3-v2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pfunk/Pong-v4-DQPN_p50_e0.50-seed1 | pfunk | 2023-02-09T04:41:31Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T04:41:12Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 7.20 +/- 4.85
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_e0.50.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p50_e0.50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p50_e0.50 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.50-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.50-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.50-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p50_e0.50 --start-policy-f 50000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.5,
'exp_name': 'DQPN_p50_e0.50',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 50000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
PecanPi/q-taxi-v3 | PecanPi | 2023-02-09T04:34:24Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T04:34:20Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="PecanPi/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sweaterr/pegasus-samsum | sweaterr | 2023-02-09T04:34:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-09T03:33:08Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6928 | 0.54 | 500 | 1.4812 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
PecanPi/q-FrozenLake-v1-4x4-noSlippery | PecanPi | 2023-02-09T04:31:28Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T04:31:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="PecanPi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
UtopiansRareTruth/poca-SoccerTwos | UtopiansRareTruth | 2023-02-09T04:03:42Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-08T08:25:46Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: UtopiansRareTruth/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mindspore-ai/LeNet | mindspore-ai | 2023-02-09T03:26:50Z | 77 | 8 | mindspore | [
"mindspore",
"image-classification",
"dataset:mnist",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
library_name: mindspore
tags:
- image-classification
datasets:
- mnist
---
## MindSpore Image Classification models with MNIST on the 🤗Hub!
This repository contains the model from [this notebook on image classification with MNIST dataset using LeNet architecture](https://gitee.com/mindspore/mindspore/blob/r1.2/model_zoo/official/cv/lenet/README.md#).
## LeNet Description
LeNet-5 is one of the earliest convolutional neural network architectures, proposed by Yann LeCun and others in 1998 in the research paper Gradient-Based Learning Applied to Document Recognition. They used this architecture for recognizing handwritten and machine-printed characters.
The main reason behind the popularity of this model is its simple and straightforward architecture: a multi-layer convolutional neural network for image classification.

[source](https://www.analyticsvidhya.com/blog/2021/03/the-architecture-of-lenet-5/)
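For illustration, a minimal LeNet-5 definition in MindSpore could look like the sketch below (assuming 32x32 grayscale inputs as in the original paper; this is not the exact code from the linked recipe).
```python
# Minimal LeNet-5 sketch in MindSpore (illustrative; assumes 32x32 grayscale inputs,
# so the flattened feature map is 16 * 5 * 5 = 400).
import mindspore.nn as nn

class LeNet5(nn.Cell):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5, pad_mode="valid")
        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode="valid")
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()
        self.fc1 = nn.Dense(16 * 5 * 5, 120)
        self.fc2 = nn.Dense(120, 84)
        self.fc3 = nn.Dense(84, num_classes)

    def construct(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.fc3(x)
```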
|
espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp | espnet | 2023-02-09T03:21:16Z | 5 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librimix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2023-02-09T03:19:33Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librimix
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp`
This model was trained by Pengcheng Guo using librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fe824770250485b77c68e8ca041922b8779b5c94
pip install -e .
cd egs2/librimix/sot_asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Dec 29 13:36:46 CST 2022`
- python version: `3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: ``
- Commit date: ``
## asr_train_sot_asr_conformer_wavlm_raw_en_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|123853|82.9|15.1|2.0|2.4|19.4|97.1|
|decode_sot_asr_model_valid.acc.ave/test|3000|111243|85.1|13.0|1.9|2.1|17.1|96.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|670222|92.2|4.9|2.9|2.7|10.6|97.1|
|decode_sot_asr_model_valid.acc.ave/test|3000|605408|93.2|4.1|2.6|2.3|9.1|96.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tunining/train_sot_asr_conformer_wavlm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_sot_asr_conformer_wavlm_raw_en_char_sp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38431
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- <sc>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- C
- W
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_local
path_or_url: /home/work_nfs6/pcguo/asr/librimix/hub/wavlm_large.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 128
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d2
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: multi
preprocessor_conf:
speaker_change_symbol:
- <sc>
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
pozman/distilbert-base-uncased-finetuned-squad | pozman | 2023-02-09T03:17:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-09T01:52:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1519
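As a quick illustration (the question and context below are made up, not taken from SQuAD), the model can be used with the `question-answering` pipeline:
```python
# Minimal usage sketch (the question and context are made-up examples).
from transformers import pipeline

qa = pipeline("question-answering", model="pozman/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```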
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2224 | 1.0 | 5533 | 1.1604 |
| 0.9577 | 2.0 | 11066 | 1.1244 |
| 0.7436 | 3.0 | 16599 | 1.1519 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nolanaatama/hyprbrsts | nolanaatama | 2023-02-09T03:11:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T03:08:05Z | ---
license: creativeml-openrail-m
---
|
Tune-A-Video-library/redshift-man-skiing | Tune-A-Video-library | 2023-02-09T03:06:45Z | 26 | 14 | diffusers | [
"diffusers",
"tune-a-video",
"text-to-video",
"arxiv:2212.11565",
"arxiv:2112.10752",
"base_model:nitrosocke/redshift-diffusion",
"base_model:finetune:nitrosocke/redshift-diffusion",
"license:creativeml-openrail-m",
"diffusers:TuneAVideoPipeline",
"region:us"
]
| text-to-video | 2023-02-07T03:09:46Z | ---
license: creativeml-openrail-m
base_model: nitrosocke/redshift-diffusion
training_prompt: A man is skiing.
tags:
- tune-a-video
- text-to-video
- diffusers
inference: false
---
# Tune-A-Video - Redshift
## Model Description
- Base model: [nitrosocke/redshift-diffusion](https://huggingface.co/nitrosocke/redshift-diffusion)
- Training prompt: a man is skiing.

## Samples

Test prompt: (redshift style) [spider man/black widow/batman/hulk] is skiing.
## Usage
Clone the [github repo](https://github.com/showlab/Tune-A-Video)
```bash
git clone https://github.com/showlab/Tune-A-Video.git
```
Run inference code
```python
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.models.unet import UNet3DConditionModel
from tuneavideo.util import save_videos_grid
import torch
pretrained_model_path = "nitrosocke/redshift-diffusion"
unet_model_path = "Tune-A-Video-library/redshift-man-skiing"
unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder='unet', torch_dtype=torch.float16).to('cuda')
pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
prompt = "(redshift style) spider man is skiing"
video = pipe(prompt, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=7.5).videos
save_videos_grid(video, f"./{prompt}.gif")
```
## Related Papers:
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models
|
Isaacp/Reinforce-pixelcopter | Isaacp | 2023-02-09T02:34:28Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T02:34:20Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.90 +/- 33.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bbbbearczx/bert-finetuned-squad | bbbbearczx | 2023-02-09T01:46:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-08T05:13:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
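As a quick illustration with the lower-level Transformers API (the question and context below are made up), extractive QA amounts to picking the highest-scoring start/end token positions:
```python
# Minimal extractive-QA sketch with the low-level API (made-up example text).
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bbbbearczx/bert-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bbbbearczx/bert-finetuned-squad")

question = "What was the model fine-tuned on?"
context = "bert-finetuned-squad is a bert-base-cased checkpoint fine-tuned on the SQuAD dataset."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```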
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hectorjelly/ppo-SnowballTarge2 | hectorjelly | 2023-02-09T01:20:34Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-09T01:20:29Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: hectorjelly/ppo-SnowballTarge2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cosc/sketchstyle-cutesexyrobutts | cosc | 2023-02-09T00:15:16Z | 502 | 48 | diffusers | [
"diffusers",
"stable-diffusion",
"art",
"cutesexyrobutts",
"style",
"dreambooth",
"text-to-image",
"en",
"dataset:Cosk/cutesexyrobutts",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-12-19T22:52:59Z | ---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- stable-diffusion
- art
- cutesexyrobutts
- style
- dreambooth
datasets:
- Cosk/cutesexyrobutts
library_name: diffusers
widget:
- text: portrait of a beautiful girl
- text: beautiful girl, playboy bunny, dark skin, black hair, blunt bangs, ponytail
---
# 'Sketchstyle' (cutesexyrobutts style)
Base model: https://huggingface.co/Linaqruf/anything-v3.0.</br>
Used 'fast-DreamBooth' on Google Colab and 768x768 images for all versions.
## NEW: Merges
*Merging sketchstyle models with other models will help to improve anatomy and other elements while also trying to keep the intended style as much as possible.</br>
I will upload new merges from time to time, if any of them improves on the previous ones.</br>
A 'weak' model means more weight is given to the cutesexyrobutts style, and a 'strong' model means there is a little more focus on the other model/models.</br>
Weak models might maintain a little more of the style but could have some anatomy problems, while strong models keep better anatomy though the style might become a little affected. A low CFG Scale (5-9) and using the "sketchstyle" token in the prompts might help with keeping the style on strong models.</br>*
**List of merges:**
- Pastelmix 0.2 + sketchstyle_v4-42k 0.8 weak (weighted sum, fp16)
- Pastelmix 0.4 + sketchstyle_v4-42k 0.6 strong (weighted sum, fp16)
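For reference, 'weighted sum' here means linearly interpolating the weights of the two checkpoints. A minimal sketch of that operation is below (filenames are hypothetical; in practice such merges are usually done with a checkpoint-merger UI):
```python
# Minimal weighted-sum merge sketch (hypothetical filenames; real merges are usually
# done with a checkpoint-merger UI, and .ckpt layouts may differ slightly).
import torch

a = torch.load("sketchstyle_v4-42k.ckpt", map_location="cpu")["state_dict"]
b = torch.load("pastelmix.ckpt", map_location="cpu")["state_dict"]

alpha = 0.8  # weight kept on the sketchstyle model, i.e. the 'weak' (style-heavy) merge
merged = {k: alpha * a[k] + (1 - alpha) * b[k] for k in a.keys() & b.keys()}

torch.save({"state_dict": merged}, "sketchstyle_pastelmix_weak.ckpt")
```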
**Versions:**
- V1: Trained with around 1300 images (from danbooru), automatically cropped.
- V2: Trained with 400 handpicked and handcropped images.
- V3: Trained with the same images as V2, but with 'style training' enabled.
- V4: Trained with 407 images, including 'captions' for each image.
**Recommended to use:**
- V4-42k (pretty good style and decent anatomy, might be the best)
- V3-40k (decent style and anatomy)
- V4-10k (best anatomy, meh style)
- V4-100k (good style, bad anatomy/hard to use, useful with img2img)
**Usage recommendations:**
- For V4, don't use a CFG Scale over 11-12, as it will generate an overcooked image. Try between 6 and 9 at first. 9 seems to be the best if you're using 'sketchstyle' in the prompt; if not, use a lower value.
- Generating specific characters might be hard, result in bad anatomy, or not work at all. If you want a specific character, it's best to use img2img with an image generated by another model.
- Going over a certain resolution will generate incoherent results, so try staying close to 768x768 (examples: 640x896, 768x960, 640x1024, 832x640, and similar). Maybe Hires fix could help.
- Make sure to add nsfw/nipples/huge or large breasts in the negative prompts if you don't want any of those.
- Skin tone tends to be 'tan'; use dark skin/tan in the negative prompts if that's the case, and/or pale skin in the prompts.
- Using img2img to change the style of another image generally gives the best results, examples below.
Pay attention to this number. Normally going below 75 generates bad results, especially with high-step models like V4-100k. Best with 100+.

Token: 'sketchstyle' (if used, anatomy may get affected, but it can be useful for models with low steps to get a better style)<br />
**Limitations and known errors:**
- Not very good anatomy
- Sometimes it generates artifacts, specially on the eyes and lips
- Tends to generate skimpy clothes, open clothes, cutouts, and similar
- Might generate unclear outlines
Try using inpainting and/or img2img to fix these.
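Since the repository also ships diffusers weights, a minimal text-to-image sketch is given below (prompt and settings are only illustrative; they follow the CFG scale and resolution recommendations above):
```python
# Minimal diffusers sketch (illustrative prompt and settings; they follow the
# CFG scale and resolution recommendations above, default scheduler assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cosc/sketchstyle-cutesexyrobutts", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "sketchstyle, masterpiece, best quality, portrait of a beautiful girl",
    negative_prompt="lowres, bad anatomy, bad hands, huge breasts, nsfw",
    height=768, width=768, num_inference_steps=40, guidance_scale=9,
).images[0]
image.save("sketchstyle_sample.png")
```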
# Comparison between different versions and models
As you can see, robutts tends to give less coherent results and might need more prompting/steps to get good results (tried on other things as well, with similar results).

V2 with 10k steps or lower tends to give better anatomy results, and over that the style appears more apparent, so 10k is the 'sweet spot'.

Around 40 steps seems to be the best, but you should start with 20 steps and, if you get an image you like, increase the step count to 40 or 50.

Comparison between not completing that negative prompt and increasing the strength too much.

Comparison (using V3-5k) of token strength.

Another comparison of token strength using V3-15k.

Comparison, from 1 to 30 steps, between NovelAI - Sketchstyle V3-27500 (img2img with NovelAI image) - Sketchstyle V3-27500. Using Euler sampler.

# Examples:

```bibtex
Prompt: (masterpiece,best quality,ultra-detailed),((((face close-up)))),((profile)),((lips,pink_eyes)),((pink_hair,hair_slicked_back,hair_strand)),(serious),portrait,frown,arms_up,adjusting_hair,eyelashes,parted_lips,(sportswear,crop_top),toned,collarbone,ponytail,1girl,solo,highres<br />
Negative prompt: (deformed,disfigured),(sitting,fat,thick,thick_thighs,nsfw),open_clothes,open_shirt,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 70, Sampler: Euler, CFG scale: 12, Seed: 1365838486, Size: 768x768, Model: Sketchstyle V3-5k
```
_Eyes fixed with inpainting_:

```bibtex
Prompt: (masterpiece,best quality,ultra-detailed),((((face close-up)))),((profile)),((lips,pink_eyes)),((pink_hair,hair_slicked_back,hair_strand)),(serious),portrait,frown,arms_up,adjusting_hair,eyelashes,parted_lips,(sportswear,crop_top),toned,collarbone,ponytail,1girl,solo,highres<br />
Negative prompt: (deformed,disfigured),(sitting,fat,thick,thick_thighs,nsfw),open_clothes,open_shirt,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 34, Sampler: Euler, CFG scale: 12, Seed: 996011741, Size: 768x768, Denoising strength: 0.6, Mask blur: 8, Model: Sketchstyle V2-10k
```

```bibtex
Prompt: sketchstyle,(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body,parted_lips),1girl, (nipples), (fox ears,animal_ear_fluff), (bare_shoulders,eyelashes,lips,orange eyes,blush),orange_hair,((onsen,indoors)),(toned),medium_breasts,navel,cleavage,looking at viewer,collarbone,hair bun, solo, highres,(nsfw)<br />
Negative prompt: (dark-skin,dark_nipples,extra_nipples),deformed,disfigured,(sitting,fat,thick,thick_thighs,nsfw),open_clothes,open_shirt,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 30, Sampler: Euler, CFG scale: 12, Seed: 4172541433, Size: 640x832, Model: Sketchstyle V3-5k
```

```bibtex
Prompt: sketchstyle,(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body),1girl, (nipples), (fox ears,animal_ear_fluff), (bare_shoulders,eyelashes,lips,orange eyes,ringed_eyes,shy,blush),onsen,indoors,medium_breasts, cleavage,looking at viewer,collarbone,hair bun, solo, highres,(nsfw)<br />
Negative prompt: Negative prompt: (huge_breasts,large_breasts),realistic,3D,3D Game,nsfw,lowres, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth<br />
Steps: 40, Sampler: Euler, CFG scale: 14, Seed: 4268937236, Size: 704x896, Model: Sketchstyle V3-5k
```

```bibtex
Prompt: (masterpiece,best quality,ultra detailed),(((facing_away,sitting,arm_support,thighs,legs))),(((from_behind,toned,ass,bare back,breasts))),((thong,garter_belt,garter_straps,lingerie)),(hair_flower,bed_sheet),(black_hair,braid,braided_ponytail,long_hair),1girl,grey_background,thighs,solo,highres<br />
Negative prompt: ((deformed)),((looking_back,looking_at_viewer,face)),((out_of_frame,cropped)),(fat,thick,thick_thighs),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, patreon_logo, patreon_username, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 50, Sampler: Euler, CFG scale: 12, Seed: 3765393440, Size: 640x832, Model: Sketchstyle V3-5k
```

```bibtex
Prompt: (masterpiece,best quality,ultra detailed),(((facing_away,sitting,arm_support,thighs,legs))),(((from_behind,toned,ass,bare back))),((thong,garter_belt,garter_straps,lingerie)),(hair_flower,bed_sheet),(black_hair,braid,braided_ponytail,long_hair),1girl,grey_background,thighs,solo,highres<br />
Negative prompt: backboob,((deformed)),((looking_back,looking_at_viewer,face)),((out_of_frame,cropped)),(fat,thick,thick_thighs),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, patreon_logo, patreon_username, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 50, Sampler: Euler, CFG scale: 12, Seed: 2346086519, Size: 640x832, Model: Sketchstyle V3-5k
```

```bibtex
Prompt: (masterpiece,best quality,ultra-detailed),(sketchstyle),(arms_up,tying_hair),(large_breasts,nipples),(long_hair,blonde_hair,tied_hair,ponytail,collarbone,navel,stomach,midriff,completely_nude,nude,toned),((cleft_of_venus,pussy)),cloudy_sky,1girl,solo,highres,(nsfw)<br />
Negative prompt: (deformed,disfigured,bad proportions,exaggerated),from_behind,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),((fat,thick,thick_thighs)),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 40, Sampler: Euler, CFG scale: 12, Seed: 4024165718, Size: 704x960, Model: Sketchstyle V3-5k
```

```bibtex
Prompt: (masterpiece,best quality),(sketchstyle),((1boy,male_focus)),((close-up,portrait)),((black_shirt)),((((red collared_coat)))),((dante_\(devil_may_cry\),devil may cry)),((medium_hair,parted_hair,parted_bangs,forehead,white_hair)),((stubble)),(facial_hair),(popped_collar,open_coat),(closed_mouth,smile),blue_eyes,looking_at_viewer,solo,highres<br />
Negative prompt: ((deformed)),(nsfw),(long_hair,short_hair,young,genderswap,1girl,female,breasts,androgynous),((choker)),(shiny,shiny_hair,shiny_skin,3D,3D game),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),((fat,thick,thick_thighs)),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br />
Steps: 50, Sampler: Euler, CFG scale: 12, Seed: 4166887955, Size: 768x768, Model: Sketchstyle V3-5k
```
# img2img style change examples:

```bibtex
Original settings: Model: NovelAI, Steps: 30, Sampler: Euler a, CFG scale: 16, Seed: 3633297035, Size: 640x960<br />
Original prompt: masterpiece, best quality, 1girl, naked towel, fox ears, orange eyes, wet, ringed eyes, shy, medium breasts, cleavage, looking at viewer, hair bun, blush, solo, highres<br />
Original negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth<br />
New settings: Model: Sketchstyle V3 5k steps, Steps: 33, CFG scale: 12, Seed: 3311014108, Size: 640x960, Denoising strength: 0.6, Mask blur: 4<br />
New prompt: ((sketchstyle)),(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body),1girl, (((naked_towel,towel))), (fox ears,animal_ear_fluff), (bare_shoulders,eyelashes,lips,orange eyes,ringed_eyes,shy,blush),onsen,indoors,medium_breasts, cleavage,looking at viewer,collarbone,hair bun, solo, highres<br />
New negative prompt: (nipples,huge_breasts,large_breasts),realistic,3D,3D Game,nsfw,lowres, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth<br />
```

```bibtex
Original settings: Model: NovelAI, Steps: 30, Sampler: Euler a, CFG scale: 16, Seed: 764529639, Size: 640x960<br />
Prompt: masterpiece, highest quality, (1girl), (looking at viewer), ((pov)), fox ears, ((leaning forward)), [light smile], ((camisole)), short shorts, (cleavage), (((medium breasts))), blonde, (high ponytail), (highres)<br />
Negative prompt: ((deformed)), (duplicated), lowres, ((missing animal ears)), ((poorly drawn face)), ((poorly drawn eyes)), (extra limb), (mutation), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (fused fingers), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, realistic, realism, huge breasts<br />
New settings: Model: Sketchstyle V3 5k steps, Steps: 28, CFG scale: 12, Seed: 1866024520, Size: 640x960, Denoising strength: 0.7, Mask blur: 8
```

```bibtex
Original settings: Model: NovelAI, Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2604970030, Size: 640x896<br />
Original prompt: (masterpiece),(best quality),((sketch)),(ultra detailed),(1girl, teenage),((white hair, messy hair)),((expressionless)),(black jacket, long sleeves),((grey scarf)),((squatting)), (hands on own knees),((plaid_skirt, pleated skirt, miniskirt)),(fox ears, extra ears, white fox tail, fox girl, animal ear fluff),black ((boots)),full body,bangs,ahoge,(grey eyes),solo,absurdres<br />
Negative prompt: ((deformed)),((loli, young)),(kneehighs,thighhighs),long body, long legs),lowres,((((poorly drawn fingers, poorly drawn hands)))),((anatomic nonsense)),(extra fingers),((fused fingers)),(plaid scarf),(spread legs),((one hand with more than 5 fingers)), ((one hand with less than 5 fingers)),((bad eyes)),(twin, multiple girls, 2girls),(separated eyes),(long neck),((bad proportions)),(bad lips),((thick lips)),loli,long body,(((poorly drawn eyes))),((poorly drawn)),((bad drawing)),(blurry),(((mutation))),(((bad anatomy))),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (poorly drawn feet), (fused toes), (mutated hands and fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, (realistic), face mask<br />
New settings: Model: Sketchstyle V3 5k steps, Steps: 45, CFG scale: 12, Seed: 1073378414, Size: 640x896, Denoising strength: 0.6, Mask blur: 8<br />
New prompt: (masterpiece),(best quality),(sketchstyle),(ultra detailed),(1girl, teenage),((white hair, messy hair)),((expressionless)),(black jacket, long sleeves),((grey scarf)),((squatting)), (hands on own knees),((plaid_skirt, pleated skirt, miniskirt)),(fox ears, extra ears, white fox tail, fox girl, animal ear fluff),black ((boots)),full body,bangs,ahoge,(grey eyes),solo,absurdres<br />
```

```bibtex
Original settings: Model: NovelAI, Steps: 30, Sampler: Euler a, CFG scale: 12, Seed: 3659534337, Size: 768x832<br />
Original prompt: ((masterpiece)), ((highest quality)),(((ultra-detailed))),(illustration),(1girl), portrait,((wolf ears)),(beautiful eyes),looking at viewer,dress shirt,shadows,((ponytail)), (white hair), ((sidelocks)),outdoors,bangs, solo, highres<br />
Original negative prompt: ((deformed)), lowres,loli,((monochrome)),(black and white),((lips)),long body,(((poorly drawn eyes))),((out of frame)),((poorly drawn)),((bad drawing)),(blurry),depth of field,(fused fingers),(((mutation))),((bad anatomy)),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, realism, face mask<br />
New settings: Model: Sketchstyle V3-20k 2000steps text encoder, Steps: 80, CFG scale: 12, Seed: 3001145714, Size: 768x832, Denoising strength: 0.5, Mask blur: 4<br />
New prompt: ((sketchstyle)),(masterpiece,best quality,highest quality,illustration),((ultra-detailed)),1girl,(portrait,close-up),((wolf_girl,wolf_ears)),(eyelashes,detailed eyes,beautiful eyes),looking at viewer,(collared-shirt,white_shirt),((ponytail)), (white hair), ((sidelocks)),(blue eyes),closed_mouth,(shadows,outdoors,sunlight,grass,trees),hair_between_eyes,bangs,solo,highres<br />
New negative prompt: ((deformed)),(less than 5 fingers, more than 5 fingers,bad hands,bad hand anatomy,missing fingers, extra fingers, mutated hands, disfigured hands, deformed hands),lowres,loli,((monochrome)),(black and white),((lips)),long body,(((poorly drawn eyes))),((out of frame)),((poorly drawn)),((bad drawing)),(blurry),depth of field,(fused fingers),(((mutation))),((bad anatomy)),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, realism, face mask<br />
```

```bibtex
Original settings: Model: NovelAI, Steps: 20, Sampler: Euler, CFG scale: 11, Seed: 2413712316, Size: 768x768<br />
Original prompt: (masterpiece,best quality,ultra-detailed,detailed_eyes),(sketch),((portrait,face focus)),(((shaded eyes))),(wavy hair),(((ringed eyes,red_hair))),((black hair ribbon)),((hair behind ear)),(((short ponytail))),(blush lines),(good anatomy),(((hair strands))),(bangs),((lips)),[teeth, tongue],yellow eyes,(eyelashes),shirt, v-neck,collarbone,cleavage,breasts,(medium hair),(sidelocks),looking at viewer,(shiny hair),1girl,solo,highres<br />
Original negative prompt: ((deformed)),lowres,(black hair),(formal),earrings,(twin, multiple girls, 2girls),(braided bangs),((big eyes)),((close up, eye focus)),(separated eyes),(multiple eyebrows),((eyebrows visible through hair)),(long neck),(bad lips),(tongue out),((thick lips)),(from below),loli,long body,(((poorly drawn eyes))),((poorly drawn)),((bad drawing)),((blurry)),depth of field,(fused fingers),(((mutation))),(((bad anatomy))),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored,doubled, huge breasts, black and white, monochrome, 3D Game, 3D, (realistic), face mask<br />
New settings: (img2img with original image, then again with the new generated image, inpainted to fix the neck) Model: Sketchstyle V3-27.5k 2000steps text encoder, Steps: 80, CFG scale: 12, Seed: 1237755461 / 1353966202, Size: 832x832, Denoising strength: 0.5 / 0.3, Mask blur: 4<br />
New prompt: sketchstyle,(masterpiece,best quality,ultra-detailed,detailed_eyes),(((portrait,face focus,close-up))),(((shaded eyes))),(wavy hair),(((ringed eyes,red_hair))),((black hair ribbon)),((hair behind ear)),(((short ponytail))),(blush lines),(good anatomy),(((hair strands))),(bangs),((lips)),[teeth, tongue],(yellow eyes,eyelashes,tsurime,slanted_eyes),shirt, v-neck,collarbone,breasts,(medium hair),(sidelocks),looking at viewer,(shiny hair),1girl,solo,highres<br />
New negative prompt: ((deformed)),((loli,young)),lowres,(black hair),(formal),earrings,(twin, multiple girls, 2girls),(braided bangs),((big eyes)),((close up, eye focus)),(separated eyes),(multiple eyebrows),((eyebrows visible through hair)),(long neck),(bad lips),(tongue out),((thick lips)),(from below),loli,long body,(((poorly drawn eyes))),((poorly drawn)),((bad drawing)),((blurry)),depth of field,(fused fingers),(((mutation))),(((bad anatomy))),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored,doubled, huge breasts, black and white, monochrome, 3D Game, 3D, (realistic), face mask<br />
``` |
deprem-ml/distilroberta-tweet-clustering-embeddings | deprem-ml | 2023-02-09T00:11:25Z | 0 | 0 | null | [
"feature-extraction",
"tr",
"license:apache-2.0",
"region:us"
]
| feature-extraction | 2023-02-08T23:54:30Z | ---
license: apache-2.0
language:
- tr
pipeline_tag: feature-extraction
---
## Clustering Embeddings for Topic Modelling
We will keep updating this repository. The notebook is here: https://github.com/metekemertas/deprem-intent-classification-tfidf/blob/main/unsupervised_analysis.py
This repo contains the embeddings extracted for identifying needs (topic modelling) in the tweet data. |
daripaez/poca-SoccerTwos-1 | daripaez | 2023-02-09T00:02:59Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-09T00:02:26Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: daripaez/poca-SoccerTwos-1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
deetsml/dummy-model | deetsml | 2023-02-08T23:37:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text-classification",
"en",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T21:25:20Z | ---
tags:
- text-classification
- transformers
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 14756 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "accuracy",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 14756,
"warmup_steps": 1476,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BartModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Nyaaneet/donut-cord | Nyaaneet | 2023-02-08T22:39:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-02-06T17:19:06Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: donut-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-cord
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base).
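A minimal inference sketch for a Donut-style model is given below (the task prompt token, the input image, and the presence of processor files in this repo are assumptions, since they are not documented in this card):
```python
# Minimal Donut inference sketch (assumptions: the repo includes processor files,
# the task start token is CORD-style, and the input image path is hypothetical).
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Nyaaneet/donut-cord")
model = VisionEncoderDecoderModel.from_pretrained("Nyaaneet/donut-cord")

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # assumed task start token, not documented in this card
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```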
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
rerdscf/Embed | rerdscf | 2023-02-08T22:37:40Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-08T15:31:05Z | ---
license: creativeml-openrail-m
---
|
eduiqe/ppo-LunarLander-v2 | eduiqe | 2023-02-08T22:12:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-23T02:07:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.92 +/- 20.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
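For example, a minimal load-and-evaluate sketch (the checkpoint filename is an assumption based on the usual naming convention; check the repository's file list):
```python
# Minimal load-and-evaluate sketch (assumption: the checkpoint file is named
# "ppo-LunarLander-v2.zip"; check the repository's file list for the actual name).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="eduiqe/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```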
|
Nonin/ppo-Huggy | Nonin | 2023-02-08T21:57:45Z | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-08T21:57:38Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Nonin/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
saurabhnaik/dqn-SpaceInvadersNoFrameskip-v4 | saurabhnaik | 2023-02-08T21:21:02Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T19:47:58Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 578.00 +/- 157.66
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saurabhnaik -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saurabhnaik -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga saurabhnaik
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Kilgori/inisanium-model | Kilgori | 2023-02-08T20:58:20Z | 17 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-07T23:32:18Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Inisanium-Model Dreambooth model trained by Kilgori with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
!!!IMPORTANT!!! Really NSFW model; use at your own risk!
This model can generate anything from furry futanari nsfw to kinda realistic human women.
This model uses tags and you will be able to see the captions used if you download the captions zip.
It's an unstable model. Clip skip changes the results substantially, so use it as you wish.
Hard to use, bigger prompts = better images usually.
It doesn't do sfw furries in my experience.
DM me on discord if you want access to the images used to train this model. Kilgorio#6392
😁
Sample pictures of this concept:


,_best_quality,_masterpiece,_highres,_original,_extremely_detailed_wallpaper,_looking_at_viewer,_(sitting_1.4),.png)


),_best_quality,_perfect_anatomy,_(1girl,_solo_focus_1.4),_pov,_looking_at_viewer,_flower_trim,(perspective,_sidew.png)



),_best_quality,_perfect_anatomy,_(1girl,_solo_focus_1.4),_pov,_looking_at_viewer,_flower_trim,(perspective,_sidew.png)





|
aimarsg/pharmacoNER | aimarsg | 2023-02-08T20:53:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:pharmaconer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-08T20:41:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pharmaconer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pharmacoNER
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: pharmaconer
type: pharmaconer
config: PharmaCoNER
split: validation
args: PharmaCoNER
metrics:
- name: Precision
type: precision
value: 0.9057634526085769
- name: Recall
type: recall
value: 0.9025585193249864
- name: F1
type: f1
value: 0.9041581458759373
- name: Accuracy
type: accuracy
value: 0.9948434782608696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pharmacoNER
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the pharmaconer dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0251
- Precision: 0.9058
- Recall: 0.9026
- F1: 0.9042
- Accuracy: 0.9948
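A minimal inference sketch with the `transformers` NER pipeline (assuming the checkpoint loads directly from the Hub; the example sentence is illustrative):
```python
from transformers import pipeline

# Aggregate sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="aimarsg/pharmacoNER",
    aggregation_strategy="simple",
)
print(ner("Se administró paracetamol 1 g y omeprazol 20 mg al paciente."))
```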
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0272 | 1.0 | 1017 | 0.0288 | 0.8047 | 0.8503 | 0.8269 | 0.9914 |
| 0.0114 | 2.0 | 2034 | 0.0240 | 0.8950 | 0.8998 | 0.8974 | 0.9945 |
| 0.006 | 3.0 | 3051 | 0.0251 | 0.9058 | 0.9026 | 0.9042 | 0.9948 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hulkster/sd-class-butterflies-32 | hulkster | 2023-02-08T20:52:34Z | 2 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-02-08T20:52:18Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('hulkster/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
huggingtweets/101dadjokes-dadsjokes | huggingtweets | 2023-02-08T20:48:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-08T20:45:18Z | ---
language: en
thumbnail: http://www.huggingtweets.com/101dadjokes-dadsjokes/1675889291789/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1406653045757317121/YCS9YykL_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/641271414/dad_jokes_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dad Jokes & Dad Jokes</div>
<div style="text-align: center; font-size: 14px;">@101dadjokes-dadsjokes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dad Jokes & Dad Jokes.
| Data | Dad Jokes | Dad Jokes |
| --- | --- | --- |
| Tweets downloaded | 184 | 2043 |
| Retweets | 14 | 0 |
| Short tweets | 10 | 123 |
| Tweets kept | 160 | 1920 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/od2iwqt2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @101dadjokes-dadsjokes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7ruisgab) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7ruisgab/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/101dadjokes-dadsjokes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Iggg0r/rl_course | Iggg0r | 2023-02-08T20:32:19Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T19:18:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.37 +/- 14.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
JD97/Riffusion_sentiment_LoRA | JD97 | 2023-02-08T20:29:17Z | 10 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"diffusion",
"riffusion",
"text-to-audio",
"text-to-image",
"en",
"dataset:gwkim22/spectro_caption_dataset",
"dataset:Chr0my/Epidemic_music",
"license:mit",
"region:us"
]
| text-to-image | 2023-02-08T15:36:09Z | ---
license: mit
datasets:
- gwkim22/spectro_caption_dataset
- Chr0my/Epidemic_music
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- diffusion
- riffusion
- text-to-audio
---
### Introduction
Riffusion with LoRA, fine-tuned with <code>Chr0my/Epidemic_music</code> <br/>
This model was used during Naver Connect BoostCamp AI tech 4th, NLP Track
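A rough, untested sketch of how the LoRA weights might be attached to the base Riffusion checkpoint with `diffusers`; the base model id and the loading call are assumptions that depend on how the weights were exported:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base Riffusion checkpoint (assumption); this repo then supplies the LoRA attention weights.
pipe = StableDiffusionPipeline.from_pretrained("riffusion/riffusion-model-v1")
pipe.unet.load_attn_procs("JD97/Riffusion_sentiment_LoRA")  # assumes weights readable by load_attn_procs
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# Riffusion generates spectrogram images that are later converted to audio.
image = pipe("calm melancholic piano melody").images[0]
image.save("spectrogram.png")
```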
### Citation
~~~
@article{Forsgren_Martiros_2022,
author = {Forsgren, Seth* and Martiros, Hayk*},
title = {{Riffusion - Stable diffusion for real-time music generation}},
url = {https://riffusion.com/about},
year = {2022}
}
~~~ |
PecanPi/ppo-LunarLander-v2 | PecanPi | 2023-02-08T20:21:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T14:40:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.86 +/- 21.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mfrayha/marcelo | mfrayha | 2023-02-08T20:03:59Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-08T19:50:13Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Marcelo Dreambooth model trained by mfrayha with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
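A minimal `diffusers` sketch, assuming the repo hosts standard Stable Diffusion weights; the concept token in the prompt is a guess based on the model name, so check the training settings for the exact instance prompt:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("mfrayha/marcelo")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# "marcelo" as the concept token is an assumption.
image = pipe("a portrait photo of marcelo, highly detailed").images[0]
image.save("marcelo.png")
```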
Sample pictures of this concept:
|
Luisfrdz/PPO-RL-1-LunarLander-v3 | Luisfrdz | 2023-02-08T20:01:54Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T18:23:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-RL-Agent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 175.25 +/- 116.16
name: mean_reward
verified: false
---
# **PPO-RL-Agent** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-RL-Agent** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DeepaKrish/distilbert-base-uncased-finetuned-squad | DeepaKrish | 2023-02-08T19:40:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-08T14:55:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
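A minimal extractive-QA sketch with the `transformers` pipeline (assuming the checkpoint loads directly from the Hub; the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="DeepaKrish/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```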
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 78 | 0.7144 |
| No log | 2.0 | 156 | 0.3996 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.9.0
- Datasets 2.5.1
- Tokenizers 0.13.2
|
Snim/dqn-SpaceInvadersNoFrameskip-v4 | Snim | 2023-02-08T19:25:49Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T19:25:04Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 753.50 +/- 272.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Snim -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Snim -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Snim
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
kitintouch/kit-the-bear | kitintouch | 2023-02-08T18:44:56Z | 0 | 0 | null | [
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-08T18:44:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: kitthebear
---
### kit the bear Dreambooth model trained by kitintouch with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
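If the repo hosts `diffusers`-format weights (an assumption), a minimal inference sketch using the concept token would look like:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("kitintouch/kit-the-bear")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# "kitthebear" is the concept token given in this card.
image = pipe("a photo of kitthebear sitting on a sofa").images[0]
image.save("kitthebear.png")
```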
Sample pictures of:
kitthebear (use that on your prompt)

|
ernie-ai/finetuned-vit-image-text-classifier | ernie-ai | 2023-02-08T18:36:19Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-08T06:08:50Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: finetuned-vit-doc-text-classifer
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: ernie-ai/image-text-examples-ar-cn-latin-notext
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9029850746268657
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-vit-doc-text-classifer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ernie-ai/image-text-examples-ar-cn-latin-notext dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3107
- Accuracy: 0.9030
## Model description
It is an image classification model fine-tuned to predict whether an image contains text and, if so, whether that text is Latin script, Chinese, or Arabic. It also classifies non-text images.
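A minimal inference sketch with the `transformers` image-classification pipeline (the file path is illustrative, and the labels in the comment are placeholders):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ernie-ai/finetuned-vit-image-text-classifier")
# Accepts a local path, URL, or PIL image; returns labels with scores,
# e.g. [{"label": "...", "score": 0.97}, ...]
print(classifier("page_scan.png"))
```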
## Training and evaluation data
Dataset: ernie-ai/image-text-examples-ar-cn-latin-notext
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2719 | 2.08 | 100 | 0.4120 | 0.8657 |
| 0.1027 | 4.17 | 200 | 0.3907 | 0.8881 |
| 0.0723 | 6.25 | 300 | 0.3107 | 0.9030 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fjaragones/Taxi-v3 | fjaragones | 2023-02-08T18:28:50Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T18:28:47Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="fjaragones/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/yabukiKentaroStyle_yabukiFinalV10 | LarryAIDraw | 2023-02-08T18:22:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-07T21:21:21Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/7106/yabukikentaro-style |
Luisfrdz/PPO-RL-1-LunarLander-v2 | Luisfrdz | 2023-02-08T18:20:24Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-08T17:18:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-RL-Agent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 229.23 +/- 77.53
name: mean_reward
verified: false
---
# **PPO-RL-Agent** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-RL-Agent** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
pfunk/Pong-v4-DQPN_p50_e0.10-seed1 | pfunk | 2023-02-08T17:41:56Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T17:41:35Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 10.00 +/- 5.67
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_e0.10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p50_e0.10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p50_e0.10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p50_e0.10 --start-policy-f 50000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.1,
'exp_name': 'DQPN_p50_e0.10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 50000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
GFMRommel/Vergelltungswaffe1 | GFMRommel | 2023-02-08T17:27:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-08T10:14:26Z | ---
license: creativeml-openrail-m
---
|
AravindReddy/Gender | AravindReddy | 2023-02-08T17:14:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-02-08T16:56:58Z | ---
title: Which Gender
emoji: 🦀
colorFrom: yellow
colorTo: blue
sdk: gradio
sdk_version: 3.17.0
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Phantom-Artist/phantom-diffusion-s3-the-last-8 | Phantom-Artist | 2023-02-08T16:52:18Z | 0 | 4 | null | [
"art",
"en",
"ja",
"dataset:Phantom-Artist/phantom-diffusion-s3-the-last-8-dataset",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-08T12:28:01Z | ---
license: creativeml-openrail-m
datasets:
- Phantom-Artist/phantom-diffusion-s3-the-last-8-dataset
language:
- en
- ja
tags:
- art
---
These are the final 8 models, trained on top of [hakurei's Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion).
Each model was trained on a notable Japanese (and Taiwanese, I think) AI artist's works using dreambooth, with 30 of their works gathered mainly from twitter (except for sabakichi, whose dataset I collected from pixiv).
It took 1000 steps to train each model, at a learning rate of 5e-6. I have shared the notebook [here](https://huggingface.co/Phantom-Artist/phantom-diffusion/blob/main/DreamBooth_Stable_Diffusion_works_with_SD_2.ipynb) in case you want to try out additional training.
Some are good at backgrounds, while others can generate a semi-realistic style as seen in many SD 1.5 anime/waifu models (and some might be a bit overfitted).
The dataset is published [here](https://huggingface.co/datasets/Phantom-Artist/phantom-diffusion-s3-the-last-8-dataset), and the full members are listed [here](https://twitter.com/i/lists/1622444997240037377) (special thanks to Yunyalula for creating it!).
# For those who are against generative AIs
You can see that now they are the target. Take our tool. Now is the time for payback. Generate images in their styles, and bring the power you once had back to yourselves.
# For those who support the development of generative AIs
Some of the AI artists, even though they take advantage of the open strategy of Stable Diffusion, now tend to hide their prompts, trying to monopolize their style (to be clear, I'm not saying the AI artists I trained here are like that).
To keep protecting our values and beliefs in the open community, and to push back against those trying to create another set of pre-modern-style guilds, I will show you a new way.
You no longer need their prompts; just train on their images by yourself to protect the open community. It's not only legal but also ethical, as they themselves have been taking advantage of datasets built from others' work.
# For those who call themselves "phantom 40"
I saw some claiming there should be 48, and here you go. Phantom 48, or would you rather call yourselves *PTM* 48 instead? It's up to you.
# Why will they be the last?
My initial intention for this series was a social experiment: to see what would happen if AI artists were targeted for personalized training.
As it became more popular than expected and the artists started calling themselves "phantom 20," I added a second goal: to see how they would react after I added 20 more in one day, and whether they could adapt to the sudden change. They handled it well, and I think that's why they could become notable.
All the reactions and interpretations of my actions were impressive, but since I have accomplished my goal, and since the mainstream model will probably be SD 2.1 768 (not SD 2.1 512), I will no longer add new models.
I know I couldn't add some of the artists, but no, I will not do it under the name of phantom.
It takes me around 8 hours to train, test, and upload 20 models, and it's just unsustainable to keep doing that every day.
**From now on, anyone who wishes to add more is the next phantom. Train anyone you wish to by yourself.**
# trained artist list
- atsuwo_AI
- recommended pos: multicolored hair, cg
- fladdict
- recommended pos: oil painting/ancient relief/impressionist impasto oil painting (maybe more)
- possible neg: monkey
- Hifumi_AID
- recommended pos: dark purple hair, emerald eyes
- mayonaka_rr
- recommended pos: cg
- possible pos: dynamic posing, bikini, ponytail
- o81morimori
- possible pos: cg, in a messy apartment room with objects on the floor and the bed
- sabakichi
- possible pos 1: merging underwater, limited pallete, melting underwater, unstable outlines
- possible pos 2: rough sketch, limited pallete, ((unstable outlines)), monotone gradation, dynamic posing
- teftef
- possible pos: light skyblue hair, bun, retropunk gears of a factory
- violet_fizz
- recommended pos: beautiful face, grown up face, long eyes, expressionless
- possible pos: expressionless
# samples
The basic prompt is as follows.
However, to show the potential of these models as much as possible, many of the samples use additional positive tags (such as "in the style of") to get the results below (yes, use ``aitop (ARTIST)_style`` to get the fine-tuned result).
Many work better with the additional prompt ``beautiful face``. Generally speaking, prompting words close to the trained dataset will give you better results.
```
POS: masterpiece, best quality, 1girl, aitop (ARTIST)_style
NEG: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digits, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry, simple background
```
## atsuwo_AI



## fladdict



## Hifumi_AID


## mayonaka_rr



## o81morimori


## sabakichi




## teftef


## violet_fizz

 |
victorivus/Q-Learner-Taxi-v3 | victorivus | 2023-02-08T16:42:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T16:42:47Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Learner-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="victorivus/Q-Learner-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ahmad-alismail/pyramids-RND-1 | ahmad-alismail | 2023-02-08T16:39:34Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-08T16:39:29Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: ahmad1289/pyramids-RND-1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
apatidar0/conversation-summ | apatidar0 | 2023-02-08T16:36:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-08T15:58:12Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: conversation-summ
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 51.7796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conversation-summ
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4048
- Rouge1: 51.7796
- Rouge2: 26.1341
- Rougel: 41.4013
- Rougelsum: 41.4563
- Gen Len: 29.656
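A minimal sketch using the `transformers` summarization pipeline (assuming the checkpoint loads directly from the Hub; the dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="apatidar0/conversation-summ")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure, I'd love that!\n"
    "Amanda: Great, I'll bring you some tomorrow."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```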
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5781 | 1.0 | 500 | 0.3637 | 50.8871 | 26.6178 | 41.8757 | 41.9291 | 25.16 |
| 0.2183 | 2.0 | 1000 | 0.3586 | 50.7919 | 25.4277 | 40.8428 | 40.8421 | 27.712 |
| 0.1354 | 3.0 | 1500 | 0.4048 | 51.7796 | 26.1341 | 41.4013 | 41.4563 | 29.656 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vvn0/a2c-AntBulletEnv-v0 | vvn0 | 2023-02-08T16:30:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T16:29:13Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1442.86 +/- 397.05
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sheldonxxxx/OFA_model_weights | sheldonxxxx | 2023-02-08T16:22:21Z | 0 | 1 | null | [
"visual-question-answering",
"en",
"license:apache-2.0",
"region:us"
]
| visual-question-answering | 2023-02-07T13:54:05Z | ---
license: apache-2.0
language:
- en
pipeline_tag: visual-question-answering
---
This is an unofficial mirror of the model weights for use with https://github.com/OFA-Sys/OFA
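One way to fetch the mirrored files is with `huggingface_hub`, then point the OFA-Sys/OFA code at the local copy (a sketch, not an official instruction):
```python
from huggingface_hub import snapshot_download

# Downloads all files in this repo to the local cache and returns the path.
local_dir = snapshot_download(repo_id="sheldonxxxx/OFA_model_weights")
print(local_dir)
```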
The original link is too slow when downloading from outside of China... |
fathyshalab/massive_calendar-roberta-large-v1-2-0.89 | fathyshalab | 2023-02-08T16:09:11Z | 12 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-08T16:08:47Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/massive_calendar-roberta-large-v1-2-0.89
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-2-0.89")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
davanstrien/dataset_mentions | davanstrien | 2023-02-08T16:02:29Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-08T15:50:40Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# davanstrien/dataset_mentions
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("davanstrien/dataset_mentions")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
griffin/clinical-led-summarizer | griffin | 2023-02-08T15:58:41Z | 11 | 5 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-12T14:29:12Z | # clinical-led-summarizer
HuggingFace model weights for the LongFormer hospital-course summarization model trained on revised references, as described in the Findings of EMNLP 2022 paper "Learning to Revise References for Faithful Summarization".
[Paper Link](https://aclanthology.org/2022.findings-emnlp.296/)
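A minimal inference sketch with `transformers` (assuming the repo contains both tokenizer and LED model files; the input text and generation settings are illustrative):
```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("griffin/clinical-led-summarizer")
model = LEDForConditionalGeneration.from_pretrained("griffin/clinical-led-summarizer")

notes = "Hospital-course notes for a single admission go here."
inputs = tokenizer(notes, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```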
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- MIMIC-III
metrics:
- rouge
- bertscore
---
|
fathyshalab/massive_transport-roberta-large-v1-2-0.15 | fathyshalab | 2023-02-08T15:57:47Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-08T15:57:25Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/massive_transport-roberta-large-v1-2-0.15
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-2-0.15")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
pyflynn/taxi-v3-model-v0 | pyflynn | 2023-02-08T15:42:13Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T15:13:19Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-model-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="pyflynn/taxi-v3-model-v0", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pabloac31/ppo-SnowballTarget | pabloac31 | 2023-02-08T15:24:46Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-08T15:24:40Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: pabloac31/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nasheed/rl-course | nasheed | 2023-02-08T15:22:35Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T15:22:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.60 +/- 12.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
frangiral/dqn-SpaceInvadersNoFrameskip-v4 | frangiral | 2023-02-08T15:08:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T15:07:40Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 422.50 +/- 299.79
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga frangiral
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
quartz14/Reinforce-cartpole | quartz14 | 2023-02-08T15:06:21Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T15:06:07Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mwissing/dqn-SpaceInvadersNoFrameskip-v4 | mwissing | 2023-02-08T15:02:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T15:02:08Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 679.50 +/- 183.98
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mwissing -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mwissing -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mwissing
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
mertyazan/Reinforce-1 | mertyazan | 2023-02-08T15:01:26Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T10:30:26Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.20 +/- 25.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Axel578/flan_t5_summarization | Axel578 | 2023-02-08T15:00:53Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-08T13:06:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan_t5_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_t5_summarization
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6162
- Rouge1: 15.9418
- Rouge2: 7.4447
- Rougel: 15.5655
- Rougelsum: 15.5835
- Gen Len: 18.7313
## Model description
More information needed
## Intended uses & limitations
More information needed
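Even though the intended uses are not documented, the checkpoint can presumably be loaded through the standard Transformers seq2seq API; the sketch below is untested and the `summarize:` prefix is an assumption about the prompt format.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Axel578/flan_t5_summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "summarize:" prefix is an assumption; the card does not document the prompt format.
text = "summarize: " + "Replace this with the article you want to condense."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```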
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 272 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7405 | 2.0 | 544 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7405 | 3.0 | 816 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7453 | 4.0 | 1088 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7453 | 5.0 | 1360 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7372 | 6.0 | 1632 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7372 | 7.0 | 1904 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7436 | 8.0 | 2176 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7436 | 9.0 | 2448 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
| 0.7425 | 10.0 | 2720 | 0.6162 | 15.9418 | 7.4447 | 15.5655 | 15.5835 | 18.7313 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nc33/multiqa_model | nc33 | 2023-02-08T14:58:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-08T12:16:51Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multiqa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiqa_model
This model is a fine-tuned version of [nc33/multiqa_model](https://huggingface.co/nc33/multiqa_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1150
- Precision: 0.0855
- Recall: 0.0485
- F1: 0.0619
- Accuracy: 0.9626
## Model description
More information needed
## Intended uses & limitations
More information needed
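Since the checkpoint carries a RoBERTa token-classification head, it can presumably be queried with the standard pipeline; the label inventory is not documented here, so the predicted tags should be treated as opaque until inspected.

```python
from transformers import pipeline

# Sketch only: the meaning of the predicted tags is not documented in this card.
tagger = pipeline(
    "token-classification",
    model="nc33/multiqa_model",
    aggregation_strategy="simple",
)
print(tagger("Which city hosted the 2012 Summer Olympics?"))
```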
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 327 | 0.1121 | 0.0708 | 0.0280 | 0.0402 | 0.9631 |
| 0.0786 | 2.0 | 654 | 0.1098 | 0.0531 | 0.0254 | 0.0343 | 0.9599 |
| 0.0786 | 3.0 | 981 | 0.1085 | 0.0657 | 0.0243 | 0.0354 | 0.9634 |
| 0.0681 | 4.0 | 1308 | 0.1133 | 0.0765 | 0.0453 | 0.0569 | 0.9618 |
| 0.0641 | 5.0 | 1635 | 0.1150 | 0.0855 | 0.0485 | 0.0619 | 0.9626 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mrm8488/xlm-v-base-finetuned-xglue-xnli | mrm8488 | 2023-02-08T14:51:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"xnli",
"dataset:xglue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T22:06:50Z | ---
license: mit
langs:
- multilingual
tags:
- generated_from_trainer
- xnli
datasets:
- xglue
metrics:
- accuracy
model-index:
- name: xlm-v-base-finetuned-xglue-xnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: xglue
type: xglue
config: xnli
split: validation.en+validation.ar+validation.bg+validation.de+validation.el+validation.es+validation.fr+validation.hi+validation.ru+validation.sw+validation.th+validation.tr+validation.ur+validation.vi+validation.zh
args: xnli
metrics:
- name: Accuracy
type: accuracy
value: 0.7402677376171352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-V (base) fine-tuned on XNLI
This model is a fine-tuned version of [XLM-V (base)](https://huggingface.co/facebook/xlm-v-base) on the XNLI (XGLUE) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6511
- Accuracy: 0.7403
## Model description
More information needed
## Intended uses & limitations
More information needed
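As an XNLI fine-tune, the model classifies premise/hypothesis pairs; a minimal, untested sketch with the standard Transformers API (the example sentences are arbitrary, and the label names come from the checkpoint's config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mrm8488/xlm-v-base-finetuned-xglue-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Das Konzert wurde wegen des Sturms abgesagt."
hypothesis = "Die Veranstaltung fand nicht statt."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names are read from the checkpoint config
```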
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0994 | 0.08 | 1000 | 1.0966 | 0.3697 |
| 1.0221 | 0.16 | 2000 | 1.0765 | 0.4560 |
| 0.8437 | 0.24 | 3000 | 0.8472 | 0.6179 |
| 0.6997 | 0.33 | 4000 | 0.7650 | 0.6804 |
| 0.6304 | 0.41 | 5000 | 0.7227 | 0.7007 |
| 0.5972 | 0.49 | 6000 | 0.7430 | 0.6977 |
| 0.5886 | 0.57 | 7000 | 0.7365 | 0.7066 |
| 0.5585 | 0.65 | 8000 | 0.6819 | 0.7223 |
| 0.5464 | 0.73 | 9000 | 0.7222 | 0.7046 |
| 0.5289 | 0.81 | 10000 | 0.7290 | 0.7054 |
| 0.5298 | 0.9 | 11000 | 0.6824 | 0.7221 |
| 0.5241 | 0.98 | 12000 | 0.6650 | 0.7268 |
| 0.4806 | 1.06 | 13000 | 0.6861 | 0.7308 |
| 0.4715 | 1.14 | 14000 | 0.6619 | 0.7304 |
| 0.4645 | 1.22 | 15000 | 0.6656 | 0.7284 |
| 0.4443 | 1.3 | 16000 | 0.7026 | 0.7270 |
| 0.4582 | 1.39 | 17000 | 0.7055 | 0.7225 |
| 0.4456 | 1.47 | 18000 | 0.6592 | 0.7361 |
| 0.44 | 1.55 | 19000 | 0.6816 | 0.7329 |
| 0.4419 | 1.63 | 20000 | 0.6772 | 0.7357 |
| 0.4403 | 1.71 | 21000 | 0.6745 | 0.7319 |
| 0.4348 | 1.79 | 22000 | 0.6678 | 0.7338 |
| 0.4355 | 1.87 | 23000 | 0.6614 | 0.7365 |
| 0.4295 | 1.96 | 24000 | 0.6511 | 0.7403 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
irenekar/q-FrozenLake-v1-4x4-noSlippery | irenekar | 2023-02-08T14:50:40Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T14:50:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # some versions of the course notebook use `import gymnasium as gym` instead

# `load_from_hub` is the small helper defined in the Unit 2 notebook of the course;
# it downloads the pickled Q-table dictionary from the Hub.
model = load_from_hub(repo_id="irenekar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Rolo/Reinforce-Pixelcopter | Rolo | 2023-02-08T14:37:37Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-06T22:48:38Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 62.20 +/- 39.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Elytum/bert-finetuned-ner | Elytum | 2023-02-08T14:35:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-08T10:22:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [gaunernst/bert-small-uncased](https://huggingface.co/gaunernst/bert-small-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0186
- Precision: 0.9941
- Recall: 0.9952
- F1: 0.9946
- Accuracy: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
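The label set is likewise undocumented; a minimal, untested sketch that prints per-token predictions using whatever labels the config carries (the input sentence is arbitrary and lowercase, since the base model is uncased):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Elytum/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("alice moved from london to berlin in 2019", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id])
```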
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0277 | 1.0 | 2500 | 0.0190 | 0.9929 | 0.9939 | 0.9934 | 0.9956 |
| 0.0137 | 2.0 | 5000 | 0.0180 | 0.9935 | 0.9951 | 0.9943 | 0.9960 |
| 0.0095 | 3.0 | 7500 | 0.0186 | 0.9941 | 0.9952 | 0.9946 | 0.9963 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ell-hol/mT5-OrangeSum | ell-hol | 2023-02-08T14:34:07Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:ell-hol/autotrain-data-test-orangesum",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-12-27T22:06:22Z | ---
language:
- unk
tags:
- autotrain
- summarization
datasets:
- ell-hol/autotrain-data-test-orangesum
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 675.7789931017469
model-index:
- name: ell-hol/mT5-OrangeSum
results:
- task:
type: summarization
name: Summarization
dataset:
name: orange_sum
type: orange_sum
config: abstract
split: validation
metrics:
- type: rouge
value: 33.377
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhjMWIxYmNmNDYzNTMzMDM2YjQyOTdkYjYyMDJkZDhlNzQ2ZDVkNGM2YTIzODU4ZWYwZDg2ODZkN2U5OTk2MSIsInZlcnNpb24iOjF9.UL_nv_GGJ75LMgDmRjvrp0dYhCyjz-h5txS1ljDFS7k9Yy6iJ0QnTebou1tsLFtj7sBSvUKvZeyqFXEHN7SBCg
- type: rouge
value: 14.4472
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTYxZTVkMzFlMGUxMWNmNzc5ZDI0OWM3ODY2ZTc1MDg2MDc2NTRiZjM3OTA4NGI1MmEwNzQzMjQyOWM5NDE3YiIsInZlcnNpb24iOjF9.xsBp4kyHAnAnAWllwvcXNF3vFFbgP_3Ipplg0Cs8yMzY2qIKozlflWSpmm7qyru1RvtDrHH5JQy0hSSz49tMDQ
- type: rouge
value: 24.1902
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDNmODZiOTcxYmU0NjlkMjEzOTBmZjZhMzkxZDcyODNjYmJjOGNiNzA2MTI2YjU4MTUzZTFlM2EwYjRkNyIsInZlcnNpb24iOjF9.QE9X1gqHxDA_Vzj86nOi1FrYXrvvYR-uQgAKn2ESJp48mnT4rHCnpxVo3qJGXcoeD0vA0M9VDWJzc2pci34PBA
- type: rouge
value: 25.5277
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDk2YzY1NjU3NDgxMDllYjIwMGI5NGE2ZjY3NzcxZGEwNmYzYjQxYzVlZTdmYzdkYWIxM2Y1YjkxNjZhOWRlZiIsInZlcnNpb24iOjF9.ksd-KgRtY71cHJxFsqLWr5lofRSrfiwixGTI6Hek6GvfisssetoDPy17bWnQpUqfN0ozxJciw2VzpauYPDuZCg
- type: loss
value: 1.6347737312316895
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNmODJhNzdmMzNkMTc4MDcwZDhmNDFiZjM1ZWVmYjQ4N2IzNWU3MjYwMWM4ZmM0NjFhNjY1OTBlZjBkMjY0YSIsInZlcnNpb24iOjF9.aaF2D-cKnhK4YaqFV23QhoiTCOK7rQJKoXJMMj-kuxe_NLQBLNj73LBou376IlsTmOxxk_mmEimzwMMbTiVSDA
- type: gen_len
value: 48.4967
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzk3YjMxZWY2NzE5ZWMxZjBhYmE5YzU2YTM3MzNmMjlmNmJjM2MyMzY4ZTE1MjI1ZTNkN2YxOWZhOThmYzljMyIsInZlcnNpb24iOjF9._I_I9B66dT3S8RMMmMACG3YjIQYcXzmodriDWM33jRa4X6NFQx0b6_YHNP7K-uLEm8qD31bgb0NlsaRA37qLBA
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2638979565
- CO2 Emissions (in grams): 675.7790
## Validation Metrics
- Loss: 1.631
- Rouge1: 33.348
- Rouge2: 14.481
- RougeL: 24.210
- RougeLsum: 25.514
- Gen Len: 48.497
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ell-hol/autotrain-test-orangesum-2638979565
``` |
mili7522/dqn-SpaceInvadersNoFrameskip-v4 | mili7522 | 2023-02-08T14:22:25Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T14:21:39Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 885.00 +/- 294.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mili7522 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mili7522 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mili7522
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
pfunk/Pong-v4-DQPN_p100_pt0.1-seed1 | pfunk | 2023-02-08T14:20:49Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-08T14:20:29Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 4.50 +/- 3.93
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```bash
pip install "cleanrl[DQPN_p100_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p100_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p100_pt0.1 --start-policy-f 100000 --end-policy-f 100000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 100000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p100_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 100000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
mechanicalsea/speecht5-vc | mechanicalsea | 2023-02-08T14:02:35Z | 0 | 24 | null | [
"speech",
"text",
"cross-modal",
"unified model",
"self-supervised learning",
"SpeechT5",
"Voice Conversion",
"dataset:CMUARCTIC",
"dataset:bdl",
"dataset:clb",
"dataset:rms",
"dataset:slt",
"license:mit",
"region:us"
]
| null | 2022-08-25T09:24:52Z | ---
license: mit
tags:
- speech
- text
- cross-modal
- unified model
- self-supervised learning
- SpeechT5
- Voice Conversion
datasets:
- CMUARCTIC
- bdl
- clb
- rms
- slt
---
## SpeechT5 VC Manifest
| [**Github**](https://github.com/microsoft/SpeechT5) | [**Huggingface**](https://huggingface.co/mechanicalsea/speecht5-vc) |
This manifest is an attempt to recreate the Voice Conversion recipe used for training [SpeechT5](https://aclanthology.org/2022.acl-long.393). It was constructed from four [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) speakers (bdl, clb, rms, and slt), with 932 utterances for training, 100 for validation, and 100 for evaluation.
### News
- 8 February 2023: SpeechT5 is integrated as an official model into the Hugging Face Transformers library [[Blog](https://huggingface.co/blog/speecht5)] and [[Demo](https://huggingface.co/spaces/Matthijs/speecht5-vc-demo)].
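Given that integration, voice conversion can presumably be run straight from Transformers with the officially released checkpoints. The sketch below is untested here: the input path is a placeholder, the checkpoint names refer to the official `microsoft/speecht5_vc` and `microsoft/speecht5_hifigan` repos rather than this one, and the random speaker embedding only stands in for a real SpeechBrain x-vector.

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForSpeechToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

speech, sampling_rate = sf.read("source_utterance.wav")  # placeholder path; 16 kHz mono expected
inputs = processor(audio=speech, sampling_rate=sampling_rate, return_tensors="pt")

# A real x-vector should come from the SpeechBrain speaker model listed under
# Requirements; a random 512-dim vector is used here only so the sketch runs end to end.
speaker_embeddings = torch.randn(1, 512)

converted = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
sf.write("converted.wav", converted.numpy(), samplerate=16000)
```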
### Requirements
- [SpeechBrain](https://github.com/speechbrain/speechbrain) for extracting speaker embeddings.
- [Parallel WaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN) for the vocoder implementation.
### Tools
- `manifest/utils` is used to extract speaker embeddings, generate the manifest, and apply the vocoder.
- `manifest/arctic*` provides the pre-trained vocoder for each speaker.
### Model and Samples
- [`speecht5_vc.pt`](./speecht5_vc.pt) is a reimplementation of the Voice Conversion fine-tuning on the released manifest, **but with a smaller batch size or fewer max updates** (ensure the manifest is set up correctly).
- `samples` were generated with the released fine-tuned model and vocoder.
### Reference
If you find our work useful in your research, please cite the following paper:
```bibtex
@inproceedings{ao-etal-2022-speecht5,
title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing},
author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {May},
year = {2022},
pages={5723--5738},
}
``` |
jojoUla/bert-large-cased-sigir-support-no-label-20-sigir-tune2nd-LR10-labelled-30 | jojoUla | 2023-02-08T13:54:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-08T13:50:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-no-label-20-sigir-tune2nd-LR10-labelled-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-no-label-20-sigir-tune2nd-LR10-labelled-30
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-20](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-20) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3995
## Model description
More information needed
## Intended uses & limitations
More information needed
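Because this is a cased BERT checkpoint tuned with masked-language modeling, the fill-mask pipeline is the natural way to probe it; the example sentence below is arbitrary and only illustrates the expected `[MASK]` placeholder.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jojoUla/bert-large-cased-sigir-support-no-label-20-sigir-tune2nd-LR10-labelled-30",
)
# BERT-style checkpoints use the literal [MASK] token as the placeholder.
for prediction in fill_mask("The reviewers asked the authors to add a [MASK] study."):
    print(prediction["token_str"], round(prediction["score"], 3))
```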
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1303 | 1.0 | 1 | 3.2415 |
| 2.3107 | 2.0 | 2 | 2.1225 |
| 1.2824 | 3.0 | 3 | 2.2623 |
| 1.0548 | 4.0 | 4 | 0.5449 |
| 1.1366 | 5.0 | 5 | 1.1446 |
| 0.5947 | 6.0 | 6 | 0.3811 |
| 0.4889 | 7.0 | 7 | 1.6445 |
| 1.2689 | 8.0 | 8 | 1.7214 |
| 0.8074 | 9.0 | 9 | 2.3152 |
| 0.7084 | 10.0 | 10 | 0.9325 |
| 1.0307 | 11.0 | 11 | 2.4217 |
| 0.7119 | 12.0 | 12 | 2.6455 |
| 1.0052 | 13.0 | 13 | 1.1594 |
| 0.7125 | 14.0 | 14 | 1.2795 |
| 0.4732 | 15.0 | 15 | 0.1245 |
| 0.8829 | 16.0 | 16 | 1.8585 |
| 0.7079 | 17.0 | 17 | 1.6644 |
| 0.6243 | 18.0 | 18 | 1.6117 |
| 1.2438 | 19.0 | 19 | 2.3044 |
| 1.0812 | 20.0 | 20 | 4.5037 |
| 0.7003 | 21.0 | 21 | 1.5862 |
| 0.867 | 22.0 | 22 | 2.1851 |
| 0.9098 | 23.0 | 23 | 1.6055 |
| 0.6214 | 24.0 | 24 | 2.6699 |
| 0.282 | 25.0 | 25 | 1.3515 |
| 0.1888 | 26.0 | 26 | 2.3864 |
| 0.6863 | 27.0 | 27 | 1.2444 |
| 0.8527 | 28.0 | 28 | 1.9603 |
| 0.9416 | 29.0 | 29 | 3.7045 |
| 0.8302 | 30.0 | 30 | 0.9336 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|