| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 06:27:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 548 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 06:27:50 |
| card | string | length 11 to 1.01M |
vocabtrimmer/xlm-v-base-trimmed-de-tweet-sentiment-de
|
vocabtrimmer
| 2023-04-01T03:14:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T03:11:56Z |
# `vocabtrimmer/xlm-v-base-trimmed-de-tweet-sentiment-de`
This model is a fine-tuned version of [/home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-de](https://huggingface.co//home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-de) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (German).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (German).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 72.99 | 72.99 | 72.99 | 72.98 | 72.99 | 73.08 | 72.99 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-de-tweet-sentiment-de/raw/main/eval.json).
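For reference, a minimal usage sketch with the `transformers` pipeline; the example sentence is illustrative and the label names come from the model's own config, which is not documented in this card.
```python
from transformers import pipeline

# Load the fine-tuned German sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="vocabtrimmer/xlm-v-base-trimmed-de-tweet-sentiment-de")

# The returned label names (e.g. negative/neutral/positive) depend on the model config.
print(classifier("Das Wetter ist heute wunderbar!"))
```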
|
shoning/PPO-LunarLander-v2
|
shoning
| 2023-04-01T03:13:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-01T03:13:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.81 +/- 19.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
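As a starting point for the TODO above, a minimal loading-and-evaluation sketch with `stable-baselines3` and `huggingface_sb3`; the checkpoint filename is an assumption, so check the repository's file list for the actual `.zip` name.
```python
import gym  # newer stable-baselines3 releases use gymnasium instead
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint; the filename below is an assumption, not taken from the card.
checkpoint = load_from_hub(repo_id="shoning/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out a few episodes to estimate the mean reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```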
|
dkoh12/distilbert-base-uncased-finetuned_emotion
|
dkoh12
| 2023-04-01T02:55:52Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T02:48:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230506440647792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
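For readers reproducing this run, a sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the output directory is a placeholder, everything else mirrors the list (Adam betas and epsilon are the Trainer defaults).
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned_emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```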
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8702 | 1.0 | 250 | 0.3219 | 0.9055 | 0.9026 |
| 0.2588 | 2.0 | 500 | 0.2168 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-60000
|
vocabtrimmer
| 2023-04-01T02:54:21Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T02:50:41Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt): `vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-60000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-pt | vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-60000 |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 132,125,955 |
| parameter_size_embedding | 692,451,072 | 46,081,536 |
| vocab_size | 901,629 | 60,002 |
| compression_rate_full | 100.0 | 16.97 |
| compression_rate_embedding | 100.0 | 6.65 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 60000 | 2 |
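A quick sanity-check sketch (not given in the card itself): loading the trimmed model with `transformers` and inspecting the reduced vocabulary and embedding matrix reported in the tables above.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-60000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# These should roughly match the vocab_size / embedding figures in the summary table.
print(len(tokenizer))
print(model.get_input_embeddings().weight.shape)
```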
|
Corianas/SoccerTwos_try2
|
Corianas
| 2023-04-01T02:41:09Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-31T13:33:16Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Corianas/poca-SoccerTwos2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-30000
|
vocabtrimmer
| 2023-04-01T02:36:52Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T02:33:29Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt): `vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-30000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-pt | vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-30000 |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 109,085,955 |
| parameter_size_embedding | 692,451,072 | 23,041,536 |
| vocab_size | 901,629 | 30,002 |
| compression_rate_full | 100.0 | 14.01 |
| compression_rate_embedding | 100.0 | 3.33 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 30000 | 2 |
|
vocabtrimmer/mbart-large-cc25-squad-qa-trimmed-en-15000
|
vocabtrimmer
| 2023-04-01T02:32:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-01T02:06:23Z |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-squad-qa](https://huggingface.co/lmqg/mbart-large-cc25-squad-qa): `vocabtrimmer/mbart-large-cc25-squad-qa-trimmed-en-15000`
This model is a trimmed version of [lmqg/mbart-large-cc25-squad-qa](https://huggingface.co/lmqg/mbart-large-cc25-squad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-squad-qa | vocabtrimmer/mbart-large-cc25-squad-qa-trimmed-en-15000 |
|:---------------------------|:---------------------------------|:----------------------------------------------------------|
| parameter_size_full | 610,852,864 | 370,188,288 |
| parameter_size_embedding | 512,057,344 | 30,728,192 |
| vocab_size | 250,028 | 15,004 |
| compression_rate_full | 100.0 | 60.6 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 15000 | 2 |
|
vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-10000
|
vocabtrimmer
| 2023-04-01T02:19:41Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T02:15:43Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt): `vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-10000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-pt | vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-10000 |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 93,725,955 |
| parameter_size_embedding | 692,451,072 | 7,681,536 |
| vocab_size | 901,629 | 10,002 |
| compression_rate_full | 100.0 | 12.04 |
| compression_rate_embedding | 100.0 | 1.11 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 10000 | 2 |
|
vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-5000
|
vocabtrimmer
| 2023-04-01T02:13:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T02:10:25Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt): `vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-5000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-pt | vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt-5000 |
|:---------------------------|:-------------------------------------------|:-------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 89,885,955 |
| parameter_size_embedding | 692,451,072 | 3,841,536 |
| vocab_size | 901,629 | 5,002 |
| compression_rate_full | 100.0 | 11.55 |
| compression_rate_embedding | 100.0 | 0.55 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 5000 | 2 |
|
vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt
|
vocabtrimmer
| 2023-04-01T02:09:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T02:02:55Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt): `vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-pt | vocabtrimmer/xlm-v-base-tweet-sentiment-pt-trimmed-pt |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------|
| parameter_size_full | 778,495,491 | 225,338,883 |
| parameter_size_embedding | 692,451,072 | 139,294,464 |
| vocab_size | 901,629 | 181,373 |
| compression_rate_full | 100.0 | 28.95 |
| compression_rate_embedding | 100.0 | 20.12 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | | 2 |
|
wjmm/Taxi-v3
|
wjmm
| 2023-04-01T01:58:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-01T01:58:18Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="wjmm/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
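The snippet above relies on a `load_from_hub` helper that is not defined in this card; a minimal sketch of such a helper (an assumption, modeled on the Deep RL course utilities) that downloads the pickled Q-table bundle from the Hub and unpickles it:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled model bundle from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```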
|
Molka11/marian-finetuned-kde4-en-to-fr
|
Molka11
| 2023-04-01T01:57:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-03-31T23:42:36Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 42.11917291581875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 42.1192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
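The card does not include a usage example; a minimal sketch with the `transformers` pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

# English-to-French translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="Molka11/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```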
|
wjmm/q-FrozenLake-v1-4x4-noSlippery
|
wjmm
| 2023-04-01T01:43:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-01T01:43:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="wjmm/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-60000
|
vocabtrimmer
| 2023-04-01T01:07:18Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T01:03:38Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr): `vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-60000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-fr | vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-60000 |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 132,125,955 |
| parameter_size_embedding | 692,451,072 | 46,081,536 |
| vocab_size | 901,629 | 60,002 |
| compression_rate_full | 100.0 | 16.97 |
| compression_rate_embedding | 100.0 | 6.65 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 60000 | 2 |
|
saif-daoud/whisper-small-hi-2400_500_133
|
saif-daoud
| 2023-04-01T00:54:00Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:afrispeech-200",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-31T22:23:54Z |
---
tags:
- generated_from_trainer
datasets:
- afrispeech-200
metrics:
- wer
model-index:
- name: whisper-small-hi-2400_500_133
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: afrispeech-200
type: afrispeech-200
config: hausa
split: train
args: hausa
metrics:
- name: Wer
type: wer
value: 0.32728583443469905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-2400_500_133
This model is a fine-tuned version of [saif-daoud/whisper-small-hi-2400_500_132](https://huggingface.co/saif-daoud/whisper-small-hi-2400_500_132) on the afrispeech-200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7843
- Wer: 0.3273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- training_steps: 540
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9568 | 0.5 | 270 | 0.7916 | 0.3298 |
| 0.9337 | 1.5 | 540 | 0.7843 | 0.3273 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
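No usage example is given in the card; a minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Speech recognition with the fine-tuned Whisper-small checkpoint.
asr = pipeline("automatic-speech-recognition", model="saif-daoud/whisper-small-hi-2400_500_133")
print(asr("sample.wav"))  # replace with the path to a real audio file
```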
|
vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-30000
|
vocabtrimmer
| 2023-04-01T00:49:38Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T00:45:24Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr): `vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-30000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-fr | vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-30000 |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 109,085,955 |
| parameter_size_embedding | 692,451,072 | 23,041,536 |
| vocab_size | 901,629 | 30,002 |
| compression_rate_full | 100.0 | 14.01 |
| compression_rate_embedding | 100.0 | 3.33 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 30000 | 2 |
|
Brizape/Yepes_5e-05_250
|
Brizape
| 2023-04-01T00:42:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T23:13:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Yepes_5e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Yepes_5e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1394
- Precision: 0.7129
- Recall: 0.5498
- F1: 0.6208
- Accuracy: 0.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5163 | 1.39 | 25 | 0.2117 | 0.0 | 0.0 | 0.0 | 0.9672 |
| 0.1988 | 2.78 | 50 | 0.2076 | 0.0 | 0.0 | 0.0 | 0.9672 |
| 0.1579 | 4.17 | 75 | 0.1379 | 0.4017 | 0.2338 | 0.2956 | 0.9712 |
| 0.1055 | 5.56 | 100 | 0.1182 | 0.5688 | 0.3085 | 0.4 | 0.9754 |
| 0.0791 | 6.94 | 125 | 0.1024 | 0.5032 | 0.3955 | 0.4429 | 0.9762 |
| 0.0545 | 8.33 | 150 | 0.1038 | 0.5683 | 0.4453 | 0.4993 | 0.9777 |
| 0.0402 | 9.72 | 175 | 0.1165 | 0.7063 | 0.4726 | 0.5663 | 0.9796 |
| 0.0337 | 11.11 | 200 | 0.1104 | 0.6635 | 0.5149 | 0.5798 | 0.9786 |
| 0.0238 | 12.5 | 225 | 0.1203 | 0.6789 | 0.5522 | 0.6091 | 0.9790 |
| 0.0202 | 13.89 | 250 | 0.1263 | 0.7416 | 0.5498 | 0.6314 | 0.9803 |
| 0.0147 | 15.28 | 275 | 0.1273 | 0.6965 | 0.5423 | 0.6098 | 0.9791 |
| 0.0129 | 16.67 | 300 | 0.1338 | 0.6796 | 0.5647 | 0.6168 | 0.9787 |
| 0.0109 | 18.06 | 325 | 0.1359 | 0.7690 | 0.5547 | 0.6445 | 0.9804 |
| 0.0091 | 19.44 | 350 | 0.1394 | 0.7129 | 0.5498 | 0.6208 | 0.9796 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
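The card leaves the intended use unspecified; for reference, a minimal token-classification sketch with the `transformers` pipeline. The entity label set is defined by the model config, and the input sentence is only illustrative.
```python
from transformers import pipeline

# Group sub-word predictions into whole entity spans.
ner = pipeline("token-classification", model="Brizape/Yepes_5e-05_250", aggregation_strategy="simple")
print(ner("The p.V600E mutation in BRAF has been reported in several tumour samples."))
```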
|
Brizape/Variome_0.0001_250
|
Brizape
| 2023-04-01T00:32:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T23:38:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_0.0001_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Variome_0.0001_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0638
- Precision: 0.6586
- Recall: 0.5816
- F1: 0.6177
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3778 | 0.35 | 25 | 0.1802 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1563 | 0.69 | 50 | 0.1200 | 0.4524 | 0.0162 | 0.0313 | 0.9763 |
| 0.1061 | 1.04 | 75 | 0.1041 | 0.3604 | 0.2767 | 0.3130 | 0.9799 |
| 0.0981 | 1.39 | 100 | 0.0902 | 0.4585 | 0.3826 | 0.4171 | 0.9814 |
| 0.0807 | 1.74 | 125 | 0.0783 | 0.5129 | 0.4244 | 0.4645 | 0.9835 |
| 0.0731 | 2.08 | 150 | 0.0727 | 0.5513 | 0.5047 | 0.5270 | 0.9844 |
| 0.0526 | 2.43 | 175 | 0.0720 | 0.6368 | 0.5167 | 0.5705 | 0.9856 |
| 0.0604 | 2.78 | 200 | 0.0686 | 0.589 | 0.5030 | 0.5426 | 0.9849 |
| 0.0542 | 3.12 | 225 | 0.0671 | 0.6131 | 0.5371 | 0.5726 | 0.9856 |
| 0.0441 | 3.47 | 250 | 0.0669 | 0.6635 | 0.5389 | 0.5947 | 0.9860 |
| 0.0438 | 3.82 | 275 | 0.0667 | 0.625 | 0.5423 | 0.5807 | 0.9859 |
| 0.0381 | 4.17 | 300 | 0.0658 | 0.6562 | 0.5525 | 0.5999 | 0.9858 |
| 0.0404 | 4.51 | 325 | 0.0648 | 0.6578 | 0.5713 | 0.6115 | 0.9862 |
| 0.0341 | 4.86 | 350 | 0.0625 | 0.6637 | 0.5679 | 0.6121 | 0.9865 |
| 0.0298 | 5.21 | 375 | 0.0646 | 0.6727 | 0.5739 | 0.6194 | 0.9868 |
| 0.029 | 5.56 | 400 | 0.0643 | 0.6569 | 0.5739 | 0.6126 | 0.9861 |
| 0.0287 | 5.9 | 425 | 0.0637 | 0.6713 | 0.5739 | 0.6188 | 0.9869 |
| 0.027 | 6.25 | 450 | 0.0637 | 0.6660 | 0.5739 | 0.6165 | 0.9868 |
| 0.0236 | 6.6 | 475 | 0.0639 | 0.6644 | 0.5833 | 0.6212 | 0.9869 |
| 0.0233 | 6.94 | 500 | 0.0638 | 0.6586 | 0.5816 | 0.6177 | 0.9867 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-10000
|
vocabtrimmer
| 2023-04-01T00:31:33Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T00:28:28Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr): `vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-10000`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-fr | vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr-10000 |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------------|
| parameter_size_full | 778,495,491 | 93,725,955 |
| parameter_size_embedding | 692,451,072 | 7,681,536 |
| vocab_size | 901,629 | 10,002 |
| compression_rate_full | 100.0 | 12.04 |
| compression_rate_embedding | 100.0 | 1.11 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 10000 | 2 |
|
vocabtrimmer/xlm-v-base-trimmed-ar-30000
|
vocabtrimmer
| 2023-04-01T00:31:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-04-01T00:30:06Z |
# Vocabulary Trimmed [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base): `vocabtrimmer/xlm-v-base-trimmed-ar-30000`
This model is a trimmed version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/xlm-v-base | vocabtrimmer/xlm-v-base-trimmed-ar-30000 |
|:---------------------------|:----------------------|:-------------------------------------------|
| parameter_size_full | 779,396,349 | 109,115,186 |
| parameter_size_embedding | 692,451,072 | 23,041,536 |
| vocab_size | 901,629 | 30,002 |
| compression_rate_full | 100.0 | 14.0 |
| compression_rate_embedding | 100.0 | 3.33 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 30000 | 2 |
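A minimal fill-mask sketch (not part of the card itself); the mask token is read from the tokenizer rather than hard-coded, and the Arabic prompt is illustrative:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="vocabtrimmer/xlm-v-base-trimmed-ar-30000")
mask = fill.tokenizer.mask_token  # avoid hard-coding the mask token string

# "The capital of France is <mask>."
print(fill(f"عاصمة فرنسا هي {mask}."))
```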
|
vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr
|
vocabtrimmer
| 2023-04-01T00:21:29Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T00:14:28Z |
# Vocabulary Trimmed [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr): `vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr`
This model is a trimmed version of [cardiffnlp/xlm-v-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-v-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-v-base-tweet-sentiment-fr | vocabtrimmer/xlm-v-base-tweet-sentiment-fr-trimmed-fr |
|:---------------------------|:-------------------------------------------|:--------------------------------------------------------|
| parameter_size_full | 778,495,491 | 253,812,483 |
| parameter_size_embedding | 692,451,072 | 167,768,064 |
| vocab_size | 901,629 | 218,448 |
| compression_rate_full | 100.0 | 32.6 |
| compression_rate_embedding | 100.0 | 24.23 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | | 2 |
|
vocabtrimmer/xlm-v-base-trimmed-ar-15000
|
vocabtrimmer
| 2023-04-01T00:09:52Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-04-01T00:08:47Z |
# Vocabulary Trimmed [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base): `vocabtrimmer/xlm-v-base-trimmed-ar-15000`
This model is a trimmed version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/xlm-v-base | vocabtrimmer/xlm-v-base-trimmed-ar-15000 |
|:---------------------------|:----------------------|:-------------------------------------------|
| parameter_size_full | 779,396,349 | 97,580,186 |
| parameter_size_embedding | 692,451,072 | 11,521,536 |
| vocab_size | 901,629 | 15,002 |
| compression_rate_full | 100.0 | 12.52 |
| compression_rate_embedding | 100.0 | 1.66 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 15000 | 2 |
|
vocabtrimmer/xlm-v-base-trimmed-ar-10000-tweet-sentiment-ar
|
vocabtrimmer
| 2023-04-01T00:06:27Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-01T00:05:23Z |
# `vocabtrimmer/xlm-v-base-trimmed-ar-10000-tweet-sentiment-ar`
This model is a fine-tuned version of [/home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-ar-10000](https://huggingface.co//home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-ar-10000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (Arabic).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (Arabic).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 61.38 | 61.38 | 61.38 | 60.99 | 61.38 | 60.95 | 61.38 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-ar-10000-tweet-sentiment-ar/raw/main/eval.json).
|
vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa
|
vocabtrimmer
| 2023-04-01T00:05:28Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-01T00:03:14Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things."
example_title: "Question Answering Example 1"
- text: "question: who created the post as we know it today?, context: 'So much of The Post is Ben,' Mrs. Graham said in 1994, three years after Bradlee retired as editor. 'He created it as we know it today.'— Ed O'Keefe (@edatpost) October 21, 2014"
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 33.47
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 67.38
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 39.13
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 91.86
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 81.36
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 68.65
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 54.26
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-en-90000](https://huggingface.co/ckpts/mt5-small-trimmed-en-90000) for the question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-en-90000](https://huggingface.co/ckpts/mt5-small-trimmed-en-90000)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa")
# model prediction
answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa")
output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 54.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 68.65 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 49.27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 43.25 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 37.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 33.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 39.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 81.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 67.38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: ckpts/mt5-small-trimmed-en-90000
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-90000-squad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Brizape/SETH_5e-05_250
|
Brizape
| 2023-04-01T00:00:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T23:49:57Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: SETH_5e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SETH_5e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0716
- Precision: 0.7964
- Recall: 0.8036
- F1: 0.8000
- Accuracy: 0.9849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3757 | 0.76 | 25 | 0.1924 | 0.0 | 0.0 | 0.0 | 0.9625 |
| 0.1119 | 1.52 | 50 | 0.0723 | 0.6237 | 0.7473 | 0.6799 | 0.9775 |
| 0.0565 | 2.27 | 75 | 0.0614 | 0.6569 | 0.7727 | 0.7101 | 0.9794 |
| 0.048 | 3.03 | 100 | 0.0586 | 0.6667 | 0.8655 | 0.7532 | 0.9801 |
| 0.0355 | 3.79 | 125 | 0.0519 | 0.7206 | 0.8345 | 0.7734 | 0.9835 |
| 0.0328 | 4.55 | 150 | 0.0532 | 0.7165 | 0.8455 | 0.7756 | 0.9831 |
| 0.0258 | 5.3 | 175 | 0.0539 | 0.7460 | 0.8382 | 0.7894 | 0.9835 |
| 0.022 | 6.06 | 200 | 0.0561 | 0.7612 | 0.7709 | 0.7660 | 0.9836 |
| 0.0189 | 6.82 | 225 | 0.0564 | 0.7636 | 0.74 | 0.7516 | 0.9828 |
| 0.0166 | 7.58 | 250 | 0.0597 | 0.7274 | 0.8491 | 0.7836 | 0.9836 |
| 0.0128 | 8.33 | 275 | 0.0626 | 0.8251 | 0.7636 | 0.7932 | 0.9854 |
| 0.0113 | 9.09 | 300 | 0.0603 | 0.8029 | 0.8 | 0.8015 | 0.9854 |
| 0.009 | 9.85 | 325 | 0.0687 | 0.8026 | 0.7909 | 0.7967 | 0.9857 |
| 0.0075 | 10.61 | 350 | 0.0716 | 0.7964 | 0.8036 | 0.8000 | 0.9849 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
vocabtrimmer/xlm-v-base-trimmed-ar-10000
|
vocabtrimmer
| 2023-03-31T23:49:56Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-31T23:48:49Z |
# Vocabulary Trimmed [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base): `vocabtrimmer/xlm-v-base-trimmed-ar-10000`
This model is a trimmed version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/xlm-v-base | vocabtrimmer/xlm-v-base-trimmed-ar-10000 |
|:---------------------------|:----------------------|:-------------------------------------------|
| parameter_size_full | 779,396,349 | 93,735,186 |
| parameter_size_embedding | 692,451,072 | 7,681,536 |
| vocab_size | 901,629 | 10,002 |
| compression_rate_full | 100.0 | 12.03 |
| compression_rate_embedding | 100.0 | 1.11 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 10000 | 2 |
|
Brizape/SETH_2e-05_250
|
Brizape
| 2023-03-31T23:49:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T23:38:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: SETH_2e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SETH_2e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0676
- Precision: 0.7820
- Recall: 0.7891
- F1: 0.7855
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4635 | 0.76 | 25 | 0.1662 | 0.0 | 0.0 | 0.0 | 0.9625 |
| 0.0991 | 1.52 | 50 | 0.0805 | 0.7425 | 0.6291 | 0.6811 | 0.9770 |
| 0.0585 | 2.27 | 75 | 0.0616 | 0.6952 | 0.7836 | 0.7368 | 0.9801 |
| 0.0495 | 3.03 | 100 | 0.0564 | 0.7129 | 0.7945 | 0.7515 | 0.9819 |
| 0.0413 | 3.79 | 125 | 0.0531 | 0.7188 | 0.8273 | 0.7692 | 0.9824 |
| 0.0393 | 4.55 | 150 | 0.0512 | 0.7350 | 0.8218 | 0.7760 | 0.9827 |
| 0.0317 | 5.3 | 175 | 0.0490 | 0.7543 | 0.7927 | 0.7730 | 0.9832 |
| 0.0283 | 6.06 | 200 | 0.0546 | 0.7780 | 0.7836 | 0.7808 | 0.9833 |
| 0.0255 | 6.82 | 225 | 0.0524 | 0.7504 | 0.7818 | 0.7658 | 0.9829 |
| 0.022 | 7.58 | 250 | 0.0567 | 0.7613 | 0.7945 | 0.7776 | 0.9835 |
| 0.0183 | 8.33 | 275 | 0.0566 | 0.7730 | 0.7927 | 0.7828 | 0.9842 |
| 0.0179 | 9.09 | 300 | 0.0592 | 0.7668 | 0.7655 | 0.7662 | 0.9830 |
| 0.016 | 9.85 | 325 | 0.0648 | 0.7855 | 0.7855 | 0.7855 | 0.9841 |
| 0.0135 | 10.61 | 350 | 0.0639 | 0.7732 | 0.7873 | 0.7802 | 0.9832 |
| 0.0121 | 11.36 | 375 | 0.0676 | 0.7820 | 0.7891 | 0.7855 | 0.9837 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
vocabtrimmer/xlm-v-base-trimmed-ar-5000-tweet-sentiment-ar
|
vocabtrimmer
| 2023-03-31T23:47:08Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-31T23:46:06Z |
# `vocabtrimmer/xlm-v-base-trimmed-ar-5000-tweet-sentiment-ar`
This model is a fine-tuned version of [/home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-ar-5000](https://huggingface.co//home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-ar-5000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (Arabic).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (Arabic).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 46.55 | 46.55 | 46.55 | 37.81 | 46.55 | 41.09 | 46.55 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-ar-5000-tweet-sentiment-ar/raw/main/eval.json).
|
Brizape/Variome_5e-05_250
|
Brizape
| 2023-03-31T23:38:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T23:22:47Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_5e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Variome_5e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0679
- Precision: 0.6097
- Recall: 0.5389
- F1: 0.5721
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5834 | 0.35 | 25 | 0.1849 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1856 | 0.69 | 50 | 0.1791 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1611 | 1.04 | 75 | 0.1698 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1471 | 1.39 | 100 | 0.1219 | 0.1478 | 0.0290 | 0.0485 | 0.9764 |
| 0.1117 | 1.74 | 125 | 0.1133 | 0.1784 | 0.1426 | 0.1585 | 0.9767 |
| 0.1071 | 2.08 | 150 | 0.1030 | 0.2899 | 0.2220 | 0.2515 | 0.9789 |
| 0.0844 | 2.43 | 175 | 0.0977 | 0.3838 | 0.2750 | 0.3204 | 0.9805 |
| 0.087 | 2.78 | 200 | 0.0884 | 0.4084 | 0.3903 | 0.3991 | 0.9815 |
| 0.0785 | 3.12 | 225 | 0.0803 | 0.4895 | 0.4176 | 0.4507 | 0.9833 |
| 0.0647 | 3.47 | 250 | 0.0784 | 0.5545 | 0.4518 | 0.4979 | 0.9842 |
| 0.0592 | 3.82 | 275 | 0.0740 | 0.5655 | 0.5013 | 0.5315 | 0.9847 |
| 0.0525 | 4.17 | 300 | 0.0725 | 0.5916 | 0.5158 | 0.5511 | 0.9854 |
| 0.0515 | 4.51 | 325 | 0.0698 | 0.5861 | 0.5115 | 0.5463 | 0.9853 |
| 0.0483 | 4.86 | 350 | 0.0691 | 0.5994 | 0.5201 | 0.5569 | 0.9855 |
| 0.047 | 5.21 | 375 | 0.0702 | 0.5905 | 0.5209 | 0.5535 | 0.9855 |
| 0.0429 | 5.56 | 400 | 0.0693 | 0.5986 | 0.5286 | 0.5615 | 0.9858 |
| 0.0435 | 5.9 | 425 | 0.0673 | 0.5951 | 0.5397 | 0.5661 | 0.9858 |
| 0.0418 | 6.25 | 450 | 0.0676 | 0.5949 | 0.5329 | 0.5622 | 0.9858 |
| 0.038 | 6.6 | 475 | 0.0679 | 0.6013 | 0.5397 | 0.5689 | 0.9860 |
| 0.0355 | 6.94 | 500 | 0.0679 | 0.6097 | 0.5389 | 0.5721 | 0.9860 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
iiranna/ViT_GP_model
|
iiranna
| 2023-03-31T23:36:46Z | 0 | 1 | null |
[
"dataset:iiranna/BUI",
"license:apache-2.0",
"region:us"
] | null | 2023-03-31T17:03:30Z |
---
license: apache-2.0
datasets:
- iiranna/BUI
---
|
vocabtrimmer/xlm-v-base-trimmed-ar-5000
|
vocabtrimmer
| 2023-03-31T23:30:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-31T23:29:39Z |
# Vocabulary Trimmed [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base): `vocabtrimmer/xlm-v-base-trimmed-ar-5000`
This model is a trimmed version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/xlm-v-base | vocabtrimmer/xlm-v-base-trimmed-ar-5000 |
|:---------------------------|:----------------------|:------------------------------------------|
| parameter_size_full | 779,396,349 | 89,890,186 |
| parameter_size_embedding | 692,451,072 | 3,841,536 |
| vocab_size | 901,629 | 5,002 |
| compression_rate_full | 100.0 | 11.53 |
| compression_rate_embedding | 100.0 | 0.55 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 5000 | 2 |
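The trimmed checkpoint is still a masked language model, so a minimal fill-mask sketch looks like this (the Arabic example sentence is illustrative only):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="vocabtrimmer/xlm-v-base-trimmed-ar-5000")
# Use the tokenizer's own mask token rather than hard-coding it.
print(fill(f"العاصمة هي {fill.tokenizer.mask_token}."))
```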
|
vocabtrimmer/xlm-v-base-trimmed-ar-tweet-sentiment-ar
|
vocabtrimmer
| 2023-03-31T23:28:30Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-31T23:26:50Z |
# `vocabtrimmer/xlm-v-base-trimmed-ar-tweet-sentiment-ar`
This model is a fine-tuned version of [/home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-ar](https://huggingface.co//home/asahi/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-ar) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (arabic).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (arabic).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 65.4 | 65.4 | 65.4 | 64.72 | 65.4 | 65.15 | 65.4 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-ar-tweet-sentiment-ar/raw/main/eval.json).
|
Brizape/Variome_2e-05_250
|
Brizape
| 2023-03-31T23:22:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T23:06:36Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Variome_2e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Variome_2e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0798
- Precision: 0.4740
- Recall: 0.4133
- F1: 0.4416
- Accuracy: 0.9830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.052 | 0.35 | 25 | 0.1874 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1879 | 0.69 | 50 | 0.1794 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1625 | 1.04 | 75 | 0.1736 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1643 | 1.39 | 100 | 0.1323 | 0.0 | 0.0 | 0.0 | 0.9760 |
| 0.1228 | 1.74 | 125 | 0.1183 | 0.2137 | 0.0854 | 0.1220 | 0.9769 |
| 0.1165 | 2.08 | 150 | 0.1113 | 0.2017 | 0.1230 | 0.1528 | 0.9774 |
| 0.0989 | 2.43 | 175 | 0.1072 | 0.3520 | 0.2092 | 0.2625 | 0.9792 |
| 0.1057 | 2.78 | 200 | 0.1008 | 0.3322 | 0.2528 | 0.2871 | 0.9795 |
| 0.0997 | 3.12 | 225 | 0.0961 | 0.3952 | 0.2801 | 0.3278 | 0.9804 |
| 0.0895 | 3.47 | 250 | 0.0930 | 0.4115 | 0.2938 | 0.3428 | 0.9807 |
| 0.0813 | 3.82 | 275 | 0.0904 | 0.3897 | 0.3305 | 0.3577 | 0.9810 |
| 0.0767 | 4.17 | 300 | 0.0885 | 0.4294 | 0.3348 | 0.3762 | 0.9815 |
| 0.0763 | 4.51 | 325 | 0.0851 | 0.4277 | 0.3715 | 0.3976 | 0.9817 |
| 0.0714 | 4.86 | 350 | 0.0836 | 0.4361 | 0.3698 | 0.4002 | 0.9822 |
| 0.0714 | 5.21 | 375 | 0.0825 | 0.4862 | 0.3766 | 0.4244 | 0.9828 |
| 0.0678 | 5.56 | 400 | 0.0814 | 0.4684 | 0.3920 | 0.4268 | 0.9828 |
| 0.0674 | 5.9 | 425 | 0.0802 | 0.4638 | 0.3988 | 0.4288 | 0.9830 |
| 0.0688 | 6.25 | 450 | 0.0792 | 0.4672 | 0.4073 | 0.4352 | 0.9828 |
| 0.0646 | 6.6 | 475 | 0.0802 | 0.4847 | 0.4056 | 0.4417 | 0.9831 |
| 0.0607 | 6.94 | 500 | 0.0798 | 0.4740 | 0.4133 | 0.4416 | 0.9830 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
wjmm/ppo-LunarLander-v2
|
wjmm
| 2023-03-31T23:19:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T23:06:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.05 +/- 22.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
|
raymondlo84/stable-diffision-v2-openvino
|
raymondlo84
| 2023-03-31T23:16:33Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2023-03-31T22:40:48Z |
---
license: openrail++
---
Instructions on how I generated these IR files.
https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/236-stable-diffusion-v2
|
vocabtrimmer/xlm-v-base-trimmed-ar
|
vocabtrimmer
| 2023-03-31T23:09:15Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-31T23:07:34Z |
# Vocabulary Trimmed [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base): `vocabtrimmer/xlm-v-base-trimmed-ar`
This model is a trimmed version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/xlm-v-base | vocabtrimmer/xlm-v-base-trimmed-ar |
|:---------------------------|:----------------------|:-------------------------------------|
| parameter_size_full | 779,396,349 | 157,554,496 |
| parameter_size_embedding | 692,451,072 | 71,417,856 |
| vocab_size | 901,629 | 92,992 |
| compression_rate_full | 100.0 | 20.21 |
| compression_rate_embedding | 100.0 | 10.31 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | | 2 |
|
Brizape/tmvar_2e-05_250
|
Brizape
| 2023-03-31T23:04:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T22:55:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tmvar_2e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmvar_2e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Precision: 0.8756
- Recall: 0.9135
- F1: 0.8942
- Accuracy: 0.9974
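A minimal sketch using the lower-level auto classes instead of the pipeline helper; the label names come from the model config and are not documented in this card, and the example mutation mention is illustrative only:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Brizape/tmvar_2e-05_250")
model = AutoModelForTokenClassification.from_pretrained("Brizape/tmvar_2e-05_250")

inputs = tok("A c.123A>G substitution was observed.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
print([model.config.id2label[i] for i in pred_ids])
```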
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.486 | 1.0 | 25 | 0.0910 | 0.0 | 0.0 | 0.0 | 0.9858 |
| 0.0765 | 2.0 | 50 | 0.0410 | 0.6267 | 0.2541 | 0.3615 | 0.9889 |
| 0.0399 | 3.0 | 75 | 0.0230 | 0.6513 | 0.6865 | 0.6684 | 0.9941 |
| 0.0254 | 4.0 | 100 | 0.0176 | 0.7170 | 0.8216 | 0.7657 | 0.9957 |
| 0.0139 | 5.0 | 125 | 0.0129 | 0.8710 | 0.8757 | 0.8733 | 0.9968 |
| 0.0078 | 6.0 | 150 | 0.0107 | 0.9027 | 0.9027 | 0.9027 | 0.9974 |
| 0.0057 | 7.0 | 175 | 0.0110 | 0.8763 | 0.9189 | 0.8971 | 0.9975 |
| 0.0042 | 8.0 | 200 | 0.0113 | 0.8718 | 0.9189 | 0.8947 | 0.9971 |
| 0.003 | 9.0 | 225 | 0.0118 | 0.8802 | 0.9135 | 0.8966 | 0.9974 |
| 0.0022 | 10.0 | 250 | 0.0121 | 0.8877 | 0.8973 | 0.8925 | 0.9972 |
| 0.0019 | 11.0 | 275 | 0.0126 | 0.8756 | 0.9135 | 0.8942 | 0.9972 |
| 0.0016 | 12.0 | 300 | 0.0126 | 0.8802 | 0.9135 | 0.8966 | 0.9974 |
| 0.0015 | 13.0 | 325 | 0.0129 | 0.8769 | 0.9243 | 0.9 | 0.9974 |
| 0.0013 | 14.0 | 350 | 0.0128 | 0.8756 | 0.9135 | 0.8942 | 0.9974 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Inzamam567/Useless-delicate-mix
|
Inzamam567
| 2023-03-31T22:48:33Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-03-31T22:28:38Z |
---
duplicated_from: NoCrypt/delicate-mix
---
|
Inzamam567/Useless-7pa
|
Inzamam567
| 2023-03-31T22:42:57Z | 11 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-31T22:07:14Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
duplicated_from: AIARTCHAN/7pa
---
# 7pa
[Original post](https://arca.live/b/aiart/70729603)
[civitai](https://civitai.com/models/13468)
# Download
- [original 4.27GB](https://civitai.com/api/download/models/15869)
- [fp16 2.13GB](https://huggingface.co/AIARTCHAN/7pa/blob/main/7pa-fp16.safetensors)
7th anime v3 + Pastel + AbyssOrange2 (sfw)




|
wizofavalon/vit-base-patch16-224-finetuned-flower
|
wizofavalon
| 2023-03-31T22:37:04Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-31T22:24:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
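A minimal usage sketch with the image-classification pipeline; `flower.jpg` is a placeholder path, and the class names come from the `imagefolder` dataset used for fine-tuning:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="wizofavalon/vit-base-patch16-224-finetuned-flower",
)
print(clf("flower.jpg"))  # top predicted flower classes with scores
```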
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
amannlp/ppo-LunarLander-v2
|
amannlp
| 2023-03-31T22:22:59Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T22:22:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.08 +/- 34.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
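A minimal sketch of what the snippet above might look like once filled in; the checkpoint filename is an assumption, so check the repository's file list for the actual name:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="amannlp/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```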
|
letingliu/my_awesome_model_tweets
|
letingliu
| 2023-03-31T22:22:34Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-07T05:40:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: letingliu/my_awesome_model_tweets
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# letingliu/my_awesome_model_tweets
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5490
- Validation Loss: 0.5429
- Train Accuracy: 0.6692
- Epoch: 19
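A minimal usage sketch; the checkpoint was trained with Keras/TensorFlow, so it is loaded with the TF auto classes (what the two output classes mean is not documented in this card):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("letingliu/my_awesome_model_tweets")
model = TFAutoModelForSequenceClassification.from_pretrained("letingliu/my_awesome_model_tweets")

inputs = tok("I love this!", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```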
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 40, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6582 | 0.6337 | 0.6692 | 0 |
| 0.6230 | 0.6035 | 0.6692 | 1 |
| 0.6015 | 0.5766 | 0.6692 | 2 |
| 0.5738 | 0.5533 | 0.6692 | 3 |
| 0.5540 | 0.5429 | 0.6692 | 4 |
| 0.5534 | 0.5429 | 0.6692 | 5 |
| 0.5515 | 0.5429 | 0.6692 | 6 |
| 0.5524 | 0.5429 | 0.6692 | 7 |
| 0.5455 | 0.5429 | 0.6692 | 8 |
| 0.5463 | 0.5429 | 0.6692 | 9 |
| 0.5380 | 0.5429 | 0.6692 | 10 |
| 0.5494 | 0.5429 | 0.6692 | 11 |
| 0.5467 | 0.5429 | 0.6692 | 12 |
| 0.5382 | 0.5429 | 0.6692 | 13 |
| 0.5562 | 0.5429 | 0.6692 | 14 |
| 0.5517 | 0.5429 | 0.6692 | 15 |
| 0.5462 | 0.5429 | 0.6692 | 16 |
| 0.5456 | 0.5429 | 0.6692 | 17 |
| 0.5499 | 0.5429 | 0.6692 | 18 |
| 0.5490 | 0.5429 | 0.6692 | 19 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
dvruette/oasst-llama-13b-1000-steps
|
dvruette
| 2023-03-31T22:22:28Z | 1,494 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-31T19:22:45Z |
https://wandb.ai/open-assistant/supervised-finetuning/runs/17boywm8?workspace=
|
Inzamam567/Useless-somethingv3
|
Inzamam567
| 2023-03-31T22:14:27Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-03-31T22:14:26Z |
---
duplicated_from: NoCrypt/SomethingV3
---
|
Inzamam567/Useless-SukiyakiMix-v1.0
|
Inzamam567
| 2023-03-31T22:01:54Z | 0 | 5 | null |
[
"stable-diffusion",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-31T22:01:54Z |
---
license: creativeml-openrail-m
language:
- ja
tags:
- stable-diffusion
- text-to-image
duplicated_from: Vsukiyaki/SukiyakiMix-v1.0
---
# ◆ SukiyakiMix-v1.0
**SukiyakiMix-v1.0** is a model created by merging **AbyssOrangeMix2** into a **pastel-mix** base.
## VAE:
Use whichever VAE you prefer. The recommended one is **orangemix.vae.pt** from [WarriorMama777/OrangeMixs](https://huggingface.co/WarriorMama777/OrangeMixs).
<hr>
# ◆ Recipe
This model was produced by a **simple** merge of the following two models.
<dl>
<dt><a href="https://huggingface.co/andite/pastel-mix">andite/pastel-mix</a></dt>
<dd>└ pastel-mix</dd>
<dt><a href="https://huggingface.co/WarriorMama777/OrangeMixs">WarriorMama777/OrangeMixs</a></dt>
<dd>└ AbyssOrangeMix2_sfw (AOM2s)</dd>
</dl>
| Model A | Model B | Ratio |
| :--------: | :-------------------------: | :-----: |
| pastel-mix | AbyssOrangeMix2_sfw (AOM2s) | 60 : 40 |
Note: the weights were not varied per U-Net block.<br>
Note: the merge was performed with the merge script from [merge-models](https://github.com/eyriewow/merge-models).
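A rough sketch of the simple 60:40 weighted merge described above; this is not the merge-models script itself, and the local checkpoint file names are placeholders:
```python
import torch

ratio = 0.6  # pastel-mix : AbyssOrangeMix2_sfw = 60 : 40
a = torch.load("pastel-mix.ckpt", map_location="cpu")["state_dict"]
b = torch.load("AbyssOrangeMix2_sfw.ckpt", map_location="cpu")["state_dict"]

# Interpolate every tensor the two checkpoints have in common.
merged = {k: ratio * a[k] + (1.0 - ratio) * b[k] for k in a.keys() & b.keys()}
torch.save({"state_dict": merged}, "SukiyakiMix-v1.0.ckpt")
```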
<hr>
# ◆ Licence
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
<br>
#### 【和訳】
このモデルはオープンアクセスであり、すべての人が利用できます。CreativeML OpenRAIL-M ライセンスにより、権利と使用方法がさらに規定されています。CreativeML OpenRAIL ライセンスでは、次のことが規定されています。
1. モデルを使用して、違法または有害な出力またはコンテンツを意図的に作成または共有することはできません。
2. 作成者は、あなたが生成した出力に対していかなる権利も主張しません。あなたはそれらを自由に使用でき、ライセンスに設定された規定に違反してはならない使用について説明責任を負います。
3. 重みを再配布し、モデルを商用および/またはサービスとして使用することができます。その場合、ライセンスに記載されているのと同じ使用制限を含め、CreativeML OpenRAIL-M のコピーをすべてのユーザーと共有する必要があることに注意してください。 (ライセンスを完全にかつ慎重にお読みください。) [こちらからライセンス全文をお読みください。](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
<br>
🚫 本モデルを商用の画像生成サービスで利用する行為 <br>
Use of this model for commercial image generation services
🚫 本モデルや本モデルをマージしたモデルを販売する行為<br>
The act of selling this model or a model merged with this model
🚫 本モデルを使用し意図的に違法な出力をする行為 <br>
Intentionally using this model to produce illegal output
🚫 本モデルをマージしたモデルに異なる権限を与える行為 <br>
Have different permissions when sharing
🚫 本モデルをマージしたモデルを配布または本モデルを再配布した際に同じ使用制限を含め、CreativeML OpenRAIL-M のコピーをすべてのユーザーと共有しない行為 <br>
The act of not sharing a copy of CreativeML OpenRAIL-M with all users, including the same usage restrictions when distributing or redistributing a merged model of this model.
⭕ 本モデルで生成した画像を商用利用する行為 <br>
Commercial use of images generated by this model
⭕ 本モデルを使用したマージモデルを使用または再配布する行為 <br>
Use or redistribution of merged models using this model
⭕ 本モデルのクレジット表記をせずに使用する行為 <br>
Use of this model without crediting the model
<hr>
# ◆ Examples
### NMKD SD-GUI-1.8.1-NoMdl
- VAE: orangemix.vae.pt
<img src="https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0/resolve/main/imgs/Example1.png" width="512px">
```
Positive:
(best quality)+,(masterpiece)++,(ultra detailed)++,cute girl,
Negative:
(low quality, worst quality)1.4, (bad anatomy)+, (inaccurate limb)1.3,bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms)1.2,logo,text
Steps: 20
CFG Scale: 8
Size: 1024x1024 (High-Resolution Fix)
Seed: 1696068555
Sampler: PLMS
```
<br>
<img src="https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0/resolve/main/imgs/Example2.png" width="512px">
```
Positive:
(best quality)+,(masterpiece)++,(ultra detailed)++,cute girl,
Negative:
(low quality, worst quality)1.4, (bad anatomy)+, (inaccurate limb)1.3,bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms)1.2,logo,text
Steps: 20
CFG Scale: 8
Size: 1024x1024 (High-Resolution Fix)
Seed: 1596727034
Sampler: DDIM
```
<br>
<img src="https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0/resolve/main/imgs/Example3.png" width="512px">
```
Positive:
(best quality)+,(masterpiece)++,(ultra detailed)++,sharp focus,cute little girl sitting in a messy room,Roomful of sundries,black hair,long hair,blush,clutter,miscellaneous goods are placed in a mess,wide shot,smile,light particles,hoodie,Bookshelves, drink, cushions, chairs, desks, game equipment, crayons, drawing paper
Negative:
(low quality, worst quality)1.4, (bad anatomy)+, (inaccurate limb)1.3,bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms)1.2,logo,text
Steps: 80
CFG Scale: 8
Size: 1024x1024 (High-Resolution Fix)
Seed: 629024761
Sampler: DPM++ 2
```
<br>
<img src="https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0/resolve/main/imgs/Example4.png" width="512px">
```
Positive:
(masterpiece, best quality, ultra detailed)++,cute girl sitting at a desk in a girlish room filled with furniture, surrounded by various gaming devices and other tech,Include details such as the room's vibrant,pink hair,blue eyes,short hair,cat ears,smile,playful,creative
Negative:
(low quality, worst quality)1.4, (bad anatomy)+, (inaccurate limb)1.2,bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms)1.2,(2 girl)
Steps: 80
CFG Scale: 8
Size: 1024x768 (High-Resolution Fix)
Seed: 1887602021
Sampler: DPM++ 2
```
<br>
### stable-diffusion-webui
- VAE: orangemix.vae.pt
<img src="https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0/resolve/main/imgs/Example5.png" width="512px">
```
Positive:
(best quality)+,(masterpiece)++,(ultra detailed)++,cute girl,school uniform
Negative:
(low quality, worst quality)1.4, (bad anatomy)+, (inaccurate limb)1.3,bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms)1.2,logo,text
Steps: 50
CFG Scale: 8
Size: 512x768
Seed: 3357075383
Sampler: DPM++ SDE Karras
```
<br>
<img src="https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0/resolve/main/imgs/Example6.png" width="512px">
```
Positive:
(best quality)+,(masterpiece)++,(ultra detailed)++,a girl,messy room
Negative:
(low quality, worst quality)1.4, (bad anatomy)+, (inaccurate limb)1.3,bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms)1.2,logo,text
Steps: 20
CFG Scale: 7
Size: 1024x1024
Seed: 1103020084
Sampler: DPM++ SDE Karras
```
<hr>
Twiter: [@Vsukiyaki_AIArt](https://twitter.com/Vsukiyaki_AIArt)
|
NiltonAlf18/eros
|
NiltonAlf18
| 2023-03-31T21:54:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T21:52:45Z |
---
license: creativeml-openrail-m
---
|
miki030/dqn-SpaceInvadersNoFrameskip-v4
|
miki030
| 2023-03-31T21:51:16Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T13:00:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 601.50 +/- 223.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga miki030 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga miki030 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga miki030
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
lunnan/Reinforce-CartPole-v1
|
lunnan
| 2023-03-31T21:46:43Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T21:46:32Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Inzamam567/Useless-X-mix
|
Inzamam567
| 2023-03-31T21:34:49Z | 24 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-31T21:34:49Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
pipeline_tag: text-to-image
duplicated_from: les-chien/X-mix
---
# X-mix
**Civitai**: [X-mix | Stable Diffusion Checkpoint | Civitai](https://civitai.com/models/13069/x-mix)
X-mix is a merged model for generating anime images. My English is not very good, so some parts of this article may be unclear.
## V2.0
V2.0 is a merged model based on V1.0. This model supports nsfw.
### Difference from V1.0
- The performance of V2.0 is not better than that of V1.0, but the generated images now exhibit a different artistic style.
- V2.0 offers better support for nsfw than V1.0, but the drawback is that even when you do not intend to generate an nsfw image, there is still a possibility of generating one. If you are more interested in the sfw model, I will provide a detailed explanation in the recipe section.
- In my opinion, V2.0 is not as user-friendly as V1.0, and it appears to be more challenging to generate an excellent image.
### Recommended Settings
- Sampler: DPM++ SDE Karras (sfw), DDIM (nsfw)
- Steps: 20 (DDIM may require more steps)
- CFG Scale: 5
- Hires upscale: Latent (bicubic antialiased), Latent (nearest-exact), Denoising strength: 0.4~0.7
- vae: NAI.vae
- Clip skip: 2
- ENSD: 31337
- Eta: 0.67
### Example

```
masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 4291846267, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Eta: 0.67
```

```
Indoor, bright, 1Girl, gray hair, amber eyes, smile, black dress, barefoot, sitting posture,
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2118045521, Size: 600x400, Model hash: 7961a4960e, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```
%2C%20white%20t.png)
```
landscape, in spring, cherry blossoms, cloudy sky, 1girl, solo, long blue hair, smirk, pink eyes, (school uniform:1.05), white thighhighs,
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 3093571233, Size: 400x600, Model hash: 7961a4960e, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
1girl, on bed, wet, see-through shirt, thighhighs, cleavage, collarbone, full body,
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 986400693, Size: 512x512, Model hash: 7961a4960e, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```
%2C%20solo%2C%20Flowery%20meadow%2C%20cloudy%20sky%2C%20aqua%20eyes%2C%20white%20pantyhose%2C%20blonde%20hair%2C.png)
```
Alice \(Alice in wonderland\), solo, Flowery meadow, cloudy sky, aqua eyes, white pantyhose, blonde hair,
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 273840053, Size: 512x512, Model hash: 7961a4960e, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2212365348, Size: 512x512, Model hash: 7961a4960e, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Eta: 0.67
```

```
masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DDIM, CFG scale: 5, Seed: 293615512, Size: 512x512, Model hash: 7961a4960e, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

```
1girl, solo, black skirt, blue eyes, electric guitar, guitar, headphones, holding, holding plectrum, instrument, long hair, , music, one side up, pink hair, playing guitar, pleated skirt, black shirt, indoors
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2, DeepNegative-V1-75T
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 3442031040, Size: 512x512, Model hash: 7961a4960e, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Eta: 0.67
```
### Recipe
**Step 1:** animefull-latest (model) + pastelmix-lora (lora) + ligneClaireStyleCogecha (lora) = pastel-Cogecha
You can try replacing animefull-latest with Anything-V3.0 or your preferred model. However, I cannot confirm if this will yield better results and it requires you to experiment with it on your own.
**Step 2:** MBW: Chilloutmix + X-mix-V1.0
| Model A | Model B | base_alpha | Weight | Merge Name |
| ----------- | ---------- | ---------- | ------------------------------------------------- | --------------- |
| Chilloutmix | X-mix-V1.0 | 1 | 1,1,1,1,1,1,1,1,0,0,0,0,1,0,0,0,0,1,1,1,1,1,1,1,1 | X-mix-V2.0-base |
This is the step for the sfw version. The nsfw version was built as follows: I merged several LoRAs into Chilloutmix to obtain Chilloutmix-nsfw, then merged Chilloutmix-nsfw with X-mix-V1.0 to get X-mix-V2.0-nsfwBase1, and finally merged several LoRAs into X-mix-V2.0-nsfwBase1 to get X-mix-V2.0-nsfwBase2.
LoRAs related to real people should be merged into Chilloutmix or other photo-realistic models that you like, while LoRAs related to anime should be merged into X-mix-V2.0-base. Which LoRAs to use depends on your preference.
**Step 3:** MBW: pastel-Cogecha + X-mix-V2.0-base
| Model A | Model B | base_alpha | Weight | Merge Name |
| -------------- | --------------- | ---------- | ------------------------------------------------------- | -------------- |
| pastel-Cogecha | X-mix-V2.0-base | 0 | 1,1,1,1,1,0.3,0,0,0,1,0.1,1,1,1,1,1,0,1,0,1,1,0.2,1,1,1 | X-mix-V2.0-sfw |
In fact, I never tried to obtain the sfw version because I didn't plan on using it from the beginning. So this process is for reference only, and I am not sure about the actual effect of the sfw model.
## V1.0
I have forgotten the recipe for X-mix-V1.0, as too many models were used for merging. This model supports nsfw, but the effect may not be very good.
### Recommended Settings
- Sampler: DPM++ SDE Karras
- Steps: 20
- CFG Scale: 5
- Hires upscaler: Latent (bicubic antialiased), Denoising strength: 0.5~0.6
- vae: NAI.vae
- Clip skip: 2
- ENSD: 31337
- Eta: 0.67
### Examples

```
masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 1906918205, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
Indoor, bright, 1girl, gray hair, amber eyes, smile, black dress, barefoot, sitting posture,
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2118045521, Size: 600x400, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
landscape, in spring, cherry blossoms, cloudy sky, 1girl, solo, long blue hair, smirk, pink eyes, (school uniform:1.05), white thighhighs,
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 3093571233, Size: 400x600, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
1girl, on bed, wet, see-through shirt, thighhighs, cleavage, collarbone, full body,
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 1666118295, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
Alice \(Alice in wonderland\), solo, Flowery meadow, cloudy sky, aqua eyes, white pantyhose, blonde hair,
Negative prompt: EasyNegative, sketch by bad-artist
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 807449917, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Eta: 0.67
```

```
masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 116927034, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos
Negative prompt: EasyNegative, photograph by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 1705759664, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```

```
1girl, solo, black skirt, blue eyes, electric guitar, guitar, headphones, holding, holding plectrum, instrument, long hair, , music, one side up, pink hair, playing guitar, pleated skirt, black shirt, indoors
Negative prompt: EasyNegative, by bad-artist, bad_prompt_version2
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2548407675, Size: 512x512, Model hash: 7bc4c05c90, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Eta: 0.67
```
## Embedding
If you need the embedding used in examples, click them:
- **EasyNegative:** [embed/EasyNegative · Hugging Face](https://huggingface.co/embed/EasyNegative)
- **bad-artist:** [nick-x-hacker/bad-artist · Hugging Face](https://huggingface.co/nick-x-hacker/bad-artist)
- **bad_prompt_version2:** [embed/bad_prompt · Hugging Face](https://huggingface.co/embed/bad_prompt)
- **Deep Negative V1.x:** [Deep Negative V1.x | Stable Diffusion TextualInversion | Civitai](https://civitai.com/models/4629/deep-negative-v1x)
You can consider whether to use them according to your preferences.
## More
1. Since my prompts are usually brief, I'm not sure if this model will be able to meet all of your requirements if you need to use a large number of prompts.
2. Using low resolution is **not recommended** for generating pictures.
3. I did my best, but the hands are not perfect.
4. The above settings may not necessarily be perfect.
5. Due to my computer's performance, it's difficult for me to comprehensively test this model. I'm looking forward to your feedback.
|
stevied67/pegasus-subreddit-comments-summarizer
|
stevied67
| 2023-03-31T21:26:05Z | 109 | 2 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:stevied67/autotrain-data-pegasus-subreddit-comments-summarizer",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-03-31T20:13:27Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- stevied67/autotrain-data-pegasus-subreddit-comments-summarizer
co2_eq_emissions:
emissions: 27.833269754820982
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 45559114001
- CO2 Emissions (in grams): 27.8333
## Validation Metrics
- Loss: 1.467
- Rouge1: 51.832
- Rouge2: 25.213
- RougeL: 40.226
- RougeLsum: 45.554
- Gen Len: 57.035
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/stevied67/autotrain-pegasus-subreddit-comments-summarizer-45559114001
```
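The same Inference API call can also be made from Python; this is a sketch that reads your Hugging Face API token from an environment variable instead of hard-coding it:
```python
import os
import requests

API_URL = "https://api-inference.huggingface.co/stevied67/autotrain-pegasus-subreddit-comments-summarizer-45559114001"
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

resp = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(resp.json())
```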
|
ninja/assis
|
ninja
| 2023-03-31T21:15:12Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-31T16:31:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: assis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# assis
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3864
- Wer: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.006 | 15.62 | 1000 | 2.9606 | 1 |
| 2.8532 | 31.25 | 2000 | 2.8553 | 1 |
| 0.6421 | 46.88 | 3000 | 0.5418 | 1 |
| 0.3404 | 62.5 | 4000 | 0.4027 | 1 |
| 0.2801 | 78.12 | 5000 | 0.3864 | 1 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Inzamam567/Useless-Defmix-v2.0
|
Inzamam567
| 2023-03-31T21:13:58Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-03-31T21:13:58Z |
---
duplicated_from: Defpoint/Defmix-v2.0
---
<br>
# ■*Defmix-v2.0*
◎<strong>*Defmix-v2.0*</strong>は、下記のモデルをMBWによって*U-Net*の階層ごとに重みを変化させてマージしたモデルです。<br>
<strong>*Defmix-v2.0*</strong> is a model that merges the following models by adjusting the weights of each layer in *U-Net*.<br>
- <strong>*Counterfeit v2.5*</strong>
- <strong>*Basil Mix*</strong>
- <strong>*Abyss Orange Mix v3.0 A2*</strong>
◎*Vae*ファイルは好みのものを使用してください。<br>
Please use the *Vae* file of your preference.<br>
<br>
# ■*Examples*
◎*ControlNet*が登場したことから、このモデルは*Defmix-v1.0*と異なり、構図や人物と背景のバランスよりも全体の描画力や質感を重視しています。<br>
With the introduction of *ControlNet*, this model, unlike *Defmix-v1.0*, emphasizes overall drawing power and texture rather than composition and balance between characters and backgrounds.<br>
◎現在広く使われている<strong>クオリティタグ(best qualityやmasterpieceなど)を使用してなくても</strong>、高品質な画像が出力されるように調整しています。<br>
I have adjusted the output to ensure high-quality images are produced, <strong>even without using commonly used Quality Tags</strong> such as 'best quality' or 'masterpiece'.<br>
<br>
- *Sampler: DPM++ 2M Karras*
- *Steps: 28*
- *CFG Scale: 8*
- *Clip Skip: 2*
- *Upscaler: Latent(nearest)*
- *Highres Step: 0*
- *Denoising strength: 0.6*
<br>
Positive: beautiful girl, gothic<br>
Negative: EasyNegative
<br>
<img src="https://i.imgur.com/a25fE5f.jpeg" width="768" height="768">
<br>
# ■*Important Reminders*
◎画風をかなり現実的にすることができるため、<strong>このモデルによって出力したR-18のNSFW画像をSNSサイト等で公開することはご遠慮頂きますよう</strong>、よろしくお願い致します。<br>
As this model can make the style of images quite realistic, <strong>I kindly request that you refrain from posting R-18 NSFW images generated by this model on social media or other websites.</strong> <br>
Thank you for your understanding and cooperation.
<br>
|
pregonas/a2c-PandaReachDense-v2
|
pregonas
| 2023-03-31T20:48:29Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T19:02:20Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.77 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
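A minimal sketch of what the snippet above might look like once filled in; the checkpoint filename is an assumption, and `panda_gym` must be installed so the PandaReachDense-v2 environment is registered:
```python
import gym
import panda_gym  # noqa: F401  (registers the Panda environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="pregonas/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```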
|
Yuhyunji/rare-puppers
|
Yuhyunji
| 2023-03-31T20:13:33Z | 220 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-31T20:13:22Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8939393758773804
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
mrm8488/electricidad-base-finetuned-go_emotions-es
|
mrm8488
| 2023-03-31T20:06:19Z | 130 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions-es-mt",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-03T17:43:21Z |
---
tags:
- generated_from_trainer
datasets:
- go_emotions-es-mt
metrics:
- accuracy
- f1
model-index:
- name: electricidad-base-finetuned-go_emotions-es
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions-es-mt
type: go_emotions-es-mt
config: simplified
split: train
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.5934476693051891
- name: F1
type: f1
value: 0.5806237685841615
widget:
- text: "Me gusta mucho su forma de ser"
- text: "Es una persona muy extraña..."
- text: "El dolor es desesperante"
- text: "No me esperaba una evolución tan positiva"
- text: "¡Dios mío, es enorme!"
- text: "¡Agg! Está asqueroso."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-base-finetuned-go_emotions-es
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the [go_emotions-es-mt](https://huggingface.co/datasets/mrm8488/go_emotions-es-mt) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5111
- Accuracy: 0.5934
- F1: 0.5806
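A minimal usage sketch that reproduces the widget examples above with the text-classification pipeline:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mrm8488/electricidad-base-finetuned-go_emotions-es",
)
print(clf("Me gusta mucho su forma de ser"))  # predicted emotion label and score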
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.729 | 1.0 | 2270 | 1.5835 | 0.5578 | 0.5044 |
| 1.4432 | 2.0 | 4540 | 1.4529 | 0.5842 | 0.5538 |
| 1.2688 | 3.0 | 6810 | 1.4445 | 0.5945 | 0.5770 |
| 1.1017 | 4.0 | 9080 | 1.4804 | 0.5937 | 0.5781 |
| 0.9999 | 5.0 | 11350 | 1.5111 | 0.5934 | 0.5806 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
saif-daoud/whisper-small-hi-2400_500_132
|
saif-daoud
| 2023-03-31T19:42:45Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:afrispeech-200",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-31T14:19:38Z |
---
tags:
- generated_from_trainer
datasets:
- afrispeech-200
metrics:
- wer
model-index:
- name: whisper-small-hi-2400_500_132
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: afrispeech-200
type: afrispeech-200
config: hausa
split: train
args: hausa
metrics:
- name: Wer
type: wer
value: 0.3433857983900036
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-2400_500_132
This model is a fine-tuned version of [saif-daoud/whisper-small-hi-2400_500_131](https://huggingface.co/saif-daoud/whisper-small-hi-2400_500_131) on the afrispeech-200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8127
- Wer: 0.3434
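A minimal usage sketch with the automatic-speech-recognition pipeline; `sample.wav` is a placeholder path for a local audio file:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="saif-daoud/whisper-small-hi-2400_500_132",
    chunk_length_s=30,  # handle clips longer than Whisper's 30-second window
)
print(asr("sample.wav")["text"])
```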
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0318 | 0.5 | 900 | 0.8252 | 0.3442 |
| 0.9844 | 1.5 | 1800 | 0.8127 | 0.3434 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
chriscelaya/gpt-test
|
chriscelaya
| 2023-03-31T19:42:00Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2023-03-31T19:41:35Z |
---
license: mit
language:
- en
---
|
justincinmd/ppo-LunarLander-v2
|
justincinmd
| 2023-03-31T19:29:16Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T18:52:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.64 +/- 20.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Ibnout/Taxi-v3
|
Ibnout
| 2023-03-31T19:16:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T19:16:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.14 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the small helper from the Deep RL Course notebooks that downloads
# and unpickles the saved Q-table; it is not imported here, so define it before running.
model = load_from_hub(repo_id="Ibnout/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ibnout/q-FrozenLake-v1-4x4-noSlippery
|
Ibnout
| 2023-03-31T19:10:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T14:37:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the small helper from the Deep RL Course notebooks that downloads
# and unpickles the saved Q-table; it is not imported here, so define it before running.
model = load_from_hub(repo_id="Ibnout/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Nazzyk/ppo-LunarLander-v2-u8
|
Nazzyk
| 2023-03-31T19:06:25Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T18:02:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 109.78 +/- 128.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 3000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Nazzyk/ppo-LunarLander-v2-u8'
'batch_size': 512
'minibatch_size': 128}
```
|
MarcosMunoz95/poca-SoccerTwos
|
MarcosMunoz95
| 2023-03-31T18:46:36Z | 35 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-31T18:44:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: MarcosMunoz95/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
IlyaGusev/mt0_xxl_ru_turbo_alpaca_lora
|
IlyaGusev
| 2023-03-31T18:41:13Z | 0 | 1 | null |
[
"text2text-generation",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"region:us"
] |
text2text-generation
| 2023-03-28T21:38:27Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
language:
- ru
pipeline_tag: text2text-generation
inference: false
---
|
omgavy/bert-classifier-tuned
|
omgavy
| 2023-03-31T18:36:03Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-27T19:18:51Z |
### BERT base model (uncased)
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert)
### This model is trained on [News Category Dataset](https://www.kaggle.com/datasets/rmisra/news-category-dataset).
### Labels
The label is one of four classes, indexed 0–3:
- 0: world
- 1: sport
- 2: business
- 3: tech
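A minimal inference sketch using the `transformers` pipeline (the mapping from label ids to the class names above is an assumption; check the model's `id2label` config to confirm):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="omgavy/bert-classifier-tuned")
print(classifier("Stock markets rallied after the latest earnings reports."))
```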
|
sb3/ppo-MiniGrid-KeyCorridorS3R1-v0
|
sb3
| 2023-03-31T18:13:38Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MiniGrid-KeyCorridorS3R1-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T10:31:43Z |
---
library_name: stable-baselines3
tags:
- MiniGrid-KeyCorridorS3R1-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MiniGrid-KeyCorridorS3R1-v0
type: MiniGrid-KeyCorridorS3R1-v0
metrics:
- type: mean_reward
value: 0.95 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **MiniGrid-KeyCorridorS3R1-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-KeyCorridorS3R1-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-KeyCorridorS3R1-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-KeyCorridorS3R1-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-KeyCorridorS3R1-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-KeyCorridorS3R1-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MiniGrid-KeyCorridorS3R1-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-KeyCorridorS3R1-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 500000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-MiniGrid-PutNear-6x6-N2-v0
|
sb3
| 2023-03-31T18:12:46Z | 225 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MiniGrid-PutNear-6x6-N2-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T10:30:40Z |
---
library_name: stable-baselines3
tags:
- MiniGrid-PutNear-6x6-N2-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MiniGrid-PutNear-6x6-N2-v0
type: MiniGrid-PutNear-6x6-N2-v0
metrics:
- type: mean_reward
value: 0.61 +/- 0.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **MiniGrid-PutNear-6x6-N2-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-PutNear-6x6-N2-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-PutNear-6x6-N2-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-PutNear-6x6-N2-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-PutNear-6x6-N2-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-PutNear-6x6-N2-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MiniGrid-PutNear-6x6-N2-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-PutNear-6x6-N2-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 10000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-MiniGrid-GoToDoor-5x5-v0
|
sb3
| 2023-03-31T18:12:31Z | 254 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MiniGrid-GoToDoor-5x5-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T10:30:20Z |
---
library_name: stable-baselines3
tags:
- MiniGrid-GoToDoor-5x5-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MiniGrid-GoToDoor-5x5-v0
type: MiniGrid-GoToDoor-5x5-v0
metrics:
- type: mean_reward
value: 0.56 +/- 0.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **MiniGrid-GoToDoor-5x5-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-GoToDoor-5x5-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-GoToDoor-5x5-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-GoToDoor-5x5-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-GoToDoor-5x5-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-GoToDoor-5x5-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MiniGrid-GoToDoor-5x5-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-GoToDoor-5x5-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 5000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-MiniGrid-DoorKey-5x5-v0
|
sb3
| 2023-03-31T18:11:40Z | 357 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"MiniGrid-DoorKey-5x5-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T10:29:29Z |
---
library_name: stable-baselines3
tags:
- MiniGrid-DoorKey-5x5-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MiniGrid-DoorKey-5x5-v0
type: MiniGrid-DoorKey-5x5-v0
metrics:
- type: mean_reward
value: 0.97 +/- 0.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **MiniGrid-DoorKey-5x5-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-DoorKey-5x5-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 100000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-MiniGrid-Empty-Random-5x5-v0
|
sb3
| 2023-03-31T18:11:08Z | 262 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MiniGrid-Empty-Random-5x5-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-28T12:23:13Z |
---
library_name: stable-baselines3
tags:
- MiniGrid-Empty-Random-5x5-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MiniGrid-Empty-Random-5x5-v0
type: MiniGrid-Empty-Random-5x5-v0
metrics:
- type: mean_reward
value: 0.97 +/- 0.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **MiniGrid-Empty-Random-5x5-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-Empty-Random-5x5-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-Empty-Random-5x5-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 100000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
manuelmaiorano/ppo-PyramidsTraining
|
manuelmaiorano
| 2023-03-31T18:06:47Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-31T18:06:42Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: manuelmaiorano/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dvilasuero/autotrain-alpaca-gigo-detector-45529113937
|
dvilasuero
| 2023-03-31T17:58:02Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:dvilasuero/autotrain-data-alpaca-gigo-detector",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-31T17:57:19Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dvilasuero/autotrain-data-alpaca-gigo-detector
co2_eq_emissions:
emissions: 0.3078125269826994
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 45529113937
- CO2 Emissions (in grams): 0.3078
## Validation Metrics
- Loss: 0.481
- Accuracy: 0.825
- Macro F1: 0.823
- Micro F1: 0.825
- Weighted F1: 0.825
- Macro Precision: 0.824
- Micro Precision: 0.825
- Weighted Precision: 0.825
- Macro Recall: 0.821
- Micro Recall: 0.825
- Weighted Recall: 0.825
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dvilasuero/autotrain-alpaca-gigo-detector-45529113937
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dvilasuero/autotrain-alpaca-gigo-detector-45529113937", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dvilasuero/autotrain-alpaca-gigo-detector-45529113937", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
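To map the raw `outputs` to a predicted class name, a short follow-up such as the one below should work; it relies on the `id2label` mapping stored in the model config:
```
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```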
|
arbts/Reinforce-CartPole-v1
|
arbts
| 2023-03-31T17:55:12Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T13:24:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
arbts/Reinforce-Pixelcopter-PLE-v0
|
arbts
| 2023-03-31T17:37:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T17:37:17Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.70 +/- 18.37
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NiltonAlf18/russian
|
NiltonAlf18
| 2023-03-31T17:33:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T17:32:52Z |
---
license: creativeml-openrail-m
---
|
carolinainmymind/Lunar-Lander-v2
|
carolinainmymind
| 2023-03-31T17:32:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T17:32:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.21 +/- 26.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
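Until the card is filled in, a minimal loading-and-evaluation sketch looks like this; the checkpoint filename is an assumption, so check the repository's file listing:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The .zip filename below is assumed; adjust it to the file actually stored in the repo.
checkpoint = load_from_hub(repo_id="carolinainmymind/Lunar-Lander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```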
|
kfahn/dreambooth-mandelbulb
|
kfahn
| 2023-03-31T17:32:02Z | 3 | 0 |
KerasCV Stable Diffusion in Diffusers
|
[
"KerasCV Stable Diffusion in Diffusers",
"tf-keras",
"text-to-image",
"license:openrail",
"region:us"
] |
text-to-image
| 2023-03-31T15:34:11Z |
---
library_name: KerasCV Stable Diffusion in Diffusers
license: openrail
pipeline_tag: text-to-image
---
## Model description
DreamBooth model for mandelbulb-hydrangea hybrid.
## Intended uses & limitations
More information needed
## Training and evaluation data
Generative art
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| inner_optimizer.class_name | Custom>RMSprop |
| inner_optimizer.config.name | RMSprop |
| inner_optimizer.config.weight_decay | None |
| inner_optimizer.config.clipnorm | None |
| inner_optimizer.config.global_clipnorm | None |
| inner_optimizer.config.clipvalue | None |
| inner_optimizer.config.use_ema | False |
| inner_optimizer.config.ema_momentum | 0.99 |
| inner_optimizer.config.ema_overwrite_frequency | 100 |
| inner_optimizer.config.jit_compile | True |
| inner_optimizer.config.is_legacy_optimizer | False |
| inner_optimizer.config.learning_rate | 0.0010000000474974513 |
| inner_optimizer.config.rho | 0.9 |
| inner_optimizer.config.momentum | 0.0 |
| inner_optimizer.config.epsilon | 1e-07 |
| inner_optimizer.config.centered | False |
| dynamic | True |
| initial_scale | 32768.0 |
| dynamic_growth_steps | 2000 |
| training_precision | mixed_float16 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
Mithul/rl_course_vizdoom_health_gathering_supreme
|
Mithul
| 2023-03-31T17:28:19Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T17:27:44Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.19 +/- 4.53
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Mithul/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
bjoernp/alpaca-cerebras-6.7B
|
bjoernp
| 2023-03-31T17:21:19Z | 0 | 3 |
transformers
|
[
"transformers",
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:tatsu-lab/alpaca",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-03-31T16:02:53Z |
---
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
- tatsu-lab/alpaca
language:
- en
library_name: transformers
---
# Model Card for Alpaca Cerebras-6.7B LoRA
This repository contains the adapter weights for the [Cerebras-6.7B](https://huggingface.co/cerebras/Cerebras-GPT-6.7B) model finetuned on the
cleaned version of the alpaca dataset following [github.com/tloen/alpaca-lora](https://github.com/tloen/alpaca-lora). Find the code used
for finetuning at our fork: [github.com/bjoernpl/cerebras-lora](https://github.com/bjoernpl/cerebras-lora).
## Model Details
### Model Description
_Copied from [cerebras/Cerebras-GPT-6.7B](https://huggingface.co/cerebras/Cerebras-GPT-6.7B) model card:_
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with Chinchilla scaling laws (20 tokens per model parameter) which is compute-optimal.
These models were trained on the Andromeda AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' weight streaming technology simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the Cerebras Model Studio. Cerebras CS-2 compatible checkpoints are available in Cerebras Model Zoo.
* Developed by: [Cerebras Systems](https://www.cerebras.net/) finetuned by [Björn P.](https://github.com/bjoernpl).
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture with LoRA adapter
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
## Quickstart
See [github.com/bjoernpl/cerebras-lora](https://github.com/bjoernpl/cerebras-lora) for a Gradio demo and more code.
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-6.7B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-6.7B", torch_dtype=torch.float16, device_map='auto', load_in_8bit=True)
model = PeftModel.from_pretrained(model, "bjoernp/alpaca-cerebras-6.7B", torch_dtype=torch.float16, device_map='auto')
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Environmental Impact
Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.432 kgCO<sub>2</sub>eq/kWh. A cumulative of 5 hours of computation was performed on hardware of type RTX 3090Ti (TDP of 450W).
Total emissions are estimated to be 0.97 kgCO<sub>2</sub>eq, of which 0 percent was directly offset.
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** RTX 3090Ti
- **Hours used:** 5
- **Carbon Emitted:** 0.97 kgCO<sub>2</sub>eq
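(As a sanity check, the figure matches the stated setup: 0.450 kW × 5 h × 0.432 kgCO<sub>2</sub>eq/kWh ≈ 0.97 kgCO<sub>2</sub>eq.)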
|
n6ai-archive/lowdef
|
n6ai-archive
| 2023-03-31T17:13:44Z | 0 | 1 | null |
[
"stable diffusion",
"style",
"hypernetwork",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T09:33:18Z |
---
license: creativeml-openrail-m
task_categories:
- text-to-image
tags:
- stable diffusion
- style
- hypernetwork
pretty_name: lowdef
base_model: runwayml/stable-diffusion-v1-5
---

# Lowdef
Lowdef is a model trained on a stylized lowpoly dataset that captures a unique low-definition style (base model [SD 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)). It's not meant to be used stand-alone but with other checkpoints.
## Auto1111 Quick Start
Instructions for use with Stable Diffusion Web UI.
### Hypernetwork
1. Download the [`lowdef.pt`](https://huggingface.co/n6ai/lowdef/resolve/main/lowdef.pt) file.
2. Place the downloaded `lowdef.pt` file inside `stable-diffusion-webui/models/hypernetworks` directory. If the `hypernetworks` directory doesn't exist simply create it.
3. Add `<hypernet:lowdef:0.25>` to your prompt and adjust the blend to your liking.
**Example**
```xml
Your Prompt <hypernet:lowdef:0.25>
```
## Best Practices
> ⚠️ The model is quite aggressive and more unpredictable at higher blend values.
- Use a blend between `0.1` and `0.3`.
- Generate multiple images at once, minimum `4`.
- Use `Lowdef` with other artistic checkpoints.
|
dvilasuero/alpaca-gigo-detector
|
dvilasuero
| 2023-03-31T16:52:48Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-03-31T14:13:59Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# argilla/alpaca-gigo-detector
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("argilla/alpaca-gigo-detector")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
JamexX90/Cutiemixes
|
JamexX90
| 2023-03-31T16:23:27Z | 0 | 1 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-03-08T18:41:06Z |
---
license: cc-by-nc-4.0
---
Just a random goofy merge I did. It's not great, but feel free to use it if you like it.

|
asenella/reproducing_mmvae_2
|
asenella
| 2023-03-31T16:08:44Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-03-31T16:08:41Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
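For this particular checkpoint, the path would be `hf_hub_path="asenella/reproducing_mmvae_2"`.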
|
dctrain/sd-class-butterflies-32
|
dctrain
| 2023-03-31T16:07:10Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-03-31T16:06:29Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dctrain/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
lunnan/dqn-SpaceInvadersNoFrameskip-v4
|
lunnan
| 2023-03-31T15:55:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T15:54:17Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 664.00 +/- 139.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lunnan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lunnan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lunnan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Dewa/bert-finetuned-ner
|
Dewa
| 2023-03-31T15:48:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-31T10:23:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9276294098252555
- name: Recall
type: recall
value: 0.9469875462807136
- name: F1
type: f1
value: 0.9372085276482345
- name: Accuracy
type: accuracy
value: 0.9853270147760052
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0675
- Precision: 0.9276
- Recall: 0.9470
- F1: 0.9372
- Accuracy: 0.9853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.078 | 1.0 | 1756 | 0.0712 | 0.9212 | 0.9364 | 0.9287 | 0.9829 |
| 0.0288 | 2.0 | 3512 | 0.0682 | 0.9281 | 0.9472 | 0.9375 | 0.9853 |
| 0.0149 | 3.0 | 5268 | 0.0675 | 0.9276 | 0.9470 | 0.9372 | 0.9853 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
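For quick inference, a token-classification pipeline along these lines should work (a sketch; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="Dewa/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```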
|
Zyes/Trial_and_erorr_1
|
Zyes
| 2023-03-31T15:48:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T15:48:24Z |
---
license: creativeml-openrail-m
---
|
Harshil13/botGPT2_Context_v1
|
Harshil13
| 2023-03-31T15:43:25Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-28T06:14:31Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: botGPT2_Context_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# botGPT2_Context_v1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3524
- Train Accuracy: 0.0000
- Train Perplexity: 18824.3340
- Validation Loss: 0.3106
- Validation Accuracy: 0.0
- Validation Perplexity: 39785.5430
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 16381, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Perplexity | Validation Loss | Validation Accuracy | Validation Perplexity | Epoch |
|:----------:|:--------------:|:----------------:|:---------------:|:-------------------:|:---------------------:|:-----:|
| 0.6295 | 0.0032 | 100042.4062 | 0.3106 | 0.0 | 39785.5273 | 0 |
| 0.3528 | 0.0000 | 18560.1328 | 0.3106 | 0.0 | 39785.5391 | 1 |
| 0.3525 | 0.0000 | 18773.9668 | 0.3106 | 0.0 | 39785.5156 | 2 |
| 0.3525 | 0.0 | 18342.8223 | 0.3106 | 0.0 | 39785.5078 | 3 |
| 0.3525 | 0.0000 | 19026.9180 | 0.3106 | 0.0 | 39785.5508 | 4 |
| 0.3526 | 0.0 | 19108.625 | 0.3106 | 0.0 | 39785.5195 | 5 |
| 0.3526 | 0.0000 | 19143.7520 | 0.3106 | 0.0 | 39785.5312 | 6 |
| 0.3525 | 0.0000 | 18503.0938 | 0.3106 | 0.0 | 39785.5195 | 7 |
| 0.3524 | 0.0000 | 18824.3340 | 0.3106 | 0.0 | 39785.5430 | 8 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
helpingstar/poca-SoccerTwos12M
|
helpingstar
| 2023-03-31T15:28:34Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-31T15:28:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: helpingstar/poca-SoccerTwos12M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
junklivs/distilbert-base-uncased-finetuned-cola
|
junklivs
| 2023-03-31T15:25:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-31T13:28:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5361146089547957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8228
- Matthews Correlation: 0.5361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.5480 | 0.4006 |
| 0.3496 | 2.0 | 1070 | 0.5164 | 0.4819 |
| 0.2387 | 3.0 | 1605 | 0.6022 | 0.5138 |
| 0.1779 | 4.0 | 2140 | 0.7458 | 0.5280 |
| 0.127 | 5.0 | 2675 | 0.8228 | 0.5361 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
keskesm/ppo-LunarLander-v2
|
keskesm
| 2023-03-31T15:20:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T15:20:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.43 +/- 25.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ahana/my_awesome_billsum_model
|
ahana
| 2023-03-31T15:08:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-31T13:23:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1531
- Rouge1: 0.1799
- Rouge2: 0.1086
- Rougel: 0.1599
- Rougelsum: 0.1598
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.2365 | 1.0 | 1635 | 0.1723 | 0.1782 | 0.1055 | 0.1575 | 0.1575 | 19.0 |
| 0.209 | 2.0 | 3270 | 0.1596 | 0.1787 | 0.1067 | 0.1584 | 0.1584 | 19.0 |
| 0.1986 | 3.0 | 4905 | 0.1545 | 0.1794 | 0.1079 | 0.1593 | 0.1593 | 19.0 |
| 0.1917 | 4.0 | 6540 | 0.1531 | 0.1799 | 0.1086 | 0.1599 | 0.1598 | 19.0 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
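For quick inference, a summarization pipeline sketch is shown below; prefixing the input with "summarize: " follows the usual T5 recipe and is an assumption about how this checkpoint expects its inputs:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahana/my_awesome_billsum_model")
text = "summarize: The bill directs the Secretary of Energy to establish a grant program for state energy offices ..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```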
|
vcncolin/ppo-LunarLander-v2
|
vcncolin
| 2023-03-31T15:04:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T14:29:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.26 +/- 15.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LarryAIDraw/yaeMikoRealisticAnime_offset
|
LarryAIDraw
| 2023-03-31T14:52:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T14:51:13Z |
---
license: creativeml-openrail-m
---
|
csukuangfj/sherpa-onnx-lstm-en-2023-02-17
|
csukuangfj
| 2023-03-31T14:47:55Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-02-17T07:28:18Z |
---
license: apache-2.0
---
# Speech recognition with Next-gen Kaldi
The torchscript model is from
<https://huggingface.co/csukuangfj/icefall-asr-librispeech-lstm-transducer-stateless2-2022-09-03>
|
pregonas/a2c-AntBulletEnv-v0
|
pregonas
| 2023-03-31T14:43:38Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T14:42:28Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1749.05 +/- 115.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
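Until the card is filled in, a minimal loading-and-evaluation sketch is shown below; the checkpoint filename is an assumption, and if training used `VecNormalize` the saved statistics would need to be loaded as well:
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0 with gym)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# The .zip filename below is assumed; adjust it to the file actually stored in the repo.
checkpoint = load_from_hub(repo_id="pregonas/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

eval_env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```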
|
SEVUNX/JURGENIME-MIX
|
SEVUNX
| 2023-03-31T14:28:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-24T11:54:50Z |
---
license: creativeml-openrail-m
---
|
lavera/epic-diffusion-v1.1-controlnet-hed
|
lavera
| 2023-03-31T14:25:10Z | 5 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T14:23:26Z |
---
license: creativeml-openrail-m
---
|
rubentito/hivt5-base-mpdocvqa
|
rubentito
| 2023-03-31T14:25:08Z | 77 | 5 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"DocVQA",
"Document Question Answering",
"Document Visual Question Answering",
"en",
"dataset:rubentito/mp-docvqa",
"arxiv:2212.05935",
"arxiv:1905.13648",
"license:gpl-3.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-03-31T13:55:58Z |
---
license: gpl-3.0
tags:
- DocVQA
- Document Question Answering
- Document Visual Question Answering
datasets:
- rubentito/mp-docvqa
language:
- en
---
# Hi-VT5 base fine-tuned on MP-DocVQA
This is Hierarchical Visual T5 (Hi-VT5) base fine-tuned on Multipage DocVQA (MP-DocVQA) dataset.
This model was proposed in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
- Results on the MP-DocVQA dataset are reported in Table 2.
- Training hyperparameters can be found in Table 8 of Appendix D.
<b style="color: #ff0000">Disclaimer</b>: Due to some issues, this model does not achieve as good results as the reported ones in the paper. Please refer to the [project Github](https://github.com/rubenpt91/MP-DocVQA-Framework) for more details.
## How to use
Hi-VT5 is not integrated into HF yet. Please download the code from [Github repository](https://github.com/rubenpt91/MP-DocVQA-Framework) and follow the instructions.
## Metrics
**Average Normalized Levenshtein Similarity (ANLS)**
The standard metric for text-based VQA tasks (ST-VQA and DocVQA). It evaluates the method's reasoning capabilities while smoothly penalizing OCR recognition errors.
Check [Scene Text Visual Question Answering](https://arxiv.org/abs/1905.13648) for detailed information.
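For reference, ANLS is typically defined with a threshold τ = 0.5 as:

$$\mathrm{ANLS} = \frac{1}{N}\sum_{i=1}^{N}\max_{j}\, s(a_{ij}, o_{i}), \qquad s(a_{ij}, o_{i}) = \begin{cases} 1 - \mathrm{NL}(a_{ij}, o_{i}) & \text{if } \mathrm{NL}(a_{ij}, o_{i}) < \tau \\ 0 & \text{otherwise} \end{cases}$$

where NL is the normalized Levenshtein distance between the ground-truth answer $a_{ij}$ and the predicted answer $o_{i}$.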
**Answer Page Prediction Accuracy (APPA)**
In the MP-DocVQA task, the models can provide the index of the page where the information required to answer the question is located. For this subtask, accuracy is used to evaluate the predictions, i.e. whether the predicted page is correct or not.
Check [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935) for detailed information.
## Model results
Extended experimentation can be found in Table 2 of [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
| Model | HF name | Parameters | ANLS | APPA |
|-----------------------------------------------------------------------------------|:--------------------------------------|:-------------:|:-------------:|:---------:|
| [Bert large](https://huggingface.co/rubentito/bert-large-mpdocvqa) | rubentito/bert-large-mpdocvqa | 334M | 0.4183 | 51.6177 |
| [Longformer base](https://huggingface.co/rubentito/longformer-base-mpdocvqa) | rubentito/longformer-base-mpdocvqa | 148M | 0.5287 | 71.1696 |
| [BigBird ITC base](https://huggingface.co/rubentito/bigbird-base-itc-mpdocvqa) | rubentito/bigbird-base-itc-mpdocvqa | 131M | 0.4929 | 67.5433 |
| [LayoutLMv3 base](https://huggingface.co/rubentito/layoutlmv3-base-mpdocvqa) | rubentito/layoutlmv3-base-mpdocvqa | 125M | 0.4538 | 51.9426 |
| [T5 base](https://huggingface.co/rubentito/t5-base-mpdocvqa) | rubentito/t5-base-mpdocvqa | 223M | 0.5050 | 0.0000 |
| [**Hi-VT5**](https://huggingface.co/rubentito/hivt5-base-mpdocvqa) | rubentito/hivt5-base-mpdocvqa | 316M | 0.6201 | 79.23 |
## Citation Information
```tex
@article{tito2022hierarchical,
title={Hierarchical multimodal transformers for Multi-Page DocVQA},
author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
journal={arXiv preprint arXiv:2212.05935},
year={2022}
}
```
|
anna-t/Reinforce-Pixelcopter-PLE-v0
|
anna-t
| 2023-03-31T14:22:43Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-31T13:25:22Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.20 +/- 15.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
koutch/setfit_staqt
|
koutch
| 2023-03-31T14:19:17Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-03-31T06:45:59Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# SetFit StaQT
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("koutch/setfit_staqt")
# Run inference (the example input is illustrative; replace it with your own text)
preds = model(["Example question text to score"])
```
|