modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
emmanuel17/LunarLander12 | emmanuel17 | 2022-12-07T20:43:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T20:42:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.50 +/- 21.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
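A minimal sketch of how the placeholder above could be filled in, assuming the repository stores its checkpoint as a Stable-Baselines3 `.zip` file; the filename below is a guess and should be replaced with whatever file was actually uploaded with this repo.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename -- replace with the .zip actually uploaded to this repository.
checkpoint = load_from_hub(repo_id="emmanuel17/LunarLander12", filename="LunarLander12.zip")
model = PPO.load(checkpoint)

# The loaded policy can then be evaluated, e.g. with
# stable_baselines3.common.evaluation.evaluate_policy on a LunarLander-v2 env.
```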
|
Sulroy/PPO-LunarLander-v2 | Sulroy | 2022-12-07T20:34:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T17:30:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.23 +/- 20.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
GIanlucaRub/whisper-tiny-it-7 | GIanlucaRub | 2022-12-07T20:34:20Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-07T10:55:46Z | ---
language:
- it
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny it 7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: it
split: test[:10%]
args: 'config: it, split: test'
metrics:
- name: Wer
type: wer
value: 97.56655574043262
---
# Whisper Tiny it 7
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.137834
- Wer: 97.566556
## Model description
This model is the OpenAI Whisper tiny transformer adapted for Italian audio-to-text transcription.
As part of the hyperparameter tuning process, weight decay was set to 0.1; attention dropout, encoder dropout and decoder dropout were set to 0.1;
the learning rate was set to 1e-6; and the number of decoder and encoder attention heads was set to 8.
However, this did not improve the performance on the evaluation set.
## Intended uses & limitations
The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it)
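For local use, a hedged sketch with the `transformers` ASR pipeline; the audio filename is only illustrative, and a 16 kHz mono recording is assumed.
```python
from transformers import pipeline

# "sample_it.wav" is a placeholder for your own Italian recording.
asr = pipeline("automatic-speech-recognition", model="GIanlucaRub/whisper-tiny-it-7")
print(asr("sample_it.wav")["text"])
```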
## Training and evaluation data
Data used for training is the initial 10% of train and validation of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from Mozilla Foundation.
The dataset used for evaluation is the initial 10% of test of Italian Common Voice.
## Training procedure
After loading the pre-trained model, it was fine-tuned on the dataset described above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.7353 | 3.82 | 4000 | 2.1378 | 97.5666 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2 |
farsipal/whisper-small-el | farsipal | 2022-12-07T20:27:27Z | 37 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"el",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T01:46:32Z | ---
language:
- el
license: apache-2.0
tags:
- whisper-event
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-el
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 el
type: mozilla-foundation/common_voice_11_0
config: el
split: test
args: el
metrics:
- name: Wer
type: wer
value: 25.696508172362552
---
# Whisper Small - Greek (el)
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 el dataset
for transcription in Greek.
It achieves the following results on the evaluation set:
- train_loss: 0.0615
- Wer: 20.2080
### Training results
Upon completion of training, the best model was reloaded and tested, with the following results extracted from the stdout log:
```
Loading best model from ./whisper-small-el/checkpoint-5000 (score: 20.208023774145616).
{'train_runtime': 73232.697,
'train_samples_per_second': 4.37,
'train_steps_per_second': 0.068,
'train_loss': 0.06146362095708027,
'epoch': 94.34}
TrainOutput(global_step=5000,
training_loss=0.06146362095708027,
metrics={'train_runtime': 73232.697,
'train_samples_per_second': 4.37,
'train_steps_per_second': 0.068,
'train_loss': 0.06146362095708027,
'epoch': 94.34})
```
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1.dev0
- Tokenizers 0.12.1
|
graphcore-rahult/gpt2-wikitext2 | graphcore-rahult | 2022-12-07T20:23:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-29T19:12:50Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0977
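A quick generation sketch, assuming the uploaded checkpoint loads as a standard GPT-2 model outside the IPU/`optimum-graphcore` stack (not verified from this card); the prompt and length are illustrative.
```python
from transformers import pipeline

# The card documents no intended generation setup; this is only a smoke test.
generator = pipeline("text-generation", model="graphcore-rahult/gpt2-wikitext2")
print(generator("The history of the city", max_new_tokens=30)[0]["generated_text"])
```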
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
ursus/sd-class-butterflies-32 | ursus | 2022-12-07T20:13:44Z | 2 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-07T20:13:15Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ursus/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Jaster111/ppo-LunarLander-v2 | Jaster111 | 2022-12-07T19:53:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T19:53:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.25 +/- 36.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kushal256/ppo-LunarLander-v2 | kushal256 | 2022-12-07T19:44:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T18:43:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.25 +/- 15.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
graphcore-rahult/vit-base-patch16-224-in21k-finetuned-eurosat | graphcore-rahult | 2022-12-07T19:33:53Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"vit",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T20:25:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
- Accuracy: 0.9904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0804 | 1.0 | 759 | 0.1383 | 0.9741 |
| 0.0385 | 2.0 | 1518 | 0.0756 | 0.9859 |
| 0.1211 | 3.0 | 2277 | 0.0581 | 0.9904 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
graphcore-rahult/roberta-base-finetuned-cola | graphcore-rahult | 2022-12-07T19:31:26Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T16:50:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5776
- Matthews Correlation: 0.6121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5149 | 1.0 | 534 | 0.4097 | 0.5753 |
| 0.3749 | 2.0 | 1068 | 0.4736 | 0.5927 |
| 0.1327 | 3.0 | 1602 | 0.4639 | 0.5969 |
| 0.2031 | 4.0 | 2136 | 0.5474 | 0.5696 |
| 0.1133 | 5.0 | 2670 | 0.5776 | 0.6121 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
alighasemi/fa-t5-base | alighasemi | 2022-12-07T19:29:59Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"farsi/persian",
"fa",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-06T21:37:20Z | ---
language: ["fa", "en"]
tags:
- farsi/persian
---
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only Farsi and some English embeddings left.
* The original model has 582M parameters, with 384M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 30K (top 10K English and top 20K Farsi tokens) the number of model parameters was reduced to 244M parameters, and the model size was reduced from 2.2GB to 0.9GB - 42% of the original one.
The creation of this model is described in the post [How to adapt a multilingual T5 model for a single language](https://cointegrated.medium.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) along with the source code.
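A minimal loading sketch; this is the shrunken base model rather than a task-specific fine-tune, so no generation prompt format is assumed.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the shrunken Farsi/English mT5 checkpoint and check its size.
tokenizer = AutoTokenizer.from_pretrained("alighasemi/fa-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("alighasemi/fa-t5-base")
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # the card reports ~244M
```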
|
daripaez/ppo-Huggy | daripaez | 2022-12-07T19:25:59Z | 23 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-07T19:25:51Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: daripaez/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
wmingch/distilbert-base-uncased-finetuned-emotion | wmingch | 2022-12-07T19:16:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T18:49:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249684190735334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.925
- F1: 0.9250
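A hedged inference sketch with the standard text-classification pipeline; the example sentence is illustrative.
```python
from transformers import pipeline

# Returns the predicted emotion label and score for the input text.
classifier = pipeline("text-classification", model="wmingch/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```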
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8164 | 1.0 | 250 | 0.3181 | 0.9015 | 0.8984 |
| 0.2434 | 2.0 | 500 | 0.2174 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Gnanesh5/SEF | Gnanesh5 | 2022-12-07T18:53:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-04T23:00:34Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SEF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEF
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Santi20/LunarLander-v2 | Santi20 | 2022-12-07T18:47:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T18:47:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.32 +/- 19.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Chemsseddine/bert2gpt2SUMM | Chemsseddine | 2022-12-07T18:43:18Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"fr",
"dataset:Chemsseddine/autotrain-data-bertSummGpt2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-14T00:34:06Z | ---
language: fr
widget:
- text: "Your text here"
datasets:
- Chemsseddine/autotrain-data-bertSummGpt2
co2_eq_emissions: 0.10685501288084795
---
<img src="https://huggingface.co/Chemsseddine/bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new/resolve/main/logobert2gpt2.png" alt="bert2gpt2 logo" width="200"/>
## This model is used for french summarization
- Problem type: Summarization
- Model ID: 980832493
- CO2 Emissions (in grams): 0.10685501288084795
## Validation Metrics
- Loss: 4.03749418258667
- Rouge1: 28.8384
- Rouge2: 10.7511
- RougeL: 27.0842
- RougeLsum: 27.5118
- Gen Len: 22.0625
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Chemsseddine/autotrain-bertSummGpt2-980832493
``` |
Gnanesh5/SFF | Gnanesh5 | 2022-12-07T18:37:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-04T22:43:46Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SFF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SFF
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Gnanesh5/SAF | Gnanesh5 | 2022-12-07T18:22:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-04T22:36:47Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SAF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAF
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hhsavich/accent_determinator | hhsavich | 2022-12-07T18:17:59Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-12-07T17:17:22Z |
# Model Card for LatAm Accent Determination
Wav2Vec2 Model to classify audio based on the accent of the speaker as Puerto Rican, Colombian, Venezuelan, Peruvian, or Chilean
# Table of Contents
- [Model Card for LatAm Accent Determination](#model-card-for-latam-accent-determination)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Technical Specs](#technical-specifications)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Model Card Authors](#model-card-authors)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
Wav2Vec2 Model to classify audio based on the accent of the speaker as Puerto Rican, Colombian, Venezuelan, Peruvian, or Chilean
- **Developed by:** Henry Savich
- **Shared by [Optional]:** Henry Savich
- **Model type:** Language model
- **Language(s) (NLP):** es
- **License:** openrail
- **Parent Model:** Wav2Vec2 Base
- **Resources for more information:**
- [GitHub Repo](https://github.com/HSavich/dialect_discrimination)
# Uses
## Direct Use
Classify an audio clip as Puerto Rican, Peruvian, Venezuelan, Colombian, or Chilean Spanish
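A possible way to run that classification with the `transformers` audio-classification pipeline; the filename is a placeholder, and a 16 kHz mono clip is assumed to match the training data.
```python
from transformers import pipeline

# "speaker_clip.wav" is a placeholder for your own recording.
accent_classifier = pipeline("audio-classification", model="hhsavich/accent_determinator")
print(accent_classifier("speaker_clip.wav"))  # list of {label, score} dicts
```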
## Out-of-Scope Use
The model was trained on speakers reciting pre-chosen sentences, thus it does not reflect any knowledge of lexical differences between dialects.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
# Training Details
## Training Data
OpenSLR 71,72,73,74,75,76
## Training Procedure
### Preprocessing
Data was train-test split on speakers, so as to prevent the model from achieving high test accuracy by matching individual voices.
### Speeds, Sizes, Times
Trained on ~3000 5-second audio clips. Training is lightweight, taking < 1 hr using Google Colaboratory Premium GPUs.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
OpenSLR 71,72,73,74,75,76
https://huggingface.co/datasets/openslr
### Factors
Audio quality - the training and testing data were of higher quality than can be expected from found audio
### Metrics
Accuracy
## Results
~85% depending on random train-test split
# Model Examination
Even splitting on speakers, our model achieves excellent accuracy on the testing set. This is interesting because it indicates that accent classification, at least at this granularity, is an easier task than voice identification, which could have just as easily met the training objective.
The confusion matrix shows that Basque is the most easily distinguished, which should be expected as it is the only language that isn't Spanish. Puerto Rican was the hardest to identify in the testing set, but I think this has more to do with Puerto Rico having the least data than with the accent itself.
I think that if a dataset of this same size but with more speakers were used for the same experiment (and so less fitting on individual voices), we could expect near-perfect accuracy.
# Technical Specifications
## Model Architecture and Objective
Wav2Vec2
## Compute Infrastructure
Google Colaboratory Pro+
### Hardware
Google Colaboratory Pro+ Premium GPUS
### Software
Pytorch via huggingface
# Model Card Authors
Henry Savich
# Model Card Contact
[email protected]
|
abesmon/LunarLander-v2 | abesmon | 2022-12-07T18:09:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T18:09:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.78 +/- 37.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Alexao/whisper-tiny-swe | Alexao | 2022-12-07T17:48:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"swe",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-07T17:45:23Z | ---
language:
- swe
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Tiny swe - Swedish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny swe - Swedish
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
magleb/ppo-LunarLander-v2-3.1mil | magleb | 2022-12-07T17:42:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T17:41:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.37 +/- 16.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
reaverlee/xlm-roberta-base-finetuned-panx-all | reaverlee | 2022-12-07T17:29:54Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T17:13:07Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- F1: 0.8532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2999 | 1.0 | 835 | 0.1961 | 0.8018 |
| 0.1565 | 2.0 | 1670 | 0.1772 | 0.8465 |
| 0.0998 | 3.0 | 2505 | 0.1750 | 0.8532 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.12.1
|
Alexao/whisper-tiny-hi | Alexao | 2022-12-07T17:22:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"swe",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-07T13:28:29Z | ---
language:
- swe
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Tiny Hi - Swedish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Hi - Swedish
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
TUMxudashuai/testpyramidsrnd | TUMxudashuai | 2022-12-07T17:20:26Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2022-12-07T17:20:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: TUMxudashuai/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kennethgoodman/ppo-Taxi-v3 | kennethgoodman | 2022-12-07T17:08:52Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"Taxi-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T17:08:41Z | ---
library_name: stable-baselines3
tags:
- Taxi-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: -200.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **Taxi-v3**
This is a trained model of a **PPO** agent playing **Taxi-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bayartsogt/whisper-small-mn-2 | bayartsogt | 2022-12-07T17:07:09Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hf-asr-leaderboard",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T04:01:43Z | ---
license: apache-2.0
tags:
- whisper-event
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-small-mn-2-bayartsogt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: mn
split: test
args:
language: mn
metrics:
- name: Wer
type: wer
value: 40.87830456630981
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-mn-2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7259
- Wer: 40.8783
- Cer: 13.9617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 15000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0839 | 4.26 | 1000 | 0.4647 | 45.7286 | 16.0020 |
| 0.0093 | 8.51 | 2000 | 0.5434 | 43.9753 | 15.2446 |
| 0.0044 | 12.77 | 3000 | 0.6009 | 43.6257 | 15.1717 |
| 0.0029 | 17.02 | 4000 | 0.6166 | 43.0031 | 14.7578 |
| 0.002 | 21.28 | 5000 | 0.6390 | 42.6098 | 14.7286 |
| 0.001 | 25.53 | 6000 | 0.6558 | 41.7468 | 14.3516 |
| 0.0021 | 29.79 | 7000 | 0.6714 | 42.3039 | 14.4589 |
| 0.0003 | 34.04 | 8000 | 0.6791 | 41.0586 | 13.9506 |
| 0.0001 | 38.3 | 9000 | 0.6949 | 41.3808 | 14.1670 |
| 0.0013 | 42.55 | 10000 | 0.6875 | 41.4682 | 14.2983 |
| 0.0001 | 46.81 | 11000 | 0.6937 | 40.9165 | 13.9549 |
| 0.0001 | 51.06 | 12000 | 0.7092 | 40.9275 | 13.9549 |
| 0.0 | 55.32 | 13000 | 0.7190 | 40.9657 | 13.9703 |
| 0.0 | 59.57 | 14000 | 0.7259 | 40.8783 | 13.9617 |
| 0.0 | 63.83 | 15000 | 0.7292 | 40.8838 | 13.9274 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
bayartsogt/whisper-medium-mn-4 | bayartsogt | 2022-12-07T17:06:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hf-asr-leaderboard",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T14:32:16Z | ---
license: apache-2.0
tags:
- whisper-event
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-medium-mn-4-bayartsogt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: mn
split: test
args:
language: mn
metrics:
- name: Wer
type: wer
value: 33.029276818876994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-mn-4
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6015
- Wer: 33.0293
- Cer: 10.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 15000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0362 | 4.26 | 1000 | 0.4204 | 40.2720 | 13.8389 |
| 0.0087 | 8.51 | 2000 | 0.4712 | 37.4918 | 12.9175 |
| 0.0044 | 12.77 | 3000 | 0.4893 | 36.3393 | 12.4727 |
| 0.0033 | 17.02 | 4000 | 0.5159 | 35.8423 | 12.2933 |
| 0.0017 | 21.28 | 5000 | 0.5183 | 35.2797 | 12.1104 |
| 0.0016 | 25.53 | 6000 | 0.5422 | 35.4326 | 11.7454 |
| 0.0011 | 29.79 | 7000 | 0.5361 | 34.5314 | 11.5196 |
| 0.0004 | 34.04 | 8000 | 0.5406 | 34.0998 | 11.3650 |
| 0.0006 | 38.3 | 9000 | 0.5540 | 33.8650 | 11.2912 |
| 0.0002 | 42.55 | 10000 | 0.5748 | 34.0889 | 11.5333 |
| 0.0003 | 46.81 | 11000 | 0.5771 | 34.5641 | 11.4895 |
| 0.0 | 51.06 | 12000 | 0.5809 | 33.4335 | 11.2070 |
| 0.0 | 55.32 | 13000 | 0.5941 | 33.2095 | 11.0009 |
| 0.0 | 59.57 | 14000 | 0.6015 | 33.0293 | 10.9236 |
| 0.0 | 63.83 | 15000 | 0.6045 | 33.0347 | 10.9125 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
bayartsogt/whisper-small-mn-3 | bayartsogt | 2022-12-07T17:05:33Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hf-asr-leaderboard",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"dataset:bayartsogt/ulaanbal-v0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T06:21:22Z | ---
license: apache-2.0
tags:
- whisper-event
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
- bayartsogt/ulaanbal-v0
metrics:
- wer
model-index:
- name: whisper-small-mn-3-bayartsogt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: mn
split: test
args:
language: mn
metrics:
- name: Wer
type: wer
value: 30.36923749180686
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-mn-3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3277
- Wer: 30.3692
- Cer: 10.9030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 15000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.3408 | 0.61 | 1000 | 0.4062 | 47.6841 | 17.3811 |
| 0.2261 | 1.22 | 2000 | 0.3262 | 37.8086 | 13.6466 |
| 0.2135 | 1.83 | 3000 | 0.2863 | 33.7175 | 12.2246 |
| 0.1643 | 2.43 | 4000 | 0.2803 | 32.5978 | 11.4526 |
| 0.1198 | 3.04 | 5000 | 0.2747 | 31.1121 | 11.0533 |
| 0.1279 | 3.65 | 6000 | 0.2757 | 30.7243 | 10.8927 |
| 0.0891 | 4.26 | 7000 | 0.2878 | 30.9209 | 11.0610 |
| 0.0899 | 4.87 | 8000 | 0.2906 | 30.6642 | 11.0799 |
| 0.0648 | 5.48 | 9000 | 0.3054 | 30.5986 | 10.9030 |
| 0.0436 | 6.09 | 10000 | 0.3184 | 30.5222 | 10.9434 |
| 0.0468 | 6.7 | 11000 | 0.3277 | 30.3692 | 10.9030 |
| 0.0291 | 7.3 | 12000 | 0.3411 | 30.9810 | 11.1572 |
| 0.0275 | 7.91 | 13000 | 0.3476 | 31.0684 | 11.1555 |
| 0.0196 | 8.52 | 14000 | 0.3572 | 30.9154 | 11.1065 |
| 0.0159 | 9.13 | 15000 | 0.3600 | 31.0356 | 11.2087 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
rchan26/dit_base_binary_task | rchan26 | 2022-12-07T16:59:18Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-07T16:55:55Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dit_base_binary_task
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit_base_binary_task
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the davanstrien/leicester_loaded_annotations_binary dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0513
- Accuracy: 0.9873
- F1: 0.9600
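A hedged inference sketch using the image-classification pipeline; the image path is illustrative, and it assumes the repository includes the image processor config and the two task labels.
```python
from transformers import pipeline

# "page_scan.png" is a placeholder for a document image to classify.
classifier = pipeline("image-classification", model="rchan26/dit_base_binary_task")
print(classifier("page_scan.png"))
```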
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.87 | 5 | 0.6816 | 0.5 | 0.2476 |
| 0.7387 | 1.87 | 10 | 0.5142 | 0.8354 | 0.0 |
| 0.7387 | 2.87 | 15 | 0.4690 | 0.8354 | 0.0 |
| 0.4219 | 3.87 | 20 | 0.5460 | 0.8354 | 0.0 |
| 0.4219 | 4.87 | 25 | 0.4703 | 0.8354 | 0.0 |
| 0.3734 | 5.87 | 30 | 0.4371 | 0.8354 | 0.0 |
| 0.3734 | 6.87 | 35 | 0.4147 | 0.8354 | 0.0 |
| 0.3261 | 7.87 | 40 | 0.4272 | 0.8354 | 0.0 |
| 0.3261 | 8.87 | 45 | 0.4038 | 0.8354 | 0.0 |
| 0.3078 | 9.87 | 50 | 0.3418 | 0.8354 | 0.0 |
| 0.3078 | 10.87 | 55 | 0.3042 | 0.8354 | 0.0 |
| 0.2501 | 11.87 | 60 | 0.2799 | 0.8354 | 0.0 |
| 0.2501 | 12.87 | 65 | 0.1419 | 0.9367 | 0.7619 |
| 0.1987 | 13.87 | 70 | 0.1224 | 0.9494 | 0.8182 |
| 0.1987 | 14.87 | 75 | 0.0749 | 0.9747 | 0.9167 |
| 0.1391 | 15.87 | 80 | 0.0539 | 0.9810 | 0.9412 |
| 0.1391 | 16.87 | 85 | 0.0830 | 0.9873 | 0.9600 |
| 0.1085 | 17.87 | 90 | 0.0443 | 0.9873 | 0.9600 |
| 0.1085 | 18.87 | 95 | 0.0258 | 0.9937 | 0.9804 |
| 0.1039 | 19.87 | 100 | 0.1025 | 0.9684 | 0.8936 |
| 0.1039 | 20.87 | 105 | 0.1597 | 0.9684 | 0.8936 |
| 0.1217 | 21.87 | 110 | 0.0278 | 0.9937 | 0.9811 |
| 0.1217 | 22.87 | 115 | 0.0458 | 0.9873 | 0.9600 |
| 0.0609 | 23.87 | 120 | 0.0478 | 0.9937 | 0.9804 |
| 0.0609 | 24.87 | 125 | 0.0671 | 0.9747 | 0.9231 |
| 0.1031 | 25.87 | 130 | 0.0751 | 0.9873 | 0.9600 |
| 0.1031 | 26.87 | 135 | 0.1963 | 0.9557 | 0.8444 |
| 0.0601 | 27.87 | 140 | 0.0870 | 0.9747 | 0.9167 |
| 0.0601 | 28.87 | 145 | 0.0890 | 0.9747 | 0.9167 |
| 0.0799 | 29.87 | 150 | 0.1017 | 0.9747 | 0.9167 |
| 0.0799 | 30.87 | 155 | 0.0041 | 1.0 | 1.0 |
| 0.0441 | 31.87 | 160 | 0.0332 | 0.9873 | 0.9615 |
| 0.0441 | 32.87 | 165 | 0.0839 | 0.9747 | 0.9167 |
| 0.0757 | 33.87 | 170 | 0.0722 | 0.9873 | 0.9600 |
| 0.0757 | 34.87 | 175 | 0.0168 | 0.9937 | 0.9804 |
| 0.0555 | 35.87 | 180 | 0.0443 | 0.9937 | 0.9804 |
| 0.0555 | 36.87 | 185 | 0.0227 | 0.9873 | 0.9615 |
| 0.0336 | 37.87 | 190 | 0.0128 | 0.9937 | 0.9804 |
| 0.0336 | 38.87 | 195 | 0.0169 | 0.9937 | 0.9811 |
| 0.0405 | 39.87 | 200 | 0.0193 | 0.9937 | 0.9804 |
| 0.0405 | 40.87 | 205 | 0.1216 | 0.9810 | 0.9388 |
| 0.0578 | 41.87 | 210 | 0.0307 | 0.9937 | 0.9804 |
| 0.0578 | 42.87 | 215 | 0.0539 | 0.9873 | 0.9600 |
| 0.0338 | 43.87 | 220 | 0.0573 | 0.9937 | 0.9804 |
| 0.0338 | 44.87 | 225 | 0.0086 | 1.0 | 1.0 |
| 0.0417 | 45.87 | 230 | 0.0491 | 0.9873 | 0.9600 |
| 0.0417 | 46.87 | 235 | 0.0089 | 1.0 | 1.0 |
| 0.0538 | 47.87 | 240 | 0.0846 | 0.9810 | 0.9388 |
| 0.0538 | 48.87 | 245 | 0.0452 | 0.9810 | 0.9388 |
| 0.0364 | 49.87 | 250 | 0.0513 | 0.9873 | 0.9600 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.1
|
reaverlee/xlm-roberta-base-finetuned-panx-it | reaverlee | 2022-12-07T16:59:17Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T16:45:39Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: train
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8286066584463625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2514
- F1: 0.8286
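A hedged usage sketch with the token-classification pipeline; the example sentence is illustrative, and PAN-X-style PER/ORG/LOC entities are assumed.
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="reaverlee/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Giovanni vive a Roma e lavora per la FAO."))
```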
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8346 | 1.0 | 70 | 0.3343 | 0.7262 |
| 0.308 | 2.0 | 140 | 0.2860 | 0.7951 |
| 0.1967 | 3.0 | 210 | 0.2514 | 0.8286 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.12.1
|
gemasphi/real-setfit-ss-distiluse-base-multilingual-cased-v1 | gemasphi | 2022-12-07T16:58:59Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-07T16:58:41Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 650 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 650,
"warmup_steps": 65,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
npayaresc/lilt-en-funsd | npayaresc | 2022-12-07T16:58:48Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T15:27:38Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7699
- Answer: {'precision': 0.8906439854191981, 'recall': 0.8971848225214198, 'f1': 0.8939024390243904, 'number': 817}
- Header: {'precision': 0.6274509803921569, 'recall': 0.5378151260504201, 'f1': 0.579185520361991, 'number': 119}
- Question: {'precision': 0.8778359511343804, 'recall': 0.9340761374187558, 'f1': 0.9050832208726944, 'number': 1077}
- Overall Precision: 0.8706
- Overall Recall: 0.8957
- Overall F1: 0.8830
- Overall Accuracy: 0.7973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4312 | 10.53 | 200 | 0.9853 | {'precision': 0.8581818181818182, 'recall': 0.8665850673194615, 'f1': 0.8623629719853837, 'number': 817} | {'precision': 0.5625, 'recall': 0.5294117647058824, 'f1': 0.5454545454545455, 'number': 119} | {'precision': 0.8788706739526412, 'recall': 0.8960074280408542, 'f1': 0.8873563218390804, 'number': 1077} | 0.8531 | 0.8624 | 0.8577 | 0.8172 |
| 0.0478 | 21.05 | 400 | 1.2825 | {'precision': 0.8571428571428571, 'recall': 0.9033047735618115, 'f1': 0.8796185935637664, 'number': 817} | {'precision': 0.5136986301369864, 'recall': 0.6302521008403361, 'f1': 0.5660377358490567, 'number': 119} | {'precision': 0.8739650413983441, 'recall': 0.8820798514391829, 'f1': 0.878003696857671, 'number': 1077} | 0.8419 | 0.8758 | 0.8585 | 0.8026 |
| 0.0127 | 31.58 | 600 | 1.4791 | {'precision': 0.8568075117370892, 'recall': 0.8935128518971848, 'f1': 0.8747753145596165, 'number': 817} | {'precision': 0.5779816513761468, 'recall': 0.5294117647058824, 'f1': 0.5526315789473684, 'number': 119} | {'precision': 0.8909426987060998, 'recall': 0.8950789229340761, 'f1': 0.8930060213061601, 'number': 1077} | 0.8600 | 0.8728 | 0.8664 | 0.7957 |
| 0.0073 | 42.11 | 800 | 1.3846 | {'precision': 0.8853046594982079, 'recall': 0.9069767441860465, 'f1': 0.8960096735187424, 'number': 817} | {'precision': 0.5333333333333333, 'recall': 0.6050420168067226, 'f1': 0.5669291338582677, 'number': 119} | {'precision': 0.8932584269662921, 'recall': 0.8857938718662952, 'f1': 0.8895104895104896, 'number': 1077} | 0.8662 | 0.8778 | 0.8719 | 0.8142 |
| 0.0023 | 52.63 | 1000 | 1.5955 | {'precision': 0.8430034129692833, 'recall': 0.9069767441860465, 'f1': 0.8738207547169811, 'number': 817} | {'precision': 0.6190476190476191, 'recall': 0.5462184873949579, 'f1': 0.5803571428571429, 'number': 119} | {'precision': 0.8935574229691877, 'recall': 0.8885793871866295, 'f1': 0.8910614525139665, 'number': 1077} | 0.8579 | 0.8758 | 0.8668 | 0.7992 |
| 0.0023 | 63.16 | 1200 | 1.6214 | {'precision': 0.8955773955773956, 'recall': 0.8922888616891065, 'f1': 0.8939301042305334, 'number': 817} | {'precision': 0.5882352941176471, 'recall': 0.5882352941176471, 'f1': 0.5882352941176471, 'number': 119} | {'precision': 0.8841354723707665, 'recall': 0.9210770659238626, 'f1': 0.9022282855843565, 'number': 1077} | 0.8715 | 0.8897 | 0.8805 | 0.8057 |
| 0.0016 | 73.68 | 1400 | 1.8002 | {'precision': 0.8732394366197183, 'recall': 0.9106487148102815, 'f1': 0.8915518274415818, 'number': 817} | {'precision': 0.5765765765765766, 'recall': 0.5378151260504201, 'f1': 0.5565217391304348, 'number': 119} | {'precision': 0.8892921960072595, 'recall': 0.9099350046425255, 'f1': 0.8994951812758146, 'number': 1077} | 0.8659 | 0.8882 | 0.8769 | 0.7860 |
| 0.0013 | 84.21 | 1600 | 1.7699 | {'precision': 0.8906439854191981, 'recall': 0.8971848225214198, 'f1': 0.8939024390243904, 'number': 817} | {'precision': 0.6274509803921569, 'recall': 0.5378151260504201, 'f1': 0.579185520361991, 'number': 119} | {'precision': 0.8778359511343804, 'recall': 0.9340761374187558, 'f1': 0.9050832208726944, 'number': 1077} | 0.8706 | 0.8957 | 0.8830 | 0.7973 |
| 0.0008 | 94.74 | 1800 | 1.7824 | {'precision': 0.8733572281959379, 'recall': 0.8947368421052632, 'f1': 0.8839177750906893, 'number': 817} | {'precision': 0.616822429906542, 'recall': 0.5546218487394958, 'f1': 0.5840707964601769, 'number': 119} | {'precision': 0.8901996370235935, 'recall': 0.9108635097493036, 'f1': 0.9004130335016063, 'number': 1077} | 0.8690 | 0.8833 | 0.8761 | 0.8019 |
| 0.0005 | 105.26 | 2000 | 1.7894 | {'precision': 0.872791519434629, 'recall': 0.9069767441860465, 'f1': 0.8895558223289316, 'number': 817} | {'precision': 0.6036036036036037, 'recall': 0.5630252100840336, 'f1': 0.582608695652174, 'number': 119} | {'precision': 0.8931506849315068, 'recall': 0.9080779944289693, 'f1': 0.9005524861878452, 'number': 1077} | 0.8691 | 0.8872 | 0.8781 | 0.7940 |
| 0.0002 | 115.79 | 2200 | 1.8409 | {'precision': 0.8665893271461717, 'recall': 0.9143206854345165, 'f1': 0.8898153662894581, 'number': 817} | {'precision': 0.6296296296296297, 'recall': 0.5714285714285714, 'f1': 0.5991189427312775, 'number': 119} | {'precision': 0.8978644382544104, 'recall': 0.8978644382544104, 'f1': 0.8978644382544104, 'number': 1077} | 0.8705 | 0.8852 | 0.8778 | 0.7982 |
| 0.0002 | 126.32 | 2400 | 1.8311 | {'precision': 0.8709302325581395, 'recall': 0.9167686658506732, 'f1': 0.8932617769827073, 'number': 817} | {'precision': 0.6018518518518519, 'recall': 0.5462184873949579, 'f1': 0.5726872246696034, 'number': 119} | {'precision': 0.893953488372093, 'recall': 0.8922934076137419, 'f1': 0.8931226765799257, 'number': 1077} | 0.8688 | 0.8818 | 0.8752 | 0.7988 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kennethgoodman/ppo-FrozenLake-v1 | kennethgoodman | 2022-12-07T16:53:08Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"FrozenLake-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T16:06:25Z | ---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
metrics:
- type: mean_reward
value: 0.20 +/- 0.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="kennethgoodman/ppo-FrozenLake-v1", filename="ppo-FrozenLake-v1.zip")
model = PPO.load(checkpoint)
```
|
reaverlee/xlm-roberta-base-finetuned-panx-fr | reaverlee | 2022-12-07T16:45:28Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T16:31:26Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8350428787624012
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2761
- F1: 0.8350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5826 | 1.0 | 191 | 0.3409 | 0.7713 |
| 0.2674 | 2.0 | 382 | 0.2889 | 0.8314 |
| 0.1738 | 3.0 | 573 | 0.2761 | 0.8350 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.12.1
|
enniorampello/whisper-small-hi | enniorampello | 2022-12-07T16:42:22Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-03T15:58:00Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Swedish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 19.647226479524615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3953
- Wer: 19.6472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1331 | 1.29 | 1000 | 0.3014 | 22.3602 |
| 0.0537 | 2.59 | 2000 | 0.2988 | 20.8572 |
| 0.0217 | 3.88 | 3000 | 0.3093 | 20.5641 |
| 0.004 | 5.17 | 4000 | 0.3551 | 20.0479 |
| 0.0015 | 6.47 | 5000 | 0.3701 | 20.0022 |
| 0.0015 | 7.76 | 6000 | 0.3769 | 19.7386 |
| 0.0007 | 9.06 | 7000 | 0.3908 | 19.7010 |
| 0.0006 | 10.35 | 8000 | 0.3953 | 19.6472 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kalisia/whisper-small-tonga_5hrs | kalisia | 2022-12-07T16:41:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T17:21:33Z | ---
widget:
- src: https://huggingface.co/datasets/kalisia/TongaASR_Space_Examples/blob/main/220929-200958_toi_97d_elicit_17.wav
example_title: Tonga Speech Sample 1
- example_title: toi sample 1
src: https://huggingface.co/datasets/kalisia/TongaASR_Space_Examples/blob/main/220929-200958_toi_97d_elicit_17.wav
model-index:
- name: whisper-small-tonga_5hrs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Tonga
type: tongaspeech_asr
config: clean
split: test
args:
language: toi
metrics:
- name: Test WER
type: wer
value: 52.59
license: apache-2.0
tags:
- automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-tonga_5hrs
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9145
- Wer: 52.2928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.3353 | 1.45 | 200 | 1.9984 | 113.0627 |
| 1.7712 | 2.9 | 400 | 1.2576 | 72.0656 |
| 1.1476 | 4.35 | 600 | 1.0129 | 59.8233 |
| 1.004 | 5.79 | 800 | 0.9406 | 53.2183 |
| 0.9169 | 7.25 | 1000 | 0.9145 | 52.2928 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
reaverlee/xlm-roberta-base-finetuned-panx-de-fr | reaverlee | 2022-12-07T16:30:58Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T16:14:27Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1621
- F1: 0.8552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2898 | 1.0 | 715 | 0.1830 | 0.8332 |
| 0.1479 | 2.0 | 1430 | 0.1576 | 0.8496 |
| 0.0952 | 3.0 | 2145 | 0.1621 | 0.8552 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.12.1
|
sachinshinde/sentiment-model-imdb-small-demo | sachinshinde | 2022-12-07T16:22:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T16:09:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: sentiment-model-imdb-small-demo
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-imdb-small-demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6147
- Accuracy: 0.8667
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
MontaR/ppo-LunarLander-v2-TEST | MontaR | 2022-12-07T16:10:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T17:42:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.73 +/- 12.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="MontaR/ppo-LunarLander-v2-TEST", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
midon/dsdsdsd | midon | 2022-12-07T16:03:32Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-07T16:03:32Z | ---
license: bigscience-openrail-m
---
|
motmono/output | motmono | 2022-12-07T16:00:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] | null | 2022-12-07T15:58:19Z | ---
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model was trained on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 120
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CWhy/q-FrozenLake-v1-4x4-noSlippery | CWhy | 2022-12-07T15:42:59Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-29T04:37:18Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym
# load_from_hub and evaluate_agent are helper functions defined in the course notebook,
# not part of a published package; a possible load_from_hub implementation is sketched below.
model = load_from_hub(repo_id="CWhy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
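For reference, a possible implementation of the `load_from_hub` helper, assuming the model is stored as the pickled dictionary used above:
```python
import pickle
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```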
|
budbudbud/Holiday_Stop_Motion | budbudbud | 2022-12-07T15:39:27Z | 5 | 3 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:openrail++",
"region:us"
] | text-to-image | 2022-12-07T12:56:02Z | ---
license: openrail++
language:
- en
tags:
- stable-diffusion
- text-to-image
- diffusers
thumbnail: "https://huggingface.co/budbudbud/Holiday_Stop_Motion/resolve/main/Santa.png"
inference: false
---
### Holiday Stop Motion
This is a fine-tuned Stable Diffusion 1.5 model trained on classic Christmas stop-motion TV specials by Rankin/Bass.
Use the tokens
`rbsm style`
in your prompts for the effect.
Trained on Stability AI's 1.5 model at 768x768 resolution.
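If the repository hosts diffusers-format weights (an assumption; the card disables hosted inference, so it may only contain a checkpoint file), usage would look roughly like this:
```python
# Hedged sketch: assumes diffusers-format weights and a CUDA GPU are available.
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "budbudbud/Holiday_Stop_Motion", torch_dtype=torch.float16
).to("cuda")
image = pipe("rbsm style santa claus decorating a tree", height=768, width=768).images[0]
image.save("santa_rbsm.png")
```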
**Characters rendered with the model:**








This model was trained using TheLastBen's fast-DreamBooth Colab, with text_trainer_encoder set to 10 and 8000 training steps.
## License
This model is open access and available to all, with a CreativeML Open RAIL++-M License further specifying rights and usage.
[Please read the full license here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) |
kennethgoodman/ppo-CartPole-v1 | kennethgoodman | 2022-12-07T15:10:43Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T15:10:21Z | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="kennethgoodman/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
```
|
anuragshas/whisper-small-mr | anuragshas | 2022-12-07T15:07:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"mr",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-07T12:13:38Z | ---
language:
- mr
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Marathi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 mr
type: mozilla-foundation/common_voice_11_0
config: mr
split: test
args: mr
metrics:
- name: Wer
type: wer
value: 19.71
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Marathi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 mr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 19.71 |
zyoscovits/sd-class-butterflies-32 | zyoscovits | 2022-12-07T14:46:53Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-07T14:46:29Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('zyoscovits/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
shivkumarganesh/whisper-small-hi | shivkumarganesh | 2022-12-07T14:41:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T18:45:23Z | ---
language:
- hi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Shiv Kumar Ganesh
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 21.30001146394589
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Shiv Kumar Ganesh
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6273
- Wer: 21.3000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0036 | 14.01 | 1000 | 0.4864 | 21.9993 |
| 0.001 | 28.01 | 2000 | 0.5495 | 21.9592 |
| 0.0001 | 43.01 | 3000 | 0.5957 | 21.2026 |
| 0.0 | 57.01 | 4000 | 0.6168 | 21.4032 |
| 0.0 | 72.01 | 5000 | 0.6273 | 21.3000 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ljh1/bert-base-uncased-finetuned-conll2003 | ljh1 | 2022-12-07T14:33:56Z | 16 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T14:27:56Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
datasets:
- conll2003
model-index:
- name: ljh1/bert-base-uncased-finetuned-conll2003
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ljh1/bert-base-uncased-finetuned-conll2003
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0867
- Validation Loss: 0.0477
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': 1.0, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1755, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0867 | 0.0477 | 0 |
### Framework versions
- Transformers 4.26.0.dev0
- TensorFlow 2.11.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
mistapproach/ppo-LunarLander-v2 | mistapproach | 2022-12-07T14:33:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T13:43:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.28 +/- 18.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="mistapproach/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NathanaelM/ppo_lunar_lander_v2 | NathanaelM | 2022-12-07T14:26:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T14:26:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.18 +/- 24.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="NathanaelM/ppo_lunar_lander_v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tim-binding/ppo-Huggy | tim-binding | 2022-12-07T14:19:41Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-07T14:19:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: tim-binding/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
manirai91/enlm-roberta-130 | manirai91 | 2022-12-07T14:00:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-01T12:02:16Z | ---
tags:
- generated_from_trainer
model-index:
- name: enlm-roberta-130
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-130
This model is a fine-tuned version of [manirai91/enlm-roberta-final](https://huggingface.co/manirai91/enlm-roberta-final) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 8192
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5183 | 0.34 | 160 | 1.4159 |
| 1.5188 | 0.69 | 320 | 1.4158 |
| 1.5205 | 1.03 | 480 | 1.4153 |
| 1.5213 | 1.37 | 640 | 1.4162 |
| 1.5195 | 1.72 | 800 | 1.4168 |
| 1.5194 | 2.06 | 960 | 1.4150 |
| 1.5182 | 2.4 | 1120 | 1.4142 |
| 1.5182 | 2.75 | 1280 | 1.4131 |
| 1.5177 | 3.09 | 1440 | 1.4167 |
| 1.5201 | 3.43 | 1600 | 1.4156 |
| 1.5173 | 3.78 | 1760 | 1.4111 |
| 1.52 | 4.12 | 1920 | 1.4117 |
| 1.5184 | 4.46 | 2080 | 1.4151 |
| 1.5198 | 4.81 | 2240 | 1.4097 |
| 1.5202 | 5.15 | 2400 | 1.4162 |
| 1.5166 | 5.49 | 2560 | 1.4130 |
| 1.5184 | 5.84 | 2720 | 1.4139 |
| 1.5174 | 6.18 | 2880 | 1.4128 |
| 1.5161 | 6.52 | 3040 | 1.4126 |
| 1.5175 | 6.87 | 3200 | 1.4095 |
| 1.5169 | 7.21 | 3360 | 1.4118 |
| 1.516 | 7.55 | 3520 | 1.4113 |
| 1.5182 | 7.9 | 3680 | 1.4097 |
| 1.5195 | 8.24 | 3840 | 1.4118 |
| 1.5187 | 8.26 | 4000 | 1.4119 |
| 1.5149 | 8.6 | 4160 | 1.4133 |
| 1.5183 | 8.94 | 4320 | 1.4097 |
| 1.5192 | 9.29 | 4480 | 1.4101 |
| 1.5191 | 9.63 | 4640 | 1.4146 |
| 1.5192 | 9.97 | 4800 | 1.4165 |
| 1.5164 | 10.32 | 4960 | 1.4119 |
| 1.5235 | 10.66 | 5120 | 1.4089 |
| 1.6571 | 11.0 | 5280 | 1.4121 |
| 1.5184 | 11.35 | 5440 | 1.4102 |
| 1.5185 | 11.69 | 5600 | 1.4111 |
| 1.5172 | 12.03 | 5760 | 1.4142 |
| 1.5189 | 12.38 | 5920 | 1.4129 |
| 1.5147 | 12.72 | 6080 | 1.4089 |
| 1.5177 | 13.06 | 6240 | 1.4098 |
| 1.5164 | 13.41 | 6400 | 1.4097 |
| 1.5188 | 13.75 | 6560 | 1.4109 |
| 1.5158 | 14.09 | 6720 | 1.4134 |
| 1.5134 | 14.44 | 6880 | 1.4091 |
| 1.5167 | 14.78 | 7040 | 1.4089 |
| 1.5163 | 15.12 | 7200 | 1.4140 |
| 1.5172 | 15.47 | 7360 | 1.4083 |
| 1.5153 | 15.81 | 7520 | 1.4109 |
| 1.5164 | 16.15 | 7680 | 1.4093 |
| 1.5164 | 16.17 | 7840 | 1.4108 |
| 1.515 | 16.51 | 8000 | 1.4102 |
| 1.5164 | 16.86 | 8160 | 1.4090 |
| 1.5163 | 17.2 | 8320 | 1.4110 |
| 1.5142 | 17.54 | 8480 | 1.4122 |
| 1.5166 | 17.89 | 8640 | 1.4092 |
| 1.5172 | 18.23 | 8800 | 1.4058 |
| 1.5153 | 18.57 | 8960 | 1.4112 |
| 1.517 | 18.92 | 9120 | 1.4098 |
| 1.5163 | 19.26 | 9280 | 1.4113 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AliMMZ/ppo-LunarLander-v2 | AliMMZ | 2022-12-07T14:00:14Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T13:59:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.26 +/- 23.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="AliMMZ/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Dharkelf/Dharkelf_model_u1 | Dharkelf | 2022-12-07T13:59:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T13:32:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 288.24 +/- 20.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="Dharkelf/Dharkelf_model_u1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
feabries/testpyramidsrnd | feabries | 2022-12-07T13:56:19Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2022-12-07T13:56:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: feabries/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Manbearpig01/whisper-small-hi | Manbearpig01 | 2022-12-07T13:53:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-02T10:44:42Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Swedish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: test
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 19.684869995429004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3275
- Wer: 19.6849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1378 | 1.29 | 1000 | 0.2953 | 21.4165 |
| 0.0475 | 2.59 | 2000 | 0.2913 | 20.3275 |
| 0.0187 | 3.88 | 3000 | 0.3026 | 19.9000 |
| 0.0043 | 5.17 | 4000 | 0.3275 | 19.6849 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
garnagar/whisper-ft-libri-en | garnagar | 2022-12-07T13:45:29Z | 29 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-04T16:30:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- librispeech_asr
metrics:
- wer
model-index:
- name: whisper-ft-libri-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: librispeech_asr
type: librispeech_asr
config: clean
split: test
args: clean
metrics:
- name: Wer
type: wer
value: 31.616341030195382
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-ft-libri-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8069
- Wer: 31.6163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.740176574997311e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.1717 | 0.38 | 5 | 2.1709 | 98.0462 |
| 1.2371 | 0.77 | 10 | 1.2719 | 79.9290 |
| 0.7577 | 1.15 | 15 | 1.0510 | 35.3464 |
| 0.5325 | 1.54 | 20 | 0.9475 | 32.6821 |
| 0.5545 | 1.92 | 25 | 0.8607 | 30.3730 |
| 0.2957 | 2.31 | 30 | 0.8051 | 33.3925 |
| 0.1846 | 2.69 | 35 | 0.7487 | 30.1954 |
| 0.0748 | 3.08 | 40 | 0.6882 | 32.1492 |
| 0.0709 | 3.46 | 45 | 0.6692 | 31.2611 |
| 0.0908 | 3.85 | 50 | 0.6465 | 29.4849 |
| 0.0764 | 4.23 | 55 | 0.6578 | 28.9520 |
| 0.0259 | 4.62 | 60 | 0.6637 | 30.0178 |
| 0.0178 | 5.0 | 65 | 0.6955 | 30.3730 |
| 0.0131 | 5.38 | 70 | 0.6869 | 33.2149 |
| 0.0162 | 5.77 | 75 | 0.7000 | 32.3268 |
| 0.0081 | 6.15 | 80 | 0.6814 | 32.3268 |
| 0.0075 | 6.54 | 85 | 0.6897 | 31.0835 |
| 0.0069 | 6.92 | 90 | 0.7151 | 32.6821 |
| 0.0062 | 7.31 | 95 | 0.7181 | 30.3730 |
| 0.0056 | 7.69 | 100 | 0.7173 | 30.0178 |
| 0.0052 | 8.08 | 105 | 0.7411 | 31.9716 |
| 0.0073 | 8.46 | 110 | 0.7526 | 32.5044 |
| 0.0061 | 8.85 | 115 | 0.7467 | 32.8597 |
| 0.0034 | 9.23 | 120 | 0.7314 | 31.7940 |
| 0.0122 | 9.62 | 125 | 0.7276 | 31.7940 |
| 0.0429 | 10.0 | 130 | 0.7417 | 32.5044 |
| 0.0032 | 10.38 | 135 | 0.7555 | 31.9716 |
| 0.0141 | 10.77 | 140 | 0.7636 | 31.2611 |
| 0.0038 | 11.15 | 145 | 0.7607 | 31.9716 |
| 0.0038 | 11.54 | 150 | 0.7716 | 33.0373 |
| 0.0035 | 11.92 | 155 | 0.7985 | 34.2806 |
| 0.0038 | 12.31 | 160 | 0.7797 | 32.1492 |
| 0.0036 | 12.69 | 165 | 0.7767 | 31.4387 |
| 0.0022 | 13.08 | 170 | 0.7830 | 31.7940 |
| 0.0033 | 13.46 | 175 | 0.7992 | 30.7282 |
| 0.0019 | 13.85 | 180 | 0.7541 | 30.0178 |
| 0.0016 | 14.23 | 185 | 0.7587 | 30.0178 |
| 0.0027 | 14.62 | 190 | 0.7766 | 30.3730 |
| 0.0016 | 15.0 | 195 | 0.8056 | 32.8597 |
| 0.0015 | 15.38 | 200 | 0.8096 | 32.5044 |
| 0.0012 | 15.77 | 205 | 0.7931 | 32.6821 |
| 0.001 | 16.15 | 210 | 0.7829 | 31.6163 |
| 0.0045 | 16.54 | 215 | 0.7774 | 30.9059 |
| 0.0009 | 16.92 | 220 | 0.7750 | 30.1954 |
| 0.0009 | 17.31 | 225 | 0.7780 | 28.9520 |
| 0.0008 | 17.69 | 230 | 0.7803 | 29.1297 |
| 0.0007 | 18.08 | 235 | 0.7807 | 29.6625 |
| 0.0025 | 18.46 | 240 | 0.7813 | 30.1954 |
| 0.0007 | 18.85 | 245 | 0.7840 | 30.0178 |
| 0.0006 | 19.23 | 250 | 0.7860 | 30.0178 |
| 0.0007 | 19.62 | 255 | 0.7839 | 30.1954 |
| 0.0005 | 20.0 | 260 | 0.7834 | 30.1954 |
| 0.0006 | 20.38 | 265 | 0.7844 | 30.3730 |
| 0.0102 | 20.77 | 270 | 0.7859 | 30.7282 |
| 0.0006 | 21.15 | 275 | 0.7901 | 30.7282 |
| 0.0006 | 21.54 | 280 | 0.7950 | 30.7282 |
| 0.0006 | 21.92 | 285 | 0.7975 | 31.0835 |
| 0.0006 | 22.31 | 290 | 0.7984 | 30.7282 |
| 0.0006 | 22.69 | 295 | 0.7954 | 30.3730 |
| 0.0005 | 23.08 | 300 | 0.7935 | 31.0835 |
| 0.0005 | 23.46 | 305 | 0.7928 | 31.0835 |
| 0.0005 | 23.85 | 310 | 0.7933 | 31.2611 |
| 0.0038 | 24.23 | 315 | 0.7950 | 30.9059 |
| 0.0005 | 24.62 | 320 | 0.7976 | 31.6163 |
| 0.0004 | 25.0 | 325 | 0.7995 | 31.7940 |
| 0.0004 | 25.38 | 330 | 0.8006 | 31.4387 |
| 0.0004 | 25.77 | 335 | 0.8005 | 31.6163 |
| 0.0005 | 26.15 | 340 | 0.8011 | 31.4387 |
| 0.0004 | 26.54 | 345 | 0.8020 | 31.6163 |
| 0.0004 | 26.92 | 350 | 0.8024 | 31.4387 |
| 0.0017 | 27.31 | 355 | 0.8029 | 31.4387 |
| 0.0004 | 27.69 | 360 | 0.8035 | 31.4387 |
| 0.0004 | 28.08 | 365 | 0.8045 | 31.4387 |
| 0.0004 | 28.46 | 370 | 0.8049 | 31.4387 |
| 0.0004 | 28.85 | 375 | 0.8056 | 31.4387 |
| 0.0011 | 29.23 | 380 | 0.8060 | 31.4387 |
| 0.0004 | 29.62 | 385 | 0.8065 | 31.4387 |
| 0.0004 | 30.0 | 390 | 0.8065 | 31.4387 |
| 0.0004 | 30.38 | 395 | 0.8068 | 31.4387 |
| 0.0004 | 30.77 | 400 | 0.8069 | 31.6163 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
aalsinat/lunar_lander_first | aalsinat | 2022-12-07T13:40:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T12:13:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -417.41 +/- 345.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="aalsinat/lunar_lander_first", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GhifSmile/mt5-base-coba | GhifSmile | 2022-12-07T13:38:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-07T04:07:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-coba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-coba
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5870
- Rouge1: 0.4338
- Rouge2: 0.2876
- Rougel: 0.3743
- Rougelsum: 0.409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.0922 | 1.0 | 7452 | 0.6538 | 0.3566 | 0.239 | 0.3218 | 0.3348 |
| 0.9442 | 2.0 | 14904 | 0.6900 | 0.427 | 0.2868 | 0.3711 | 0.402 |
| 3.0789 | 3.0 | 22356 | 0.6775 | 0.3808 | 0.2584 | 0.3398 | 0.3567 |
| 1.0565 | 4.0 | 29808 | 0.5928 | 0.4348 | 0.2882 | 0.3756 | 0.4096 |
| 0.7872 | 5.0 | 37260 | 0.5870 | 0.4338 | 0.2876 | 0.3743 | 0.409 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.9.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ntinosmg/ppo-LunarLander-v2 | ntinosmg | 2022-12-07T13:30:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-26T11:30:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.90 +/- 12.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="ntinosmg/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rchan26/dit_base | rchan26 | 2022-12-07T13:09:28Z | 28 | 1 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-07T12:05:48Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit_base
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the davanstrien/leicester_loaded_annotations dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4527
- Accuracy: 0.8190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.89 | 6 | 1.7452 | 0.4095 |
| 1.8958 | 1.89 | 12 | 1.6185 | 0.4286 |
| 1.8958 | 2.89 | 18 | 1.4731 | 0.4857 |
| 1.8466 | 3.89 | 24 | 1.3459 | 0.5524 |
| 1.445 | 4.89 | 30 | 1.1766 | 0.5810 |
| 1.445 | 5.89 | 36 | 1.0902 | 0.6381 |
| 1.2077 | 6.89 | 42 | 0.9331 | 0.6762 |
| 1.2077 | 7.89 | 48 | 0.8431 | 0.6762 |
| 1.0254 | 8.89 | 54 | 0.8657 | 0.6857 |
| 0.8275 | 9.89 | 60 | 0.6801 | 0.7429 |
| 0.8275 | 10.89 | 66 | 0.6699 | 0.7810 |
| 0.8063 | 11.89 | 72 | 0.6296 | 0.7524 |
| 0.8063 | 12.89 | 78 | 0.5498 | 0.7905 |
| 0.7127 | 13.89 | 84 | 0.4974 | 0.8381 |
| 0.6356 | 14.89 | 90 | 0.6715 | 0.7619 |
| 0.6356 | 15.89 | 96 | 0.4602 | 0.8095 |
| 0.6438 | 16.89 | 102 | 0.4886 | 0.8095 |
| 0.6438 | 17.89 | 108 | 0.4332 | 0.8 |
| 0.5329 | 18.89 | 114 | 0.4197 | 0.8095 |
| 0.4932 | 19.89 | 120 | 0.4168 | 0.8190 |
| 0.4932 | 20.89 | 126 | 0.4691 | 0.8 |
| 0.4861 | 21.89 | 132 | 0.4263 | 0.8476 |
| 0.4861 | 22.89 | 138 | 0.4464 | 0.8190 |
| 0.4935 | 23.89 | 144 | 0.4857 | 0.7905 |
| 0.433 | 24.89 | 150 | 0.4873 | 0.7810 |
| 0.433 | 25.89 | 156 | 0.4641 | 0.8095 |
| 0.4289 | 26.89 | 162 | 0.5316 | 0.8 |
| 0.4289 | 27.89 | 168 | 0.3389 | 0.8571 |
| 0.4204 | 28.89 | 174 | 0.4272 | 0.8 |
| 0.3668 | 29.89 | 180 | 0.3493 | 0.8667 |
| 0.3668 | 30.89 | 186 | 0.3861 | 0.8571 |
| 0.4101 | 31.89 | 192 | 0.4216 | 0.8381 |
| 0.4101 | 32.89 | 198 | 0.4258 | 0.8190 |
| 0.3614 | 33.89 | 204 | 0.4409 | 0.8571 |
| 0.3267 | 34.89 | 210 | 0.4475 | 0.8190 |
| 0.3267 | 35.89 | 216 | 0.4316 | 0.8190 |
| 0.3423 | 36.89 | 222 | 0.4095 | 0.8381 |
| 0.3423 | 37.89 | 228 | 0.4671 | 0.8286 |
| 0.3325 | 38.89 | 234 | 0.3994 | 0.8286 |
| 0.3326 | 39.89 | 240 | 0.5004 | 0.8190 |
| 0.3326 | 40.89 | 246 | 0.4103 | 0.8381 |
| 0.2964 | 41.89 | 252 | 0.4469 | 0.8286 |
| 0.2964 | 42.89 | 258 | 0.4774 | 0.8286 |
| 0.3435 | 43.89 | 264 | 0.3843 | 0.8381 |
| 0.3146 | 44.89 | 270 | 0.3710 | 0.8667 |
| 0.3146 | 45.89 | 276 | 0.3392 | 0.8667 |
| 0.3168 | 46.89 | 282 | 0.3597 | 0.8667 |
| 0.3168 | 47.89 | 288 | 0.4143 | 0.8381 |
| 0.3081 | 48.89 | 294 | 0.3579 | 0.8571 |
| 0.3103 | 49.89 | 300 | 0.4527 | 0.8190 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.1
|
Hayoung/my_awesome_ko_en_model | Hayoung | 2022-12-07T12:43:13Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-06T15:49:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_ko_en_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_ko_en_model
This model is a fine-tuned version of [KETI-AIR/ke-t5-small](https://huggingface.co/KETI-AIR/ke-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0
- Gen Len: 19.0
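
The card stops at these metrics and gives no usage snippet. Below is a minimal sketch of how this Korean-to-English checkpoint could be called; the example sentence, generation settings, and the assumption that no task prefix is needed are illustrative rather than documented, and the reported BLEU of 0.0 suggests the outputs may not be useful translations.

```python
from transformers import pipeline

# Hedged sketch: the model id comes from this card, everything else is illustrative.
translator = pipeline("text2text-generation", model="Hayoung/my_awesome_ko_en_model")

# A short Korean input; the card does not document any required task prefix.
print(translator("나는 오늘 아침에 커피를 마셨다.", max_length=40))
```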
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| No log | 1.0 | 67 | nan | 0.0 | 19.0 |
| No log | 2.0 | 134 | nan | 0.0 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.9.0+cu111
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kpriyanshu256/whisper-small-as-500-64-1e-05-bn | kpriyanshu256 | 2022-12-07T12:02:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"as",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-07T03:56:24Z | ---
language:
- as
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-small-Assamese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: as
split: test
args: as
metrics:
- name: Wer
type: wer
value: 61.75780545027973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small-Assamese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3386
- Wer: 61.7578
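
No inference example is included in the card; the sketch below shows one common way to transcribe audio with a fine-tuned Whisper checkpoint. The audio path is a placeholder, and the forced Assamese/transcribe decoder settings are assumptions based on the card's language tag, not something the card specifies.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kpriyanshu256/whisper-small-as-500-64-1e-05-bn")

# Assumption: force Assamese transcription so the model does not try to detect language or translate.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(language="as", task="transcribe")

print(asr("sample_assamese.wav")["text"])  # "sample_assamese.wav" is a placeholder path
```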
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3132 | 0.3 | 150 | 1.4029 | 161.4149 |
| 0.1888 | 1.08 | 300 | 1.3000 | 61.7217 |
| 0.1358 | 1.38 | 450 | 1.3386 | 61.7578 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
alsolera/ppo-LunarLander-v2 | alsolera | 2022-12-07T11:41:58Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T11:41:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.02 +/- 9.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
yip-i/wav2vec2-demo-F04-2 | yip-i | 2022-12-07T11:38:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-19T04:34:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-demo-F04-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-demo-F04-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3203
- Wer: 0.5353
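
The card lists only the metrics; a rough sketch of CTC inference with this checkpoint follows. The audio file name is a placeholder, and loading audio with librosa is just one convenient option; the main requirement for XLSR-based models is 16 kHz mono input.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("yip-i/wav2vec2-demo-F04-2")
model = Wav2Vec2ForCTC.from_pretrained("yip-i/wav2vec2-demo-F04-2")

speech, _ = librosa.load("sample.wav", sr=16_000)  # placeholder path; resampled to 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(pred_ids))
```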
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 23.5576 | 0.89 | 500 | 3.3654 | 1.0 |
| 3.3953 | 1.79 | 1000 | 3.1729 | 1.0 |
| 2.9514 | 2.68 | 1500 | 2.8946 | 1.0 |
| 2.84 | 3.57 | 2000 | 2.8386 | 1.0 |
| 2.7685 | 4.46 | 2500 | 2.7147 | 1.0 |
| 2.5059 | 5.36 | 3000 | 2.1341 | 1.1752 |
| 1.8907 | 6.25 | 3500 | 1.3604 | 1.2403 |
| 1.3892 | 7.14 | 4000 | 0.8814 | 1.1989 |
| 1.0754 | 8.04 | 4500 | 0.6416 | 1.0529 |
| 0.8795 | 8.93 | 5000 | 0.5760 | 0.9641 |
| 0.7478 | 9.82 | 5500 | 0.4633 | 0.8790 |
| 0.6107 | 10.71 | 6000 | 0.3921 | 0.8394 |
| 0.5445 | 11.61 | 6500 | 0.3579 | 0.7987 |
| 0.4788 | 12.5 | 7000 | 0.3034 | 0.7470 |
| 0.4435 | 13.39 | 7500 | 0.2989 | 0.7311 |
| 0.4057 | 14.29 | 8000 | 0.3366 | 0.7092 |
| 0.3606 | 15.18 | 8500 | 0.2783 | 0.6892 |
| 0.343 | 16.07 | 9000 | 0.2593 | 0.6612 |
| 0.3189 | 16.96 | 9500 | 0.2780 | 0.6460 |
| 0.277 | 17.86 | 10000 | 0.3266 | 0.6277 |
| 0.2789 | 18.75 | 10500 | 0.3582 | 0.6253 |
| 0.2552 | 19.64 | 11000 | 0.3422 | 0.6156 |
| 0.2416 | 20.54 | 11500 | 0.3387 | 0.6016 |
| 0.2187 | 21.43 | 12000 | 0.3657 | 0.5845 |
| 0.2317 | 22.32 | 12500 | 0.2932 | 0.5845 |
| 0.2091 | 23.21 | 13000 | 0.2551 | 0.5614 |
| 0.199 | 24.11 | 13500 | 0.3113 | 0.5474 |
| 0.1777 | 25.0 | 14000 | 0.2895 | 0.5572 |
| 0.1823 | 25.89 | 14500 | 0.3127 | 0.5456 |
| 0.179 | 26.79 | 15000 | 0.2945 | 0.5438 |
| 0.1596 | 27.68 | 15500 | 0.3052 | 0.5322 |
| 0.1671 | 28.57 | 16000 | 0.3119 | 0.5365 |
| 0.1564 | 29.46 | 16500 | 0.3203 | 0.5353 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
smejak/ppo-LunarLander-v2 | smejak | 2022-12-07T11:31:33Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T11:31:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.33 +/- 30.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
rifkat/uztext-3Gb-BPE-Roberta | rifkat | 2022-12-07T11:13:53Z | 74 | 6 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"mit",
"robert",
"uzrobert",
"uzbek",
"cyrillic",
"latin",
"uz",
"doi:10.57967/hf/0210",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z |
---
language:
- uz
tags:
- transformers
- mit
- robert
- uzrobert
- uzbek
- cyrillic
- latin
license: apache-2.0
widget:
- text: "Kuchli yomgโirlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi."
example_title: "Latin script"
- text: "ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ <mask>, ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ."
example_title: "Cyrillic script"
---
<p><b>UzRoBerta model.</b>
A pretrained model for Uzbek (Cyrillic and Latin scripts) for masked language modeling and next-sentence prediction.
<p><b>How to use.</b>
You can use this model directly with a pipeline for masked language modeling:
<pre><code class="language-python">
from transformers import pipeline
unmasker = pipeline('fill-mask', model='rifkat/uztext-3Gb-BPE-Roberta')
unmasker("ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ [mask], ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ.")
[{'score': 0.5902208685874939,
'sequence': 'ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ ัะพะธัะธ, ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ.',
'token': 28809,
'token_str': ' ัะพะธัะธ'},
{'score': 0.08303504437208176,
'sequence': 'ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ ัััะพะทะธ, ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ.',
'token': 17484,
'token_str': ' ัััะพะทะธ'},
{'score': 0.035882771015167236,
'sequence': 'ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ ะฐัะฑะพะฑะธ, ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ.',
'token': 34552,
'token_str': ' ะฐัะฑะพะฑะธ'},
{'score': 0.03447483479976654,
'sequence': 'ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ ะฐัะพััะธัะธ, ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ.',
'token': 14034,
'token_str': ' ะฐัะพััะธัะธ'},
{'score': 0.03044942207634449,
'sequence': 'ะะปะธัะตั ะะฐะฒะพะธะน โ ัะปัา ัะทะฑะตะบ ะฒะฐ ะฑะพัาะฐ ัััะบะธะน ั
ะฐะปาะปะฐัะฝะธะฝะณ ะดัััะธ, ะผััะฐัะฐะบะบะธัะธ ะฒะฐ ะดะฐะฒะปะฐั ะฐัะฑะพะฑะธ ะฑัะปะณะฐะฝ.',
'token': 28100,
'token_str': ' ะดัััะธ'}]
unmasker("Kuchli yomgโirlar tufayli bir qator [mask] kuchli sel oqishi kuzatildi.")
[{'score': 0.410250186920166,
'sequence': 'Kuchli yomg’irlar tufayli bir qator hududlarda kuchli sel oqishi kuzatildi.',
'token': 11009,
'token_str': ' hududlarda'},
{'score': 0.2023029774427414,
'sequence': 'Kuchli yomg’irlar tufayli bir qator tumanlarda kuchli sel oqishi kuzatildi.',
'token': 35370,
'token_str': ' tumanlarda'},
{'score': 0.129830002784729,
'sequence': 'Kuchli yomg’irlar tufayli bir qator viloyatlarda kuchli sel oqishi kuzatildi.',
'token': 33584,
'token_str': ' viloyatlarda'},
{'score': 0.04539087787270546,
'sequence': 'Kuchli yomg’irlar tufayli bir qator mamlakatlarda kuchli sel oqishi kuzatildi.',
'token': 19315,
'token_str': ' mamlakatlarda'},
{'score': 0.0369882769882679,
'sequence': 'Kuchli yomg’irlar tufayli bir qator joylarda kuchli sel oqishi kuzatildi.',
'token': 5853,
'token_str': ' joylarda'}]
</code></pre>
<p><b>Training data.</b>
The UzRoBerta model was pretrained on ≈2M news articles (≈3 GB).
<pre><code class="language-python">
@misc {rifkat_davronov_2022,
author = { {Adilova Fatima,Rifkat Davronov, Samariddin Kushmuratov, Ruzmat Safarov} },
title = { uztext-3Gb-BPE-Roberta (Revision 0c87494) },
year = 2022,
url = { https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta },
doi = { 10.57967/hf/0140 },
publisher = { Hugging Face }
}
</code></pre>
|
muhtasham/finetuned-self_mlm_medium | muhtasham | 2022-12-07T11:07:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T09:59:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuned-self_mlm_medium
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.91304
- name: F1
type: f1
value: 0.9545435537155521
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_medium
This model is a fine-tuned version of [muhtasham/bert-medium-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-medium-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3215
- Accuracy: 0.9130
- F1: 0.9545
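
Since the usage sections of this card are empty, here is a minimal sentiment-classification sketch, assuming the checkpoint exposes a standard sequence-classification head for the IMDB task; the example review is invented and the label names returned depend on the model config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="muhtasham/finetuned-self_mlm_medium")

# Illustrative input; label names (e.g. LABEL_0 / LABEL_1) come from the model config.
print(classifier("A touching movie with a brilliant cast and a script that never drags."))
```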
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2916 | 0.64 | 500 | 0.2293 | 0.9087 | 0.9522 |
| 0.1969 | 1.28 | 1000 | 0.1605 | 0.9442 | 0.9713 |
| 0.1511 | 1.92 | 1500 | 0.1787 | 0.9406 | 0.9694 |
| 0.1046 | 2.56 | 2000 | 0.2280 | 0.9379 | 0.9680 |
| 0.0852 | 3.2 | 2500 | 0.3215 | 0.9130 | 0.9545 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ahmeticomadrid/art | ahmeticomadrid | 2022-12-07T10:56:46Z | 0 | 0 | null | [
"region:us"
] | null | 2022-12-07T10:52:48Z | create a black horse
splash some pink color
tear half of the page
fill the torn part with roses |
Conflictx/CandyPunk | Conflictx | 2022-12-07T10:34:52Z | 0 | 34 | null | [
"text-to-image",
"v2.0",
"Embedding",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2022-12-03T20:38:50Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- v2.0
- Embedding
---
Textual Inversion embedding by ConflictX for SD 2.0, trained on 768x768 images from Midjourney and other sources.
Install it by downloading the embedding at the training step you want and putting it in the \embeddings folder.
Another themed one, this one is more focused on vibrant and sweet environments.
Use keyword: CandyPunk
Images:






|
abbeyalien/abbeyalien | abbeyalien | 2022-12-07T10:19:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-07T10:19:12Z | ---
license: creativeml-openrail-m
---
|
SatCat/ppo-LunarLander-v2 | SatCat | 2022-12-07T10:08:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T10:05:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.31 +/- 20.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
No preview (Windows dev.).
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
muhtasham/finetuned-self_mlm_small | muhtasham | 2022-12-07T09:58:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T22:57:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuned-self_mlm_small
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9372
- name: F1
type: f1
value: 0.9675820772248607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_small
This model is a fine-tuned version of [muhtasham/bert-small-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-small-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3759
- Accuracy: 0.9372
- F1: 0.9676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2834 | 1.28 | 500 | 0.2254 | 0.9150 | 0.9556 |
| 0.1683 | 2.56 | 1000 | 0.3738 | 0.8694 | 0.9301 |
| 0.1069 | 3.84 | 1500 | 0.2102 | 0.9354 | 0.9666 |
| 0.0651 | 5.12 | 2000 | 0.2278 | 0.9446 | 0.9715 |
| 0.0412 | 6.39 | 2500 | 0.4061 | 0.9156 | 0.9559 |
| 0.0316 | 7.67 | 3000 | 0.4371 | 0.9110 | 0.9534 |
| 0.0219 | 8.95 | 3500 | 0.3759 | 0.9372 | 0.9676 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ljh1/mrpc | ljh1 | 2022-12-07T08:59:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T08:56:43Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6911764705882353
- name: F1
type: f1
value: 0.8157894736842105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5611
- Accuracy: 0.6912
- F1: 0.8158
- Combined Score: 0.7535
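
The card gives no example of how to query the model. MRPC is a sentence-pair (paraphrase) task, so the two sentences are encoded together, as in the hedged sketch below; the example pair is invented, and which logit index corresponds to "equivalent" should be checked against the model config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ljh1/mrpc")
model = AutoModelForSequenceClassification.from_pretrained("ljh1/mrpc")

# Sentence-pair input: MRPC asks whether the two sentences are paraphrases.
inputs = tokenizer(
    "The company said profits rose in the third quarter.",
    "Quarterly profits increased, the company reported.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # check model.config.id2label to see which index means "equivalent"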
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.12.1
|
smartlens/donut-id-model-525-v1.0 | smartlens | 2022-12-07T08:54:03Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2022-12-07T05:53:02Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: donut-id-model-525-v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-id-model-525-v1.0
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
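
Nothing in the card documents how to prompt this Donut checkpoint, so the following is only a rough sketch of the usual DonutProcessor / VisionEncoderDecoderModel inference loop. The image path and the "<s>" start prompt are guesses; fine-tuned Donut models normally expect a task-specific start token that this card does not reveal.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("smartlens/donut-id-model-525-v1.0")
model = VisionEncoderDecoderModel.from_pretrained("smartlens/donut-id-model-525-v1.0")

image = Image.open("id_document.jpg").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s>" is a guessed start prompt; the real task token for this model is undocumented.
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

sequence = processor.batch_decode(outputs)[0]
print(sequence)  # processor.token2json(sequence) can convert well-formed output into a dict
```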
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
GIanlucaRub/whisper-tiny-it-5 | GIanlucaRub | 2022-12-07T08:46:09Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T20:26:42Z | ---
language:
- it
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny it 5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: it
split: test[:10%]
args: 'config: it, split: test'
metrics:
- name: Wer
type: wer
value: 41.271491957848035
---
# Whisper Tiny it 5
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.760934
- Wer: 41.271492
## Model description
This model is the OpenAI Whisper tiny transformer adapted for Italian audio-to-text transcription. In the hyperparameter-tuning process, weight decay was set to 0.1 and the learning rate to 1e-4, which improved performance on the evaluation set.
## Intended uses & limitations
The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it)
## Training and evaluation data
Data used for training is the initial 10% of train and validation of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from Mozilla Foundation.
The dataset used for evaluation is the initial 10% of test of Italian Common Voice.
## Training procedure
After loading the pre-trained model, it was trained on the dataset described above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7015 | 0.95 | 1000 | 0.9463 | 64.4689 |
| 0.3579 | 1.91 | 2000 | 0.8363 | 51.7471 |
| 0.1388 | 2.86 | 3000 | 0.7766 | 43.6425 |
| 0.0403 | 3.82 | 4000 | 0.7609 | 41.2715 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2 |
teacookies/autotrain-07122022-2-exam_cert-2364774382 | teacookies | 2022-12-07T08:44:08Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-07122022-2-exam_cert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T08:29:16Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- teacookies/autotrain-data-07122022-2-exam_cert
co2_eq_emissions:
emissions: 24.71153691821318
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2364774382
- CO2 Emissions (in grams): 24.7115
## Validation Metrics
- Loss: 0.021
- Accuracy: 0.995
- Precision: 0.917
- Recall: 0.932
- F1: 0.924
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-07122022-2-exam_cert-2364774382
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-07122022-2-exam_cert-2364774382", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-07122022-2-exam_cert-2364774382", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
weiweishi/roc-bert-base-zh | weiweishi | 2022-12-07T08:30:15Z | 2,187 | 5 | transformers | [
"transformers",
"pytorch",
"roc_bert",
"pretraining",
"fill-mask",
"zh",
"doi:10.57967/hf/0097",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-10-13T07:03:32Z | ---
language:
- zh
pipeline_tag: "fill-mask"
widget:
- text: "ba้ป็ณป[MASK]ๅฝ็้ฆ้ฝ"
example_title: "Adversarial Attack Test"
---
# RoCBert
## Introduction
RoCBert is a pretrained Chinese language model, proposed by WeChatAI in 2022, that is robust under various forms of adversarial attacks.
More details: https://aclanthology.org/2022.acl-long.65.pdf
Pretraining code: https://github.com/sww9370/RoCBert
## How to use
```Python
# pip install transformers>=4.25.1
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = AutoModel.from_pretrained("weiweishi/roc-bert-base-zh")
```
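
Following the load snippet above, the checkpoint is tagged for fill-mask, so masked-word prediction along the lines of the widget example should look roughly like the sketch below; whether the fill-mask pipeline picks up the language-modeling head cleanly for this roc_bert checkpoint is an assumption rather than something the card states.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="weiweishi/roc-bert-base-zh")

# The widget's adversarially perturbed input: "ba" stands in for the character 巴.
print(fill_mask("ba黎系[MASK]国的首都"))
```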
## Citation
```bibtex
@inproceedings{su2022rocbert,
title={RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining},
author={Su, Hui and Shi, Weiwei and Shen, Xiaoyu and Xiao, Zhou and Ji, Tuo and Fang, Jiarui and Zhou, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={921--931},
year={2022}
}
``` |
SantoshUske/my_awesome_wnut_model | SantoshUske | 2022-12-07T07:49:18Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T07:28:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
SantoshUske/my_awesome_model | SantoshUske | 2022-12-07T07:25:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T06:57:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
anthonyduer/ppo-LunarLander-v2 | anthonyduer | 2022-12-07T07:20:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T07:19:40Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 226.55 +/- 49.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
gamesxymo10/Tti | gamesxymo10 | 2022-12-07T06:46:54Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-07T06:46:54Z | ---
license: bigscience-openrail-m
---
|
Nhat1904/best-120-shot-model | Nhat1904 | 2022-12-07T06:43:46Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-07T06:43:31Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 300 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 300,
"warmup_steps": 30,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kennethgoodman/ppo-LunarLander-v2 | kennethgoodman | 2022-12-07T06:42:21Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T22:15:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.77 +/- 23.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jihoonkimharu/bert-base-klue-ynat-finetuned | jihoonkimharu | 2022-12-07T05:45:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"mrc",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T05:44:13Z | ---
language: ko
tags:
- bert
- mrc
datasets:
- klue
license: cc-by-sa-4.0
---
# Checkpoint for an Inflearn lecture
A model fine-tuned on the YNAT task of KLUE. |
Shularp/krirk-finetuned-Helsinki-NLP_opus-mt-ar-en | Shularp | 2022-12-07T05:41:57Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-12-07T04:44:26Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: krirk-finetuned-Helsinki-NLP_opus-mt-ar-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# krirk-finetuned-Helsinki-NLP_opus-mt-ar-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3665
- Bleu: 35.0219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.4469 | 1.0 | 32 | 1.3744 | 34.9616 |
| 1.2938 | 2.0 | 64 | 1.3674 | 34.9145 |
| 1.2582 | 3.0 | 96 | 1.3665 | 35.0219 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fanzru/t5-small-finetuned-xlsum-concat-multi-news | fanzru | 2022-12-07T05:28:52Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-06T16:55:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xlsum-concat-multi-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xlsum-concat-multi-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4230
- Rouge1: 29.1361
- Rouge2: 8.0189
- Rougel: 22.513
- Rougelsum: 22.5598
- Gen Len: 18.8373
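
The card does not include a usage example. A hedged summarization sketch with this t5-small-based checkpoint follows; the article text is invented, the length settings are arbitrary, and whether a "summarize: " prefix is still required depends on how the model was fine-tuned, which the card does not say.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="fanzru/t5-small-finetuned-xlsum-concat-multi-news")

article = (
    "The city council approved a new public transport plan on Monday, "
    "adding three bus routes and extending tram service to the airport. "
    "Officials said the changes should cut average commute times by ten minutes."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```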
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.2181 | 1.0 | 20543 | 2.4230 | 29.1361 | 8.0189 | 22.513 | 22.5598 | 18.8373 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Shularp/finetuned-bert-mrpc | Shularp | 2022-12-07T05:17:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T05:01:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8504901960784313
- name: F1
type: f1
value: 0.8960817717206134
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4478
- Accuracy: 0.8505
- F1: 0.8961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5323 | 1.0 | 230 | 0.3748 | 0.8480 | 0.8916 |
| 0.2969 | 2.0 | 460 | 0.3628 | 0.8603 | 0.9005 |
| 0.1535 | 3.0 | 690 | 0.4478 | 0.8505 | 0.8961 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
enzokro/sd-class-butterflies-64 | enzokro | 2022-12-07T05:16:07Z | 10 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-07T05:15:52Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute butterflies 🦋.
The model was trained with the Adam optimizer settings from fast.ai.
The batch size was also doubled, to 64.
Learning-rate warmup happens over 160 steps, i.e. 20% of training.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('enzokro/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
hyorea1/KoT5-test-add-data-from5ep | hyorea1 | 2022-12-07T04:45:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-06T08:33:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: KoT5-test-add-data-from5ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KoT5-test-add-data-from5ep
This model is a fine-tuned version of [hyorea1/KoT5-test](https://huggingface.co/hyorea1/KoT5-test) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1737
- Rouge1: 11.8294
- Rouge2: 3.2314
- Rougel: 11.7891
- Rougelsum: 11.8237
- Gen Len: 35.2824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.9029 | 0.16 | 400 | 1.1695 | 12.8243 | 3.2659 | 12.7542 | 12.8276 | 35.5743 |
| 1.7971 | 0.32 | 800 | 1.1646 | 12.259 | 3.0668 | 12.1254 | 12.1927 | 35.2353 |
| 1.4396 | 0.48 | 1200 | 1.1681 | 12.1151 | 3.1908 | 11.9507 | 12.0305 | 35.3125 |
| 1.0945 | 0.64 | 1600 | 1.1703 | 12.0576 | 2.9688 | 11.9292 | 11.9792 | 35.0926 |
| 1.1924 | 0.8 | 2000 | 1.1667 | 11.7835 | 2.9605 | 11.6755 | 11.7318 | 35.3596 |
| 1.3711 | 0.97 | 2400 | 1.1668 | 11.9873 | 3.1107 | 11.9369 | 12.0207 | 34.5309 |
| 1.6031 | 1.13 | 2800 | 1.1673 | 11.6049 | 3.1121 | 11.5527 | 11.5976 | 34.6551 |
| 1.5254 | 1.29 | 3200 | 1.1693 | 11.6803 | 2.8527 | 11.6116 | 11.6829 | 34.8066 |
| 1.641 | 1.45 | 3600 | 1.1737 | 11.8294 | 3.2314 | 11.7891 | 11.8237 | 35.2824 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Weili/vit-base-patch16-224-finetuned-eurosat | Weili | 2022-12-07T04:37:58Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-07T03:45:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9888888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0363
- Accuracy: 0.9889
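
The card reports accuracy on an unnamed imagefolder dataset but no inference code; a minimal image-classification sketch is given below. The image path is a placeholder, and the label set returned comes from whatever class folders were used during fine-tuning, which the card does not list.

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="Weili/vit-base-patch16-224-finetuned-eurosat")

image = Image.open("satellite_patch.png")  # placeholder path
print(classifier(image))  # labels depend on the fine-tuning dataset's class folders
```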
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1667 | 1.0 | 190 | 0.0731 | 0.9756 |
| 0.115 | 2.0 | 380 | 0.0426 | 0.9878 |
| 0.0903 | 3.0 | 570 | 0.0363 | 0.9889 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Scrwed/ppo-LunarLander-v2 | Scrwed | 2022-12-07T04:30:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T05:10:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.91 +/- 68.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym
from huggingface_sb3 import load_from_hub, package_to_hub, push_to_hub
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env
# Create the environment
env = make_vec_env('LunarLander-v2', n_envs=16)
model = PPO(
policy = 'MlpPolicy',
env = env,
n_steps = 1024,
batch_size = 64,
n_epochs = 8,
gamma = 0.995,
gae_lambda = 1,
ent_coef = 0.001,
verbose=1
)
model.learn(total_timesteps=2_000_000, log_interval=25, progress_bar=True)
model_name = "ppo-LunarLander-v2"
# Evaluate the agent
# Create a new environment for evaluation
eval_env = gym.make("LunarLander-v2")
# Evaluate the model with 10 evaluation episodes and deterministic=True
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
# Print the results
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Upload to Hugging Face Hub
...
```
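
The snippet above stops just before the upload step (the final `...` under "Upload to Hugging Face Hub"). A hedged sketch of that step with `huggingface_sb3.package_to_hub` is shown below; it assumes the `model` and `model_name` variables from the snippet above are still in scope, and the repo id and commit message are placeholders to replace with your own.

```python
import gym
from huggingface_sb3 import package_to_hub
from stable_baselines3.common.vec_env import DummyVecEnv

# Placeholder repo id; replace with your own Hub repository before running.
package_to_hub(
    model=model,                      # PPO model trained above
    model_name=model_name,            # "ppo-LunarLander-v2"
    model_architecture="PPO",
    env_id="LunarLander-v2",
    eval_env=DummyVecEnv([lambda: gym.make("LunarLander-v2")]),
    repo_id="your-username/ppo-LunarLander-v2",
    commit_message="Upload PPO LunarLander-v2 agent",
)
```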
|
aammari/setfit-zero-shot-classification-pbsp-p1 | aammari | 2022-12-07T04:16:20Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-07T04:15:42Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 518 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 518,
"warmup_steps": 52,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
shicz86/ppo-LunarLander-v2 | shicz86 | 2022-12-07T04:02:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T08:05:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.56 +/- 12.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Quaouar/VLP_singleLED-model | Quaouar | 2022-12-07T03:46:24Z | 14 | 0 | tf-keras | [
"tf-keras",
"tensorboard",
"license:afl-3.0",
"region:us"
] | null | 2022-12-06T20:38:09Z | ---
license: afl-3.0
---
# VLP Dataset Metadata
This dataset was acquired during the dissertation entitled **Optical Camera Communications and Machine Learning for Indoor Visible Light Positioning**. This work was carried out in the academic year
2020/2021 at the Instituto de Telecomunicacoes in Aveiro.
The images that constitute this dataset were acquired over a grid with 15 regularly spaced reference
points on the floor surface. Table 2 shows the position of these points in relation to the referential
defined in the room along with the position of the LED luminaires. During the dataset acquisition,
the CMOS image sensor (Sony IMX219) was positioned parallel to the floor at a height of 25.6 cm
facing upwards, i.e. with pitch and yaw angles equal to 0. All images were saved as TIFF (Tagged
Image File Format) with a resolution of 3264 ร 2464 pixels and exposure and readout times equal to
9 ยตs and 18 ยตs, respectively. |
zlicastro/zl-ppo-Huggy | zlicastro | 2022-12-07T03:29:42Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-07T03:29:02Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: zlicastro/zl-ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Kuaaangwen/roberta-base-finetuned-mnli | Kuaaangwen | 2022-12-07T03:24:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-07T01:11:00Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mnli
split: train
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.865206316861946
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-mnli
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3914
- Accuracy: 0.8652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3982 | 1.0 | 49088 | 0.3914 | 0.8652 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
haining/ppo-huggy | haining | 2022-12-07T03:19:31Z | 5 | 1 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-07T03:19:25Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: haining/ppo-huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Delcos/Hentai-Diffusion | Delcos | 2022-12-07T03:08:01Z | 0 | 202 | null | [
"region:us"
] | null | 2022-10-03T20:28:35Z | Update: https://huggingface.co/Deltaadams/HentaiDiffusion |