modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
ColabPro/PPO-LunarLander-v2-v1 | ColabPro | 2022-05-16T22:03:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T22:02:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -4.65 +/- 21.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
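Until then, a minimal loading and evaluation sketch (assuming `stable-baselines3` and `huggingface_sb3` are installed; the archive filename inside the repository is an assumption and may differ) could look like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption; check the repo's files)
checkpoint = load_from_hub(repo_id="ColabPro/PPO-LunarLander-v2-v1", filename="PPO-LunarLander-v2-v1.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```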
|
espnet/english_male_ryanspeech_fastspeech2 | espnet | 2022-05-16T22:00:14Z | 5 | 4 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ryanspeech",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2022-05-10T18:13:25Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ryanspeech
license: cc-by-nc-4.0
widget:
- text: "This seems a very pleasant place, and I think I shall enjoy myself very much."
---
## RyanSpeech model (based on ESPnet2)
### `espnet/english_male_ryanspeech_fastspeech2`
This model was trained by [Rohola Zandie](https://scholar.google.com/citations?user=xv0jIe0AAAAJ&hl=en) using the ryanspeech recipe in [espnet](https://github.com/espnet/espnet/). For the best results, you need to download the vocoder separately from [here](https://drive.google.com/file/d/10GYvB_mIKzXzSjD67tSnBhknZRoBjsNb/view?usp=sharing) and then use the following code:
```python
from espnet2.bin.tts_inference import Text2Speech
from scipy.io.wavfile import write
model = Text2Speech.from_pretrained(
model_file="espnet/english_male_ryanspeech_fastspeech2",
vocoder_file="path_to_vocoder/train_nodev_parallel_wavegan.v1.long/checkpoint-1000000steps.pkl"
)
output = model("This is a simple test.")
write("x.wav", 22050, output['wav'].numpy())
```
## Download the dataset
You can download the RyanSpeech dataset from [here](https://www.kaggle.com/datasets/roholazandie/ryanspeech).
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_fastspeech.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 6
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 800000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- AH0
- T
- N
- S
- R
- D
- L
- K
- IH1
- M
- EH1
- Z
- DH
- UW1
- AE1
- IH0
- AY1
- AH1
- W
- .
- P
- F
- IY1
- V
- ER0
- AA1
- B
- AO1
- HH
- EY1
- IY0
- ','
- Y
- NG
- OW1
- G
- AW1
- TH
- SH
- UH1
- '?'
- ER1
- JH
- CH
- OW0
- OW2
- EH2
- IH2
- EY2
- AA2
- AE2
- AY2
- ''''
- OY1
- UW0
- '!'
- AO2
- EH0
- ZH
- AH2
- AE0
- UW2
- AA0
- AY0
- IY2
- AW2
- AO0
- EY0
- ER2
- UH2
- '...'
- AW0
- UH0
- OY2
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en_no_space
feats_extract: fbank
feats_extract_conf:
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
hop_length: 256
n_fft: 1024
win_length: null
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz
tts: fastspeech
tts_conf:
adim: 384
aheads: 2
elayers: 6
eunits: 1536
dlayers: 6
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 384
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
distributed: false
```
</details>
### Citing RyanSpeech
```BibTex
@inproceedings{Zandie2021RyanSpeechAC,
title={RyanSpeech: A Corpus for Conversational Text-to-Speech Synthesis},
author={Rohola Zandie and Mohammad H. Mahoor and Julia Madsen and Eshrat S. Emamian},
booktitle={Interspeech},
year={2021}
}
``` |
ATH0/ppo-LunarLander-v2 | ATH0 | 2022-05-16T21:43:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T21:43:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 280.92 +/- 14.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
evolvingstuff/gpt2-wikitext2 | evolvingstuff | 2022-05-16T21:25:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-16T20:30:35Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1128
## Model description
More information needed
## Intended uses & limitations
More information needed
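As an illustrative sketch only (the intended use is otherwise undocumented), the checkpoint can presumably be loaded with the `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub
generator = pipeline("text-generation", model="evolvingstuff/gpt2-wikitext2")

# Sample a short continuation of an arbitrary prompt
print(generator("The history of the English language", max_length=50, do_sample=True)[0]["generated_text"])
```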
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5628 | 1.0 | 2249 | 6.4705 |
| 6.1956 | 2.0 | 4498 | 6.2012 |
| 6.021 | 3.0 | 6747 | 6.1128 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
microsoft/swin-large-patch4-window7-224-in22k | microsoft | 2022-05-16T19:59:30Z | 450 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-21k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window7-224-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window7-224-in22k")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/swin-large-patch4-window7-224 | microsoft | 2022-05-16T19:58:33Z | 3,115 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window7-224")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window7-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Ukhushn/DistilHomeDepot-finetuned | Ukhushn | 2022-05-16T19:16:36Z | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-09T06:37:59Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Ukhushn/DistilHomeDepot-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ukhushn/DistilHomeDepot-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6502
- Validation Loss: 2.2067
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1437, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6502 | 2.2067 | 0 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
KhariotnovKK/Car_racing_v0 | KhariotnovKK | 2022-05-16T19:02:47Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T18:45:52Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 58.17 +/- 51.28
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
maazmikail/finetuning-sentiment-model-urdu-roberta | maazmikail | 2022-05-16T19:01:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-16T12:46:28Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-urdu-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-urdu-roberta
This model is a fine-tuned version of [urduhack/roberta-urdu-small](https://huggingface.co/urduhack/roberta-urdu-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
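As an illustrative sketch only (the label mapping and preprocessing are not documented), the model can presumably be loaded with the `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Load the fine-tuned Urdu RoBERTa classifier from the Hub
classifier = pipeline("text-classification", model="maazmikail/finetuning-sentiment-model-urdu-roberta")

# Example Urdu sentence ("This movie was very good"); the label names (e.g. LABEL_0/LABEL_1) are not documented
print(classifier("یہ فلم بہت اچھی تھی"))
```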
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Rocketknight1/temp-colab-upload-test2 | Rocketknight1 | 2022-05-16T18:59:35Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-23T17:02:59Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/temp-colab-upload-test2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp-colab-upload-test2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6931
- Validation Loss: 0.6931
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6931 | 0.6931 | 0 |
| 0.6931 | 0.6931 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
microsoft/swin-base-patch4-window12-384 | microsoft | 2022-05-16T18:32:57Z | 28,937 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (base-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-base-patch4-window12-384")
model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window12-384")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/swin-large-patch4-window12-384 | microsoft | 2022-05-16T18:08:30Z | 697 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window12-384")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window12-384")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
importsmart/bert-to-distilbert-NER | importsmart | 2022-05-16T18:02:27Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-16T17:45:16Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-to-distilbert-NER
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.014488935721812434
- name: Recall
type: recall
value: 0.018512285425782565
- name: F1
type: f1
value: 0.016255356878971478
- name: Accuracy
type: accuracy
value: 0.7597280273150055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-to-distilbert-NER
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 44.0386
- Precision: 0.0145
- Recall: 0.0185
- F1: 0.0163
- Accuracy: 0.7597
## Model description
More information needed
## Intended uses & limitations
More information needed
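As an illustrative sketch only (note the very low reported F1, so predictions should be treated as experimental), the model can presumably be loaded with the `transformers` token-classification pipeline:
```python
from transformers import pipeline

# Load the token-classification model; aggregation_strategy groups word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="importsmart/bert-to-distilbert-NER",
    aggregation_strategy="simple",
)

print(ner("Hugging Face Inc. is based in New York City."))
```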
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 201.4012 | 1.0 | 110 | 133.7231 | 0.0153 | 0.0106 | 0.0125 | 0.7539 |
| 106.9317 | 2.0 | 220 | 99.3629 | 0.0266 | 0.0305 | 0.0284 | 0.7593 |
| 81.3601 | 3.0 | 330 | 80.3763 | 0.0159 | 0.0214 | 0.0183 | 0.7604 |
| 63.8325 | 4.0 | 440 | 67.7620 | 0.0179 | 0.0244 | 0.0207 | 0.7599 |
| 52.0271 | 5.0 | 550 | 59.0806 | 0.0203 | 0.0268 | 0.0231 | 0.7598 |
| 44.4419 | 6.0 | 660 | 55.3208 | 0.0211 | 0.0278 | 0.0240 | 0.7603 |
| 39.2351 | 7.0 | 770 | 52.4510 | 0.0170 | 0.0222 | 0.0193 | 0.7598 |
| 35.3438 | 8.0 | 880 | 50.4576 | 0.0205 | 0.0268 | 0.0232 | 0.7604 |
| 32.7385 | 9.0 | 990 | 48.3418 | 0.0173 | 0.0227 | 0.0197 | 0.7595 |
| 30.6531 | 10.0 | 1100 | 46.7304 | 0.0147 | 0.0188 | 0.0165 | 0.7600 |
| 29.0811 | 11.0 | 1210 | 46.3386 | 0.0151 | 0.0190 | 0.0168 | 0.7599 |
| 27.9501 | 12.0 | 1320 | 45.4516 | 0.0163 | 0.0204 | 0.0181 | 0.7604 |
| 26.7452 | 13.0 | 1430 | 44.3425 | 0.0154 | 0.0199 | 0.0173 | 0.7592 |
| 25.5367 | 14.0 | 1540 | 44.0415 | 0.0146 | 0.0190 | 0.0165 | 0.7594 |
| 24.5507 | 15.0 | 1650 | 44.0386 | 0.0145 | 0.0185 | 0.0163 | 0.7597 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
BobBraico/bert-finetuned-mrpc | BobBraico | 2022-05-16T17:13:31Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-16T17:04:54Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1719
- Train Accuracy: 0.9359
- Validation Loss: 0.4050
- Validation Accuracy: 0.8382
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5877 | 0.6823 | 0.4665 | 0.8015 | 0 |
| 0.3843 | 0.8201 | 0.4026 | 0.8309 | 1 |
| 0.1719 | 0.9359 | 0.4050 | 0.8382 | 2 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
mariastull/unit_1 | mariastull | 2022-05-16T16:56:24Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T16:55:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 225.02 +/- 24.01
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
subhasisj/ar-kd-XLM-minilmv2-32 | subhasisj | 2022-05-16T16:50:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-16T10:49:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: ar-kd-XLM-minilmv2-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar-kd-XLM-minilmv2-32
This model is a fine-tuned version of [subhasisj/ar-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/ar-TAPT-MLM-MiniLM) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
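As an illustrative sketch only (the training data and evaluation setup are not documented here), the model can presumably be loaded with the `transformers` question-answering pipeline:
```python
from transformers import pipeline

# Load the distilled Arabic QA model from the Hub
qa = pipeline("question-answering", model="subhasisj/ar-kd-XLM-minilmv2-32")

result = qa(
    question="ما هي عاصمة فرنسا؟",                 # "What is the capital of France?"
    context="باريس هي عاصمة فرنسا وأكبر مدنها.",   # "Paris is the capital and largest city of France."
)
print(result["answer"])
```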
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ThoDum/DQN-LunarLander-v2 | ThoDum | 2022-05-16T16:27:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T16:26:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -123.02 +/- 62.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
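Until then, a minimal loading and evaluation sketch (assuming `stable-baselines3` and `huggingface_sb3` are installed; the archive filename inside the repository is an assumption and may differ) could look like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption; check the repo's files)
checkpoint = load_from_hub(repo_id="ThoDum/DQN-LunarLander-v2", filename="DQN-LunarLander-v2.zip")
model = DQN.load(checkpoint)

# Evaluate the agent over a few episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```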
|
nouman10/robertabase-finetuned-claim-ltp-full-prompt_ | nouman10 | 2022-05-16T16:23:35Z | 3 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-16T16:09:03Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: nouman10/robertabase-finetuned-claim-ltp-full-prompt_
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nouman10/robertabase-finetuned-claim-ltp-full-prompt_
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0334
- Validation Loss: 0.0237
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -427, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1997 | 0.0443 | 0 |
| 0.0334 | 0.0237 | 1 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
kingabzpro/Moonman-Lunar-Landing-v2 | kingabzpro | 2022-05-16T16:07:26Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-11T09:44:34Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.93 +/- 24.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="kingabzpro/Moonman-Lunar-Landing-v2", filename="Moonman-Lunar-Landing-v2.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
action, _state = model.predict(obs)
obs, reward, done, info = eval_env.step(action)
eval_env.render()
if done:
obs = eval_env.reset()
eval_env.close()
```
|
vitouphy/wav2vec2-xls-r-1b-khmer | vitouphy | 2022-05-16T16:04:46Z | 28 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"openslr",
"robust-speech-event",
"km",
"generated_from_trainer",
"hf-asr-leaderboard",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- km
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr
- robust-speech-event
- km
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- openslr
model-index:
- name: wav2vec2-xls-r-1b-km
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR km
type: openslr
args: km
metrics:
- name: Test WER
type: wer
value: 32.13
- name: Test CER
type: cer
value: 9.35
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: km
metrics:
- name: Test WER
type: wer
value: 32.13
- name: Test CER
type: cer
value: 9.35
---
# wav2vec2-xls-r-1b-km
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the openslr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4239
- Wer: 0.4221
# Evaluation results on OpenSLR "test" (self-split 10%) (Running ./eval.py):
- WER: 0.4490281634272114
- CER: 0.12198285179047481
# Evaluation results on OpenSLR "test" with LM ngram (self-split 10%) (Running ./eval.py):
- WER: 0.32130107100357
- CER: 0.09345053678218891
# Note
- Since this dataset is small (4 hours of voice recordings), we decided not to train for too long, to avoid overfitting and under-generalization.
- This model performs worse than its 300M-parameter variant; possibly we did not explore the hyper-parameters enough.
## Installation
Install the following libraries on top of HuggingFace Transformers for the supports of language model.
```
pip install pyctcdecode
pip install https://github.com/kpu/kenlm/archive/master.zip
```
## Usage
**Approach 1:** Use Hugging Face's pipeline, which covers everything end to end, from raw audio input to text output.
```python
from transformers import pipeline
# Load the model
pipe = pipeline(model="vitouphy/wav2vec2-xls-r-1b-khmer")
# Process raw audio
output = pipe("sound_file.wav", chunk_length_s=10, stride_length_s=(4, 2))
```
**Approach 2:** A more customized way to run inference.
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-1b-khmer")
model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-1b-khmer")
# Read and process the input
speech_array, sampling_rate = librosa.load("sound_file.wav", sr=16_000)
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, axis=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
print(predicted_sentences)
```
## Intended uses & limitations
The data used for this model is only around 4 hours of recordings.
- We split it 80/10/10, so the training set is about 3.2 hours, which is very small.
- Even so, its performance is not bad, which is quite interesting for such a small dataset. You can try it out.
- Its limitations are:
- Rare characters, e.g. ឬស្សី ឪឡឹក
- Speech needs to be clear and articulate.
- More data covering more vocabulary and characters may help improve this system.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 75
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5671 | 5.47 | 400 | 12.0218 | 1.0 |
| 3.5159 | 10.95 | 800 | 10.6337 | 1.0 |
| 2.4543 | 16.43 | 1200 | 1.8256 | 0.9839 |
| 1.9437 | 21.91 | 1600 | 1.1237 | 0.9173 |
| 1.696 | 27.39 | 2000 | 0.8246 | 0.7700 |
| 1.5342 | 32.87 | 2400 | 0.6433 | 0.6594 |
| 1.4509 | 38.35 | 2800 | 0.5500 | 0.5787 |
| 1.3478 | 43.83 | 3200 | 0.5070 | 0.4907 |
| 1.3096 | 49.31 | 3600 | 0.4692 | 0.4726 |
| 1.2532 | 54.79 | 4000 | 0.4448 | 0.4479 |
| 1.2291 | 60.27 | 4400 | 0.4374 | 0.4366 |
| 1.196 | 65.75 | 4800 | 0.4314 | 0.4310 |
| 1.1862 | 71.23 | 5200 | 0.4239 | 0.4221 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
APY/LunarLander | APY | 2022-05-16T15:54:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T15:53:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 220.47 +/- 44.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
omar47/wav2vec2-large-xls-r-300m-urdu | omar47 | 2022-05-16T15:20:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-29T19:05:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-urdu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set:
- Loss: 0.5285
- Wer: 0.1702
## Model description
More information needed
## Intended uses & limitations
More information needed
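As an illustrative sketch only (assuming the repository ships a processor/vocabulary and expects 16 kHz mono input; the audio path is a placeholder), CTC inference could look like this:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa
import torch

# Load processor and model (assumes the repository includes the processor files)
processor = Wav2Vec2Processor.from_pretrained("omar47/wav2vec2-large-xls-r-300m-urdu")
model = Wav2Vec2ForCTC.from_pretrained("omar47/wav2vec2-large-xls-r-300m-urdu")

# Read a 16 kHz mono recording (placeholder path)
speech, _ = librosa.load("urdu_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```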
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9618 | 0.74 | 32 | 15.0745 | 1.0 |
| 9.1928 | 1.49 | 64 | 5.9361 | 1.0 |
| 4.9307 | 2.23 | 96 | 4.2924 | 1.0 |
| 3.8917 | 2.98 | 128 | 3.5873 | 1.0 |
| 3.3867 | 3.72 | 160 | 3.2594 | 1.0 |
| 3.2107 | 4.47 | 192 | 3.1718 | 1.0 |
| 3.1395 | 5.21 | 224 | 3.1281 | 1.0 |
| 3.115 | 5.95 | 256 | 3.1238 | 1.0 |
| 3.0801 | 6.7 | 288 | 3.0674 | 1.0 |
| 2.9725 | 7.44 | 320 | 2.8277 | 1.0 |
| 2.4159 | 8.19 | 352 | 1.7186 | 0.9036 |
| 1.3377 | 8.93 | 384 | 1.0271 | 0.6433 |
| 0.8591 | 9.67 | 416 | 0.8087 | 0.5441 |
| 0.726 | 10.42 | 448 | 0.7263 | 0.4634 |
| 0.6242 | 11.16 | 480 | 0.6783 | 0.4156 |
| 0.5417 | 11.91 | 512 | 0.6611 | 0.4305 |
| 0.4784 | 12.65 | 544 | 0.6300 | 0.3926 |
| 0.4198 | 13.4 | 576 | 0.5646 | 0.3499 |
| 0.3798 | 14.14 | 608 | 0.5919 | 0.3229 |
| 0.3356 | 14.88 | 640 | 0.5715 | 0.3369 |
| 0.2954 | 15.63 | 672 | 0.5325 | 0.2728 |
| 0.264 | 16.37 | 704 | 0.5535 | 0.2689 |
| 0.2535 | 17.12 | 736 | 0.5467 | 0.2366 |
| 0.2277 | 17.86 | 768 | 0.5219 | 0.2345 |
| 0.2141 | 18.6 | 800 | 0.5314 | 0.2487 |
| 0.2036 | 19.35 | 832 | 0.5382 | 0.2236 |
| 0.2021 | 20.09 | 864 | 0.5038 | 0.1922 |
| 0.1676 | 20.84 | 896 | 0.5238 | 0.2033 |
| 0.1544 | 21.58 | 928 | 0.5069 | 0.1866 |
| 0.1512 | 22.33 | 960 | 0.5045 | 0.1965 |
| 0.1512 | 23.07 | 992 | 0.5167 | 0.1862 |
| 0.1399 | 23.81 | 1024 | 0.5236 | 0.1840 |
| 0.1291 | 24.56 | 1056 | 0.5234 | 0.1957 |
| 0.1274 | 25.3 | 1088 | 0.5348 | 0.1943 |
| 0.127 | 26.05 | 1120 | 0.4978 | 0.1719 |
| 0.1105 | 26.79 | 1152 | 0.5067 | 0.1767 |
| 0.1069 | 27.53 | 1184 | 0.5150 | 0.1758 |
| 0.1058 | 28.28 | 1216 | 0.5218 | 0.1844 |
| 0.0999 | 29.02 | 1248 | 0.5375 | 0.1852 |
| 0.0964 | 29.77 | 1280 | 0.5373 | 0.1843 |
| 0.0971 | 30.51 | 1312 | 0.5190 | 0.1776 |
| 0.0906 | 31.26 | 1344 | 0.5217 | 0.1747 |
| 0.0909 | 32.0 | 1376 | 0.5204 | 0.1778 |
| 0.0784 | 32.74 | 1408 | 0.5336 | 0.1756 |
| 0.0823 | 33.49 | 1440 | 0.5281 | 0.1699 |
| 0.0834 | 34.23 | 1472 | 0.5292 | 0.1700 |
| 0.0827 | 34.98 | 1504 | 0.5285 | 0.1702 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
espnet/kyrgyz_commonvoice_blstm | espnet | 2022-05-16T15:20:01Z | 2 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"ky",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-05-16T15:19:16Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ky
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/kyrgyz_commonvoice_blstm`
This model was trained by [Rohola Zandie](https://scholar.google.com/citations?user=xv0jIe0AAAAJ&hl=en)'s colleague dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/kyrgyz_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon May 16 11:17:33 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `716eb8f92e19708acfd08ba3bd39d40890d3a84b`
- Commit date: `Thu Apr 28 19:50:59 2022 -0400`
## asr_train_asr_rnn_tr_raw_ky_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ky|1613|11040|95.0|4.2|0.8|0.5|5.4|18.4|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ky|1613|77711|98.0|0.9|1.1|0.5|2.5|18.4|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ky|1613|56384|97.6|1.4|1.1|0.6|3.1|18.3|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn_tr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_tr_raw_ky_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ky_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_ky_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ky_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_ky_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_ky_sp/wav.scp
- speech
- sound
- - dump/raw/train_ky_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_ky/wav.scp
- speech
- sound
- - dump/raw/dev_ky/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- р
- л
- н
- к
- а
- т
- и
- .
- е
- о
- м
- у
- ш
- ы
- й
- ө
- ү
- п
- з
- с
- да
- б
- ч
- та
- ▁жа
- ','
- ке
- ▁ка
- ▁б
- ▁ал
- ар
- ын
- ▁э
- те
- ган
- де
- уу
- ты
- ды
- ▁ба
- ин
- ▁а
- га
- ▁А
- ▁ж
- ма
- ди
- үн
- г
- я
- ла
- ат
- гы
- ▁с
- ба
- ▁ко
- кан
- ң
- ти
- ун
- ▁Ал
- ▁К
- ▁кө
- ду
- сы
- кы
- чы
- ги
- на
- ▁Б
- в
- до
- чу
- —
- ▁ай
- ары
- ▁бир
- ▁эле
- ып
- дө
- үү
- ▁М
- ▁да
- ▁же
- ында
- ча
- гө
- оо
- дын
- дар
- ▁бер
- э
- ген
- ▁менен
- го
- ▁са
- тар
- Ж
- О
- С
- Т
- ю
- Э
- ж
- '?'
- д
- У
- ф
- И
- Д
- К
- х
- П
- Н
- ь
- Ш
- М
- ц
- Ч
- Р
- З
- Ф
- Г
- '!'
- Л
- А
- Б
- Е
- ӊ
- '1'
- '8'
- ⁄
- Ю
- Й
- Ц
- щ
- Я
- Ы
- ъ
- ё
- –
- Х
- В
- Ү
- Ө
- ”
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/ky_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ky_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huawei-noah/AutoTinyBERT-KD-S4 | huawei-noah | 2022-05-16T15:14:43Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T15:09:51Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters of efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements.
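The card ships no usage snippet; as a minimal sketch of the idea above, the searched hyper-parameters are the ones exposed by a standard `transformers` `BertConfig` (the "searched" numbers below are illustrative placeholders, not this checkpoint's actual values).
```python
# Sketch only: BERT-base keeps hidden_size at a quarter of intermediate_size
# (768 vs. 3072); a NAS-found configuration is free to break that ratio.
# The "searched" numbers below are illustrative, not this checkpoint's values.
from transformers import BertConfig

bert_base = BertConfig()  # defaults: hidden_size=768, intermediate_size=3072
print(bert_base.hidden_size / bert_base.intermediate_size)  # 0.25

searched = BertConfig(hidden_size=384, intermediate_size=1536,
                      num_hidden_layers=4, num_attention_heads=6)
print(searched.hidden_size / searched.intermediate_size)
```
|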
bartelds/wav2vec2-large-ft-cgn-3hrs | bartelds | 2022-05-16T14:59:59Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"nl",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-16T14:38:39Z | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Large-ft-CGN-3hrs
An English Wav2Vec2 model fine-tuned on Dutch. This model was created by fine-tuning the [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on 3 hours of Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
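A minimal transcription sketch (assumptions: the repository ships the processor/tokenizer files the `automatic-speech-recognition` pipeline needs, and `dutch_sample.wav` is a placeholder path):
```python
# Minimal sketch; "dutch_sample.wav" is a placeholder path, and the pipeline call
# assumes the repo includes the processor/tokenizer files it needs.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bartelds/wav2vec2-large-ft-cgn-3hrs")
print(asr("dutch_sample.wav")["text"])
```
|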
huawei-noah/AutoTinyBERT-S4 | huawei-noah | 2022-05-16T14:57:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T14:54:54Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters of efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
huawei-noah/AutoTinyBERT-S3 | huawei-noah | 2022-05-16T14:56:13Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T14:52:15Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters of efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
Vvek/ppo-LunarLander-v2 | Vvek | 2022-05-16T14:34:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T14:33:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 146.29 +/- 113.97
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
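Until the authors add their own snippet, here is a minimal loading sketch (the checkpoint filename inside the repo is an assumption, not confirmed by the card):
```python
# Sketch only: "ppo-LunarLander-v2.zip" is an assumed filename for the stored checkpoint.
# Uses the classic gym API (pre-0.26), matching the SB3 versions of this era.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("Vvek/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```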
|
David-Tedesco/MLops | David-Tedesco | 2022-05-16T14:17:19Z | 0 | 0 | null | [
"region:us"
] | null | 2022-05-12T19:25:54Z | This project was made as part of the Epitech Tek4 curriculum for the MLOps project.
---
title: IOT
emoji: 🐢
colorFrom: pink
colorTo: pink
sdk: streamlit
sdk_version: 1.2.0
app_file: model.py
pinned: false
---
Check out the configuration reference at
https://huggingface.co/docs/hub/spaces#reference
|
jespern/TEST2ppo-LunarLander-v2 | jespern | 2022-05-16T14:11:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T14:11:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 236.68 +/- 25.22
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
lirondos/anglicisms-spanish-mbert | lirondos | 2022-05-16T14:03:29Z | 36 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"anglicisms",
"loanwords",
"borrowing",
"codeswitching",
"arxiv:2203.16169",
"es",
"dataset:coalas",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-28T13:26:51Z | ---
language:
- es
license: cc-by-4.0
tags:
- anglicisms # Example: audio
- loanwords # Example: automatic-speech-recognition
- borrowing # Example: speech
- codeswitching # Example to specify a library: allennlp
- arxiv:2203.16169
datasets:
- coalas # Example: common_voice. Use dataset id from https://hf.co/datasets
widget:
- text: "Las fake news sobre la celebrity se reprodujeron por los 'mass media' en prime time."
- text: "Me gusta el cine noir y el anime."
- text: "Benching, estar en el banquillo de tu 'crush' mientras otro juega de titular."
- text: "Recetas de noviembre para el batch cooking."
- text: "Utilizaron técnicas de machine learning, big data o blockchain."
---
# anglicisms-spanish-mbert
This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) in Spanish newswire. The model labels words of foreign origin (mainly from English) used in the Spanish language, such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*.
The model is a fine-tuned version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings.
The model considers two labels:
* ``ENG``: For English lexical borrowings (*smartphone*, *online*, *podcast*)
* ``OTHER``: For lexical borrowings from any other language (*boutique*, *anime*, *umami*)
The model uses BIO encoding to account for multitoken borrowings.
**⚠ This is not the best-performing model for this task.** For the best-performing model (F1=85.76) see [Flair model](https://huggingface.co/lirondos/anglicisms-spanish-flair-cs).
## Metrics (on the test set)
The following table summarizes the results obtained on the test set of the [COALAS](https://github.com/lirondos/coalas/) corpus.
| LABEL | Precision | Recall | F1 |
|:-------|-----:|-----:|---------:|
| ALL | 88.09 | 79.46 | 83.55 |
| ENG | 88.44 | 82.16 | 85.19 |
| OTHER | 37.5 | 6.52 | 11.11 |
## Dataset
This model was trained on [COALAS](https://github.com/lirondos/coalas/), a corpus of Spanish newswire annotated with unassimilated lexical borrowings. The corpus contains 370,000 tokens and covers a variety of written media in European Spanish. The test set was designed to be as difficult as possible: it covers sources and dates not seen in the training set, includes a high proportion of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense (20 borrowings per 1,000 tokens).
|Set | Tokens | ENG | OTHER | Unique |
|:-------|-----:|-----:|---------:|---------:|
|Training |231,126 |1,493 | 28 |380 |
|Development |82,578 |306 |49 |316|
|Test |58,997 |1,239 |46 |987|
|**Total** |372,701 |3,038 |123 |1,683 |
## More info
More information about the dataset, model experimentation and error analysis can be found in the paper: *[Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/)*.
## How to use
```python
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("lirondos/anglicisms-spanish-mbert")
model = AutoModelForTokenClassification.from_pretrained("lirondos/anglicisms-spanish-mbert")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Buscamos data scientist para proyecto de machine learning."
borrowings = nlp(example)
print(borrowings)
```
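Because the model uses BIO tags, a multitoken borrowing comes back as separate B-/I- tagged pieces. If you prefer one span per borrowing, the pipeline's `aggregation_strategy` argument (available in recent `transformers` releases) can group them; this is an added sketch, not part of the original card:
```python
# Added sketch: group B-/I- pieces of multitoken borrowings into single spans.
grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(grouped("Las fake news sobre la celebrity se reprodujeron por los mass media en prime time."))
```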
## Citation
If you use this model, please cite the following reference:
```
@inproceedings{alvarez-mellado-lignos-2022-detecting,
title = "Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling",
author = "{\'A}lvarez-Mellado, Elena and
Lignos, Constantine",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.268",
pages = "3868--3888",
abstract = "This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.",
}
```
|
cuuupid/maeve-12-6-xsum | cuuupid | 2022-05-16T13:55:59Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:xsum",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T06:26:17Z | ---
language:
- en
tags:
- text2text-generation
- pytorch
license: "gpl-3.0"
datasets:
- xsum
widget:
- text: "President Biden met with Russia's Putin over the weekend to discuss a ceasefire in Ukraine."
example_title: "Ukrainian Ceasefire"
- text: "Acme Ventures recently led a seed round to provide over $2MM in funding to Aiko Mail, an AI startup tackling email."
example_title: "VC Investment"
- text: "In a shocking move, Florida has decided to formally secede from the United States, opting to sink into the Atlantic Ocean."
example_title: "Florida secedes"
---
# Maeve - XSUM
Maeve is a language model similar in structure to BART, but specially trained using a CAT (Conditionally Adversarial Transformer).
This allows the model to generate long-form text from short inputs with a degree of control and coherence that is difficult to achieve with traditional transformers.
This specific model has been trained on the XSUM dataset, and can invert summaries into full-length news articles. Feel free to try examples on the right!
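The card does not include a usage snippet; here is a minimal sketch that assumes the checkpoint loads with the standard BART sequence-to-sequence classes (the generation settings are illustrative, not the authors' recommended values):
```python
# Minimal sketch (assumptions: standard BART-style loading works for this checkpoint;
# generation settings are illustrative, not the authors' recommended values).
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("cuuupid/maeve-12-6-xsum")
model = BartForConditionalGeneration.from_pretrained("cuuupid/maeve-12-6-xsum")

summary = "President Biden met with Russia's Putin over the weekend to discuss a ceasefire in Ukraine."
inputs = tokenizer(summary, return_tensors="pt")
article_ids = model.generate(**inputs, max_length=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(article_ids[0], skip_special_tokens=True))
```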
|
prashanth/mbart-large-cc25-ge-hi-to-en | prashanth | 2022-05-16T13:47:51Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:hindi_english_machine_translation",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T18:42:50Z | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
metrics:
- bleu
model-index:
- name: mbart-large-cc25-ge-hi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: hindi_english_machine_translation
type: hindi_english_machine_translation
args: hi-en
metrics:
- name: Bleu
type: bleu
value: 0.1823
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-ge-hi-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1000
- Bleu: 0.1823
- Gen Len: 1023.383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:--------:|
| 1.4078 | 1.0 | 135739 | 1.1000 | 0.1823 | 1023.383 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
|
Manaranjan/TEST2ppo-LunarLander-v2 | Manaranjan | 2022-05-16T13:24:37Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T12:49:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 196.09 +/- 31.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
subhasisj/zh-kd-XLM-minilmv2-4 | subhasisj | 2022-05-16T12:40:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T19:07:20Z | Multilingual MiniLMv2 fine-tuned using knowledge distillation from an XLM-RoBERTa-Base teacher model on the Chinese (ZH) language.
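A minimal question-answering sketch (assumptions: the repo ships the tokenizer files the pipeline needs; the Chinese question and context below are illustrative):
```python
# Minimal sketch; assumes the repo includes tokenizer files the pipeline can load.
from transformers import pipeline

qa = pipeline("question-answering", model="subhasisj/zh-kd-XLM-minilmv2-4")
print(qa(question="北京是哪个国家的首都?", context="北京是中华人民共和国的首都。"))
```
|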
nouman10/robertabase-finetuned-claim-ltp-full-prompt | nouman10 | 2022-05-16T11:49:28Z | 3 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-16T10:45:36Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: nouman10/robertabase-finetuned-claim-ltp-full-prompt
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nouman10/robertabase-finetuned-claim-ltp-full-prompt
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0233
- Validation Loss: 0.0231
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -425, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1965 | 0.0452 | 0 |
| 0.0321 | 0.0231 | 1 |
| 0.0232 | 0.0231 | 2 |
| 0.0232 | 0.0231 | 3 |
| 0.0233 | 0.0231 | 4 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ml6team/mbart-large-cc25-cnn-dailymail-xsum-nl | ml6team | 2022-05-16T11:41:07Z | 30 | 4 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"bart",
"summarization",
"nl",
"dataset:ml6team/cnn_dailymail_nl",
"dataset:ml6team/xsum_nl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language:
- nl
tags:
- mbart
- bart
- summarization
datasets:
- ml6team/cnn_dailymail_nl
- ml6team/xsum_nl
pipeline_tag: summarization
widget:
- text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.'
- text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.'
---
# mbart-large-cc25-cnn-dailymail-xsum-nl
## Model description
Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97)
## Intended uses & limitations
It's meant for summarizing Dutch news articles.
#### How to use
```python
import transformers
undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained(
"ml6team/mbart-large-cc25-cnn-dailymail-xsum-nl"
)
tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
summarization_pipeline = transformers.pipeline(
task="summarization",
model=undisputed_best_model,
tokenizer=tokenizer,
)
summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[
"nl_XX"
]
article = "Kan je dit even samenvatten alsjeblief." # Dutch
summarization_pipeline(
article,
do_sample=True,
top_p=0.75,
top_k=50,
min_length=50,
early_stopping=True,
truncation=True,
)[0]["summary_text"]
```
## Training data
Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl) and [this dataset](https://huggingface.co/datasets/ml6team/xsum_nl)
|
anes-saidi/aragpt2-base-finetuned-wikitext2 | anes-saidi | 2022-05-16T11:14:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-16T10:51:28Z | ---
tags:
- generated_from_trainer
model-index:
- name: aragpt2-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aragpt2-base-finetuned-wikitext2
This model is a fine-tuned version of [aubmindlab/aragpt2-base](https://huggingface.co/aubmindlab/aragpt2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 387 | 5.1841 |
| 5.9664 | 2.0 | 774 | 5.0627 |
| 5.4166 | 3.0 | 1161 | 5.0307 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.10.3
|
araffin/RecurrentPPO-CarRacing-v0_2 | araffin | 2022-05-16T10:57:49Z | 2 | 1 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T10:55:12Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- metrics:
- type: mean_reward
value: 896.87 +/- 23.31
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **RecurrentPPO** Agent playing **CarRacing-v0**
This is a trained model of a **RecurrentPPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
Using the recurrent PPO implementation from SB3 contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/pull/53
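A minimal loading/evaluation sketch in place of the TODO, using `RecurrentPPO` from `sb3_contrib` as noted above (the checkpoint filename is an assumption, not confirmed by the card):
```python
# Sketch only: the filename below is an assumed name for the stored checkpoint.
import gym
from huggingface_sb3 import load_from_hub
from sb3_contrib import RecurrentPPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("araffin/RecurrentPPO-CarRacing-v0_2", "RecurrentPPO-CarRacing-v0.zip")
model = RecurrentPPO.load(checkpoint)

env = gym.make("CarRacing-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```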
|
jsunster/layoutlmv2-finetuned-cord | jsunster | 2022-05-16T09:35:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-16T08:58:07Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.10.0+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
SreyanG-NVIDIA/bert-base-cased-finetuned-squad | SreyanG-NVIDIA | 2022-05-16T08:39:41Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-13T13:39:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0337 | 1.0 | 5546 | 1.0150 |
| 0.7546 | 2.0 | 11092 | 1.0015 |
| 0.5537 | 3.0 | 16638 | 1.0848 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nandezgarcia/roberta-base-bne-sqac-finetuned-recores | nandezgarcia | 2022-05-16T08:07:43Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-05-16T07:52:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-sqac-finetuned-recores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-sqac-finetuned-recores
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne-sqac](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4624
- Accuracy: 0.3691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5643 | 1.0 | 1047 | 1.5474 | 0.3526 |
| 0.8147 | 2.0 | 2094 | 2.6498 | 0.3719 |
| 0.1618 | 3.0 | 3141 | 3.1061 | 0.3719 |
| 0.0135 | 4.0 | 4188 | 3.4624 | 0.3691 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
withU/kogpt2-emotion-chatbot | withU | 2022-05-16T07:58:01Z | 237 | 4 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-12T05:21:44Z | # KoGPT2-emotion-chatbot
KoGPT2 on Hugging Face Transformers for psychological counseling
- [full project link](https://github.com/jiminAn/Capstone_2022)
## how to use
```
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast
model = GPT2LMHeadModel.from_pretrained("withU/kogpt2-emotion-chatbot")
tokenizer = PreTrainedTokenizerFast.from_pretrained("withU/kogpt2-emotion-chatbot")
input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt")
output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=80, num_return_sequences=4)
for generated_sequence in output_sequences:
generated_sequence = generated_sequence.tolist()
print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
## dataset finetuned on
- [wellness dataset](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006)
- [emotion corpus of conversations](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-010)
- [chatbot data](https://jeongukjae.github.io/tfds-korean/datasets/korean_chatbot_qa_data.html)
## references
- [WelllnessConversation-LanguageModel](https://github.com/nawnoes/WellnessConversation-LanguageModel)
- [KoGPT2: SKT-AI](https://github.com/SKT-AI/KoGPT2) |
madatnlp/sk-kogptv2-kormath-causal | madatnlp | 2022-05-16T07:56:43Z | 8 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-13T11:28:16Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/sk-kogptv2-kormath-causal
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/sk-kogptv2-kormath-causal
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3184
- Validation Loss: 1.4046
- Epoch: 15
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2.2999999e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7142 | 1.8683 | 0 |
| 1.6077 | 1.4417 | 1 |
| 1.2458 | 1.3161 | 2 |
| 1.0396 | 1.2704 | 3 |
| 0.8848 | 1.2818 | 4 |
| 0.7634 | 1.2579 | 5 |
| 0.6699 | 1.2724 | 6 |
| 0.5948 | 1.2718 | 7 |
| 0.5306 | 1.3300 | 8 |
| 0.4832 | 1.3377 | 9 |
| 0.4401 | 1.3038 | 10 |
| 0.4053 | 1.3622 | 11 |
| 0.3782 | 1.3577 | 12 |
| 0.3550 | 1.3696 | 13 |
| 0.3347 | 1.3682 | 14 |
| 0.3184 | 1.4046 | 15 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
hm192494/distilgpt2-finetuned-wikitext2 | hm192494 | 2022-05-16T07:42:01Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-14T22:18:41Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hm192494/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hm192494/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.1414
- Validation Loss: 5.4539
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.1414 | 5.4539 | 0 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ceggian/sbert_pt_reddit_softmax_256 | ceggian | 2022-05-16T06:52:11Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-05-16T06:48:33Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ceggian/sbert_pt_reddit_softmax_256
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ceggian/sbert_pt_reddit_softmax_256')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ceggian/sbert_pt_reddit_softmax_256')
model = AutoModel.from_pretrained('ceggian/sbert_pt_reddit_softmax_256')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ceggian/sbert_pt_reddit_softmax_256)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kompactss/JeBERT_ko_je | kompactss | 2022-05-16T06:11:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-01T15:16:08Z | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Standard Korean -> Jeju dialect
- Made by Team 3 of the Goorm NLP course (3rd cohort)!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)
## 3. Hyper Parameters
- Epochs : 10 (best at epoch 7)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) Dataset : 67.3
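A minimal usage sketch (assumptions: the checkpoint loads as a Hugging Face `EncoderDecoderModel`, the tokenizer files are readable by `AutoTokenizer`, and the saved config defines the decoder start token that `generate` needs):
```python
# Minimal sketch (assumptions: the repo loads as an EncoderDecoderModel, ships
# AutoTokenizer-readable tokenizer files, and its config sets decoder_start_token_id).
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("kompactss/JeBERT_ko_je")
model = EncoderDecoderModel.from_pretrained("kompactss/JeBERT_ko_je")

text = "안녕하세요, 만나서 반갑습니다."  # standard Korean input to translate into Jeju dialect
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```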
---
### CREDIT
- 주형준 : [email protected]
- 강가람 : [email protected]
- 고광연 : [email protected]
- 김수연 : [email protected]
- 이원경 : [email protected]
- 조성은 : [email protected] |
kompactss/JeBERT_je_ko | kompactss | 2022-05-16T06:11:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-01T15:03:18Z | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Jeju dialect -> Standard Korean
- Made by Team 3 of the Goorm NLP course (3rd cohort)!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)
## 3. Hyper Parameters
- Epochs : 10 (best at epoch 8)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) Dataset : 79.0
---
### CREDIT
- 주형준 : [email protected]
- 강가람 : [email protected]
- 고광연 : [email protected]
- 김수연 : [email protected]
- 이원경 : [email protected]
- 조성은 : [email protected] |
kompactss/JeBERT_ko_je_v2 | kompactss | 2022-05-16T06:10:50Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-02T17:30:31Z | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Standard Korean -> Jeju dialect
- Made by Team 3 of the Goorm NLP course (3rd cohort)!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)_v2
## 3. Hyper Parameters
- Epochs : 10 (best at epoch 7)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) Dataset : 67.6
---
### CREDIT
- 주형준 : [email protected]
- 강가람 : [email protected]
- 고광연 : [email protected]
- 김수연 : [email protected]
- 이원경 : [email protected]
- 조성은 : [email protected] |
Tititun/consumer_super | Tititun | 2022-05-16T04:46:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-15T15:31:47Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: consumer_super
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# consumer_super
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData | tbosse | 2022-05-16T01:21:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-15T14:21:40Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_preTrained_with_noisyData
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_preTrained_with_noisyData
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Precision: 0.9203
- Recall: 0.8703
- F1: 0.8946
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 245 | 0.0258 | 0.9172 | 0.7990 | 0.8540 | 0.9918 |
| No log | 2.0 | 490 | 0.0192 | 0.9203 | 0.8703 | 0.8946 | 0.9939 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
nttoanh/t5vi-finetuned-en-to-vi | nttoanh | 2022-05-15T22:20:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:mt_eng_vietnamese",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T17:03:36Z | ---
tags:
- generated_from_trainer
datasets:
- mt_eng_vietnamese
metrics:
- bleu
model-index:
- name: t5vi-finetuned-en-to-vi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mt_eng_vietnamese
type: mt_eng_vietnamese
args: iwslt2015-en-vi
metrics:
- name: Bleu
type: bleu
value: 13.547
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5vi-finetuned-en-to-vi
This model is a fine-tuned version of [imthanhlv/t5vi](https://huggingface.co/imthanhlv/t5vi) on the mt_eng_vietnamese dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3827
- Bleu: 13.547
- Gen Len: 17.3719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8026 | 1.0 | 6666 | 1.5907 | 10.9756 | 17.3231 |
| 1.6217 | 2.0 | 13332 | 1.4635 | 12.375 | 17.3444 |
| 1.5087 | 3.0 | 19998 | 1.4131 | 13.1828 | 17.3924 |
| 1.4446 | 4.0 | 26664 | 1.3915 | 13.5217 | 17.3617 |
| 1.4076 | 5.0 | 33330 | 1.3827 | 13.547 | 17.3719 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ali-issa/FYP_ARABIC | ali-issa | 2022-05-15T19:44:11Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-15T14:10:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabic-gpu-colab-similar-to-german-bigger-warm-up
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-arabic-gpu-colab-similar-to-german-bigger-warm-up
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6370
- Wer: 0.4146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4958 | 2.83 | 400 | 3.4822 | 1.0 |
| 3.2281 | 5.67 | 800 | 2.9404 | 1.0 |
| 2.942 | 8.51 | 1200 | 2.8690 | 1.0 |
| 2.6346 | 11.35 | 1600 | 1.5452 | 0.9994 |
| 1.3472 | 14.18 | 2000 | 0.8261 | 0.6853 |
| 0.8972 | 17.02 | 2400 | 0.6812 | 0.5737 |
| 0.6924 | 19.85 | 2800 | 0.6552 | 0.5291 |
| 0.5687 | 22.69 | 3200 | 0.6108 | 0.4909 |
| 0.4734 | 25.53 | 3600 | 0.5877 | 0.4674 |
| 0.4029 | 28.37 | 4000 | 0.6204 | 0.4662 |
| 0.3483 | 31.2 | 4400 | 0.5932 | 0.4451 |
| 0.307 | 34.04 | 4800 | 0.6445 | 0.4392 |
| 0.2722 | 36.88 | 5200 | 0.6126 | 0.4292 |
| 0.2247 | 39.71 | 5600 | 0.6370 | 0.4146 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
subhasisj/zh-TAPT-MLM-MiniLM | subhasisj | 2022-05-15T19:26:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-09T20:04:18Z | ---
tags:
- generated_from_trainer
model-index:
- name: zh-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zh-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
KhariotnovKK/luna_lender_v1 | KhariotnovKK | 2022-05-15T18:37:37Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-06T08:33:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 260.20 +/- 20.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
SebastianS/bert-finetuned-squad | SebastianS | 2022-05-15T16:19:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T14:39:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/dclblogger-loopifyyy | huggingtweets | 2022-05-15T15:32:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-15T15:28:31Z | ---
language: en
thumbnail: http://www.huggingtweets.com/dclblogger-loopifyyy/1652628765621/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1472740175130230784/L7Xcs7wJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480550067564163078/D90SnyUa_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matty & Loopify 🧙♂️</div>
<div style="text-align: center; font-size: 14px;">@dclblogger-loopifyyy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matty & Loopify 🧙♂️.
| Data | Matty | Loopify 🧙♂️ |
| --- | --- | --- |
| Tweets downloaded | 3250 | 3250 |
| Retweets | 62 | 117 |
| Short tweets | 494 | 867 |
| Tweets kept | 2694 | 2266 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pq5pxck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dclblogger-loopifyyy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dclblogger-loopifyyy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
traxes/repos | traxes | 2022-05-15T15:03:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T15:03:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -130.18 +/- 34.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
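A minimal sketch of one common way to load and evaluate a Stable-Baselines3 checkpoint from the Hub; the `huggingface_sb3` helper and the archive filename below are assumptions, so check the repository's files first.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# filename is an assumption; use the .zip actually stored in this repo
checkpoint = load_from_hub(repo_id="traxes/repos", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```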
|
umbertospazio/1500000_PPO-LunarLander-v2 | umbertospazio | 2022-05-15T15:03:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T15:02:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 283.46 +/- 17.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
umbertospazio/Test_PPO-LunarLander-v2 | umbertospazio | 2022-05-15T15:02:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T14:28:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 278.51 +/- 23.01
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean | Zohar | 2022-05-15T14:12:08Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-15T11:47:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-negative-restaurant-reviews-clean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-negative-restaurant-reviews-clean
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5187
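A minimal generation sketch, assuming the checkpoint works with the standard `text-generation` pipeline (the prompt is only illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean",
)
# hypothetical prompt; the model was tuned on negative restaurant reviews
print(generator("The service was", max_length=50, num_return_sequences=2))
```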
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6841 | 1.0 | 3105 | 3.5793 |
| 3.6184 | 2.0 | 6210 | 3.5313 |
| 3.5943 | 3.0 | 9315 | 3.5187 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
|
WhatIsThisSignupForm/ppo-LunarLander-v2 | WhatIsThisSignupForm | 2022-05-15T12:50:36Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T12:46:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 174.04 +/- 57.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
dragonSwing/audify | dragonSwing | 2022-05-15T12:49:48Z | 14 | 0 | transformers | [
"transformers",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-17T06:58:48Z | ---
license: cc-by-nc-sa-4.0
---
|
ipvikas/rare-puppers | ipvikas | 2022-05-15T12:47:13Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-05-01T16:51:17Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9552238583564758
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
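A minimal inference sketch, assuming the checkpoint works with the standard `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ipvikas/rare-puppers")
# replace with a local file path or URL to a photo of a corgi, samoyed, or shiba inu
predictions = classifier("path/to/dog_photo.jpg")
print(predictions)
```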
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
FollishBoi/dqn-MountainCar-v0-try2 | FollishBoi | 2022-05-15T12:00:52Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T12:00:30Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -100.70 +/- 7.47
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
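A minimal sketch for loading and evaluating the checkpoint with Stable-Baselines3; the `huggingface_sb3` helper and the archive filename are assumptions, so check the repository's files first.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# filename is an assumption; use the .zip actually stored in this repo
checkpoint = load_from_hub(repo_id="FollishBoi/dqn-MountainCar-v0-try2", filename="dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)

env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```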
|
FollishBoi/dqn-MountainCar-v0-try1 | FollishBoi | 2022-05-15T12:00:10Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T11:59:45Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -102.50 +/- 5.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
anas-awadalla/splinter-base-finetuned-squad | anas-awadalla | 2022-05-15T11:49:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T10:55:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-base-finetuned-squad
This model is a fine-tuned version of [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mubikan/xlm-roberta-base-finetuned-panx-de | mubikan | 2022-05-15T11:48:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-14T15:57:44Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8588964027959312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1383
- F1: 0.8589
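A minimal inference sketch, assuming the checkpoint works with the standard `token-classification` pipeline (the German sentence is only illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mubikan/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel besuchte das Siemens-Werk in München."))
```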
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2631 | 1.0 | 525 | 0.1596 | 0.8218 |
| 0.1296 | 2.0 | 1050 | 0.1353 | 0.8479 |
| 0.0821 | 3.0 | 1575 | 0.1383 | 0.8589 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
harikp20/hkp24 | harikp20 | 2022-05-15T11:34:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T08:30:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: hkp24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hkp24
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2249 | 1.0 | 5533 | 1.1675 |
| 0.961 | 2.0 | 11066 | 1.1376 |
| 0.7581 | 3.0 | 16599 | 1.1619 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
unfinity/PPO-LunarLander-v2 | unfinity | 2022-05-15T10:52:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T10:33:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 262.60 +/- 17.02
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
anas-awadalla/splinter-large-finetuned-squad | anas-awadalla | 2022-05-15T10:51:43Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T08:20:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-finetuned-squad
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ZaynSu99/weibo_senti_cls | ZaynSu99 | 2022-05-15T10:46:21Z | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | 2022-05-15T10:31:02Z | ---
license: afl-3.0
---
This model is for Weibo comment sentiment analysis. |
nandezgarcia/roberta-base-bne-finetuned-recores | nandezgarcia | 2022-05-15T10:24:41Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-05-15T07:43:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-recores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-recores
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1113
- Accuracy: 0.4601
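A minimal inference sketch, assuming the checkpoint loads with `AutoModelForMultipleChoice`; the passage, question, and options below are placeholders for a real item:
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "nandezgarcia/roberta-base-bne-finetuned-recores"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

passage = "..."    # reading-comprehension passage (Spanish)
question = "..."   # question about the passage
options = ["opción 1", "opción 2", "opción 3", "opción 4", "opción 5"]

# encode the same (passage + question) against each candidate answer
encoding = tokenizer(
    [f"{passage} {question}"] * len(options),
    options,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
batch = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits
print("predicted option:", options[logits.argmax(dim=-1).item()])
```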
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5294 | 1.0 | 1047 | 1.4094 | 0.4242 |
| 0.6886 | 2.0 | 2094 | 2.1629 | 0.4545 |
| 0.0779 | 3.0 | 3141 | 2.3083 | 0.4545 |
| 0.0103 | 4.0 | 4188 | 3.0327 | 0.4628 |
| 0.0019 | 5.0 | 5235 | 3.1113 | 0.4601 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
atsanda/ppo-LunarLander-v2 | atsanda | 2022-05-15T09:28:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T09:27:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 241.67 +/- 9.99
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Metformin/BART_medFineTune | Metformin | 2022-05-15T09:11:06Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T05:39:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Metformin/BART_medFineTune
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Metformin/BART_medFineTune
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7982
- Validation Loss: 0.9953
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 7820, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1563 | 1.3468 | 0 |
| 1.4157 | 1.2090 | 1 |
| 1.2579 | 1.1387 | 2 |
| 1.1819 | 1.0888 | 3 |
| 1.1438 | 1.0848 | 4 |
| 1.0629 | 1.0512 | 5 |
| 1.0163 | 1.0454 | 6 |
| 0.9801 | 1.0248 | 7 |
| 0.9530 | 1.0171 | 8 |
| 0.9262 | 1.0108 | 9 |
| 0.9124 | 1.0116 | 10 |
| 0.8853 | 1.0043 | 11 |
| 0.8658 | 1.0023 | 12 |
| 0.8511 | 0.9987 | 13 |
| 0.8394 | 0.9988 | 14 |
| 0.8298 | 0.9994 | 15 |
| 0.8175 | 0.9985 | 16 |
| 0.8105 | 0.9936 | 17 |
| 0.8033 | 0.9974 | 18 |
| 0.8012 | 0.9948 | 19 |
| 0.7997 | 0.9948 | 20 |
| 0.7970 | 0.9957 | 21 |
| 0.7956 | 0.9958 | 22 |
| 0.8002 | 0.9954 | 23 |
| 0.7951 | 0.9957 | 24 |
| 0.7994 | 0.9948 | 25 |
| 0.7964 | 0.9958 | 26 |
| 0.7948 | 0.9957 | 27 |
| 0.7979 | 0.9956 | 28 |
| 0.7982 | 0.9953 | 29 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.3
- Datasets 2.0.0
- Tokenizers 0.12.1
|
esh/ppo-LunarLander-v2 | esh | 2022-05-15T09:01:54Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-09T16:40:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.69 +/- 23.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
pujaburman30/autotrain-hi_ner_xlmr-869827677 | pujaburman30 | 2022-05-15T09:00:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain",
"unk",
"dataset:pujaburman30/autotrain-data-hi_ner_xlmr",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-15T08:56:46Z | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pujaburman30/autotrain-data-hi_ner_xlmr
co2_eq_emissions: 4.365496441173981
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 869827677
- CO2 Emissions (in grams): 4.365496441173981
## Validation Metrics
- Loss: 0.894961416721344
- Accuracy: 0.7411180773249739
- Precision: 0.590625
- Recall: 0.5080645161290323
- F1: 0.546242774566474
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pujaburman30/autotrain-hi_ner_xlmr-869827677
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("pujaburman30/autotrain-hi_ner_xlmr-869827677", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pujaburman30/autotrain-hi_ner_xlmr-869827677", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
anas-awadalla/roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2 | anas-awadalla | 2022-05-15T07:40:11Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T05:02:33Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
meln1k/ppo-CarRacing-v0-v1 | meln1k | 2022-05-15T07:33:43Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T07:32:54Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 800.67 +/- 46.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
meln1k/ppo-CarRacing-v0 | meln1k | 2022-05-15T07:31:25Z | 11 | 2 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T07:19:11Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 840.32 +/- 21.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
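A minimal sketch for pulling the checkpoint from the Hub with Stable-Baselines3; the `huggingface_sb3` helper and the archive filename are assumptions, and reproducing the reported score also requires the observation wrappers used during training (not documented here), so only model loading is shown:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is an assumption; use the .zip actually stored in this repo
checkpoint = load_from_hub(repo_id="meln1k/ppo-CarRacing-v0", filename="ppo-CarRacing-v0.zip")
model = PPO.load(checkpoint)
print(model.policy)
```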
|
anas-awadalla/roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-0 | anas-awadalla | 2022-05-15T07:30:26Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T04:52:31Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0 | anas-awadalla | 2022-05-15T07:06:17Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T04:38:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-houlsby-few-shot-k-128-finetuned-squad-seed-0 | anas-awadalla | 2022-05-15T06:45:01Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T03:10:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-128-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 400
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ahmeddbahaa/mbart-large-50-finetuned-persian | ahmeddbahaa | 2022-05-15T04:01:56Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"summarization",
"persian",
"MBart50",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-05-14T13:40:15Z | ---
tags:
- summarization
- persian
- MBart50
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mbart-large-50-finetuned-persian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-persian
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1932
- Rouge-1: 26.11
- Rouge-2: 8.11
- Rouge-l: 21.09
- Gen Len: 37.29
- Bertscore: 71.08
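A minimal inference sketch, assuming the checkpoint works with the standard `summarization` pipeline; depending on how the tokenizer was exported, the mBART source language may also need to be set (e.g. `fa_IR`):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mbart-large-50-finetuned-persian")
article = "..."  # replace with a Persian news article
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```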
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 5.5612 | 1.0 | 1476 | 4.5015 | 17.07 | 3.14 | 13.54 | 47.49 | 66.83 |
| 4.3049 | 2.0 | 2952 | 4.1055 | 22.63 | 5.89 | 18.03 | 40.43 | 69.23 |
| 3.8154 | 3.0 | 4428 | 3.9822 | 24.57 | 7.15 | 19.74 | 37.35 | 70.36 |
| 3.3401 | 4.0 | 5904 | 4.0088 | 25.84 | 7.96 | 20.95 | 37.56 | 70.83 |
| 2.8879 | 5.0 | 7380 | 4.1932 | 26.24 | 8.26 | 21.23 | 37.78 | 71.05 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
bkh6722/bach-arb | bkh6722 | 2022-05-15T02:34:26Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-07T21:50:59Z |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bach-arb
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9404
- Wer: 0.6130
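A minimal inference sketch, assuming the checkpoint works with the standard `automatic-speech-recognition` pipeline (the audio path is a placeholder; input should be 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bkh6722/bach-arb")
print(asr("sample.wav")["text"])  # replace with a path to your recording
```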
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 115
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 27.8653 | 7.14 | 100 | 3.1369 | 1.0 |
| 2.5975 | 14.28 | 200 | 2.1223 | 0.9976 |
| 1.2001 | 21.41 | 300 | 1.7455 | 0.8774 |
| 0.5938 | 28.55 | 400 | 1.8534 | 0.7981 |
| 0.4001 | 35.69 | 500 | 2.3318 | 0.7740 |
| 0.2895 | 42.83 | 600 | 2.2214 | 0.7163 |
| 0.1853 | 49.97 | 700 | 2.4841 | 0.7043 |
| 0.1318 | 57.14 | 800 | 2.9749 | 0.7139 |
| 0.1067 | 64.28 | 900 | 2.4759 | 0.7115 |
| 0.0635 | 71.41 | 1000 | 2.6708 | 0.6635 |
| 0.0515 | 78.55 | 1100 | 3.0593 | 0.6923 |
| 0.0455 | 85.69 | 1200 | 2.9637 | 0.6587 |
| 0.0329 | 92.83 | 1300 | 2.9837 | 0.6346 |
| 0.0232 | 99.97 | 1400 | 2.9361 | 0.6178 |
| 0.021 | 107.14 | 1500 | 2.9221 | 0.6010 |
| 0.0193 | 114.28 | 1600 | 2.9404 | 0.6130 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
menglingbei/t5-small-finetuned-xsum | menglingbei | 2022-05-15T02:03:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T01:59:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-4 | anas-awadalla | 2022-05-15T00:58:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T00:45:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-4 | anas-awadalla | 2022-05-14T23:53:15Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T23:32:52Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-2 | anas-awadalla | 2022-05-14T23:31:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T23:11:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-1024-finetuned-squad-seed-2
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-2 | anas-awadalla | 2022-05-14T23:31:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T23:11:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Metformin/T5model_medFineTune | Metformin | 2022-05-14T23:15:48Z | 5 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-14T10:11:14Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Metformin/T5model_medFineTune
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Metformin/T5model_medFineTune
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.0442
- Validation Loss: 6.1005
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 7820, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 42.6321 | 28.0647 | 0 |
| 31.2672 | 21.0068 | 1 |
| 24.8310 | 16.6186 | 2 |
| 20.5368 | 13.8025 | 3 |
| 17.3796 | 11.7180 | 4 |
| 15.0329 | 10.0404 | 5 |
| 13.0886 | 8.6286 | 6 |
| 11.5235 | 7.5594 | 7 |
| 10.1123 | 6.8079 | 8 |
| 9.0442 | 6.1005 | 9 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.3
- Datasets 2.0.0
- Tokenizers 0.12.1
|
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-0 | anas-awadalla | 2022-05-14T23:09:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T22:49:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-512-finetuned-squad-seed-0 | anas-awadalla | 2022-05-14T22:17:30Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T22:04:23Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-4 | anas-awadalla | 2022-05-14T22:02:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:52:47Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-2 | anas-awadalla | 2022-05-14T21:51:44Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:41:56Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-256-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-256-finetuned-squad-seed-0 | anas-awadalla | 2022-05-14T21:40:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:30:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-256-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-0 | anas-awadalla | 2022-05-14T21:40:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:30:14Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-4 | anas-awadalla | 2022-05-14T21:28:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:16:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-128-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-128-finetuned-squad-seed-4 | anas-awadalla | 2022-05-14T21:28:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:10:06Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-128-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-2 | anas-awadalla | 2022-05-14T21:14:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T21:02:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-128-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-128-finetuned-squad-seed-2
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|