modelId stringlengths 5-139 | author stringlengths 2-42 | last_modified timestamp[us, tz=UTC] date 2020-02-15 11:33:14 to 2025-06-25 06:27:54 | downloads int64 0 to 223M | likes int64 0 to 11.7k | library_name stringclasses 495 values | tags sequencelengths 1 to 4.05k | pipeline_tag stringclasses 54 values | createdAt timestamp[us, tz=UTC] date 2022-03-02 23:29:04 to 2025-06-25 06:24:22 | card stringlengths 11 to 1.01M |
---|---|---|---|---|---|---|---|---|---|
Classroom-workshop/assignment2-francesco | Classroom-workshop | 2022-06-02T15:27:06Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-02T15:27:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 311.40 +/- 10.16
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
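The section above is left as a TODO in the card. As a minimal sketch (assuming the standard `huggingface_sb3` workflow; the checkpoint filename is an assumption, so check the repository's file listing), the model can be loaded and evaluated like this:
```python
# Hedged sketch, not from the original card. The checkpoint filename below is
# an assumption; verify it against the files in the model repository.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="Classroom-workshop/assignment2-francesco",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```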
|
Classroom-workshop/assignment1-francesco | Classroom-workshop | 2022-06-02T15:25:05Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-02T15:24:33Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You can either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change "clean" to "other" for the other test set
wer = load_metric("wer")

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)

def map_to_pred(batch):
    # With batched=True, batch["audio"] is a list of audio dicts
    features = processor([audio["array"] for audio in batch["audio"]], sampling_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    gen_tokens = model.generate(input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
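As an illustrative sketch of the utterance-level CMVN step (not the actual fairseq implementation), each utterance's filter bank features are normalized to zero mean and unit variance over the time axis:
```python
# Illustrative sketch of utterance-level CMVN; not the fairseq code itself.
import torch

def utterance_cmvn(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Normalize one utterance's (num_frames, 80) log-mel features per channel."""
    mean = features.mean(dim=0, keepdim=True)  # per-channel mean over time
    std = features.std(dim=0, keepdim=True)    # per-channel std over time
    return (features - mean) / (std + eps)
```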
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
Classroom-workshop/assignment1-maria | Classroom-workshop | 2022-06-02T15:24:32Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-02T15:23:58Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You can either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change "clean" to "other" for the other test set
wer = load_metric("wer")

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)

def map_to_pred(batch):
    # With batched=True, batch["audio"] is a list of audio dicts
    features = processor([audio["array"] for audio in batch["audio"]], sampling_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    gen_tokens = model.generate(input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
KFlash/bert-finetuned-squad | KFlash | 2022-06-02T15:22:00Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-29T15:15:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
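As a hedged reconstruction (the card lists only the values), the hyperparameters above correspond roughly to the following `TrainingArguments`; the `output_dir` is an assumption, and the Adam betas/epsilon listed are the Trainer defaults, so they need no explicit arguments:
```python
# Hypothetical TrainingArguments matching the list above; output_dir is assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-finetuned-squad",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # mixed_precision_training: Native AMP
)
```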
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
lmazzon70/blurr_IMDB_distilbert_classification | lmazzon70 | 2022-06-02T14:30:46Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2022-06-02T14:30:34Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602 | YeRyeongLee | 2022-06-02T14:29:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-02T11:16:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-base-discriminator-finetuned-filtered-0602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-filtered-0602
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1685
- Accuracy: 0.9720
- F1: 0.9721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
lmazzon70/identify-my-cat | lmazzon70 | 2022-06-02T14:24:41Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2022-06-02T14:24:29Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
huggingtweets/vborghesani | huggingtweets | 2022-06-02T14:00:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-02T13:54:30Z | ---
language: en
thumbnail: http://www.huggingtweets.com/vborghesani/1654178225151/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1279408626877304833/28JtkdiE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Valentina Borghesani</div>
<div style="text-align: center; font-size: 14px;">@vborghesani</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Valentina Borghesani.
| Data | Valentina Borghesani |
| --- | --- |
| Tweets downloaded | 1024 |
| Retweets | 140 |
| Short tweets | 23 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21epnhoj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vborghesani's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vf22msq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vf22msq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vborghesani')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/caballerogaudes | huggingtweets | 2022-06-02T13:25:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-02T13:23:37Z | ---
language: en
thumbnail: http://www.huggingtweets.com/caballerogaudes/1654176335515/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1011998779061559297/5gOeFvds_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CesarCaballeroGaudes</div>
<div style="text-align: center; font-size: 14px;">@caballerogaudes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CesarCaballeroGaudes.
| Data | CesarCaballeroGaudes |
| --- | --- |
| Tweets downloaded | 1724 |
| Retweets | 808 |
| Short tweets | 36 |
| Tweets kept | 880 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d76b6yf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @caballerogaudes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i6nt6oo6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i6nt6oo6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/caballerogaudes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/willsavino | huggingtweets | 2022-06-02T13:06:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-02T13:06:04Z | ---
language: en
thumbnail: http://www.huggingtweets.com/willsavino/1654175184979/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1078115982768525317/wk6NTSE0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Will Savino</div>
<div style="text-align: center; font-size: 14px;">@willsavino</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Will Savino.
| Data | Will Savino |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 355 |
| Short tweets | 244 |
| Tweets kept | 2630 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2nhwww0u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @willsavino's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3k5ueoap) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3k5ueoap/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/willsavino')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
yannis95/bert-finetuned-ner | yannis95 | 2022-06-02T12:35:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T06:57:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.926145730300033
- name: Recall
type: recall
value: 0.9454729047458769
- name: F1
type: f1
value: 0.935709526982012
- name: Accuracy
type: accuracy
value: 0.9851209748631307
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0665
- Precision: 0.9261
- Recall: 0.9455
- F1: 0.9357
- Accuracy: 0.9851
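As a hedged usage sketch (not part of the original card), the checkpoint can be queried through the `token-classification` pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yannis95/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```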
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0650 | 0.9197 | 0.9367 | 0.9281 | 0.9830 |
| 0.0407 | 2.0 | 3512 | 0.0621 | 0.9225 | 0.9438 | 0.9330 | 0.9848 |
| 0.0195 | 3.0 | 5268 | 0.0665 | 0.9261 | 0.9455 | 0.9357 | 0.9851 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios-v1 | tclong | 2022-06-02T11:33:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-31T14:48:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vios-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-v1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6352
- Wer: 0.5161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.7944 | 3.98 | 1000 | 1.7427 | 1.0387 |
| 0.7833 | 7.97 | 2000 | 0.4026 | 0.4364 |
| 0.4352 | 11.95 | 3000 | 0.3967 | 0.4042 |
| 0.4988 | 15.94 | 4000 | 0.5446 | 0.4632 |
| 0.7822 | 19.92 | 5000 | 0.6563 | 0.5491 |
| 0.8496 | 23.9 | 6000 | 0.5828 | 0.5045 |
| 0.8072 | 27.89 | 7000 | 0.6318 | 0.5109 |
| 0.8336 | 31.87 | 8000 | 0.6352 | 0.5161 |
| 0.8311 | 35.86 | 9000 | 0.6352 | 0.5161 |
| 0.839 | 39.84 | 10000 | 0.6352 | 0.5161 |
| 0.8297 | 43.82 | 11000 | 0.6352 | 0.5161 |
| 0.8288 | 47.81 | 12000 | 0.6352 | 0.5161 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
chrisvinsen/wav2vec2-final-1-lm-3 | chrisvinsen | 2022-06-02T11:11:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-02T02:20:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-19
WER 0.283 (without a language model); WER 0.126 when decoding with a 4-gram language model.
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6305
- Wer: 0.4499
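A hedged usage sketch (not from the original card): the checkpoint can be run through the `automatic-speech-recognition` pipeline, where `sample.wav` is a placeholder path to a local recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="chrisvinsen/wav2vec2-final-1-lm-3")
print(asr("sample.wav"))  # placeholder path to a local audio file
```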
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 |
| 0.751 | 5.48 | 800 | 0.7155 | 0.7533 |
| 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 |
| 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 |
| 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 |
| 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 |
| 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 |
| 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 |
| 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 |
| 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 |
| 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 |
| 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 |
| 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 |
| 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 |
| 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 |
| 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 |
| 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 |
| 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 |
| 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 |
| 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 |
| 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
elfray/q-FrozenLake-v1-4x4-noSlippery | elfray | 2022-06-02T10:55:16Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-02T10:55:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Hugging Face
# Deep RL course notebook; import or define them before running this snippet.
model = load_from_hub(repo_id="elfray/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)  # this repo is the no-slippery variant
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
SynamicTechnologies/CYBERT | SynamicTechnologies | 2022-06-02T09:51:10Z | 5,032 | 8 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-02T08:22:55Z | ## CYBERT
BERT model dedicated to the cyber security domain. The model has been trained on a corpus of high-quality cyber security and computer science text and is unlikely to work outside this domain.
## Model architecture
The model uses the original RoBERTa architecture, and the training corpus was tokenized with a byte-level tokenizer.
## Hardware
The model was trained on an NVIDIA GPU (driver version 510.54, as reported by `nvidia-smi`).
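As a hedged usage sketch (not part of the original card), the classifier can be queried through the `text-classification` pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="SynamicTechnologies/CYBERT")
print(classifier("A new ransomware strain encrypts files and demands payment in Bitcoin."))
```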
|
chrisvinsen/wav2vec2-19 | chrisvinsen | 2022-06-02T09:03:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-01T10:35:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-19
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6305
- Wer: 0.4499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 |
| 0.751 | 5.48 | 800 | 0.7155 | 0.7533 |
| 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 |
| 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 |
| 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 |
| 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 |
| 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 |
| 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 |
| 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 |
| 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 |
| 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 |
| 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 |
| 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 |
| 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 |
| 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 |
| 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 |
| 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 |
| 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 |
| 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 |
| 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 |
| 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
dsghrg/bert-finetuned-ner | dsghrg | 2022-06-02T08:18:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T08:00:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.933895223929929
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9423830567831235
- name: Accuracy
type: accuracy
value: 0.9863572143403779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0646
- Precision: 0.9339
- Recall: 0.9510
- F1: 0.9424
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0864 | 1.0 | 1756 | 0.0659 | 0.9161 | 0.9372 | 0.9265 | 0.9830 |
| 0.0403 | 2.0 | 3512 | 0.0616 | 0.9271 | 0.9483 | 0.9376 | 0.9855 |
| 0.0199 | 3.0 | 5268 | 0.0646 | 0.9339 | 0.9510 | 0.9424 | 0.9864 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/paxt0n4 | huggingtweets | 2022-06-02T07:30:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-02T07:30:25Z | ---
language: en
thumbnail: http://www.huggingtweets.com/paxt0n4/1654155052782/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359906890340306950/s5cXHS11_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Paxton Fitzpatrick</div>
<div style="text-align: center; font-size: 14px;">@paxt0n4</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Paxton Fitzpatrick.
| Data | Paxton Fitzpatrick |
| --- | --- |
| Tweets downloaded | 2551 |
| Retweets | 1177 |
| Short tweets | 326 |
| Tweets kept | 1048 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1x9k9uk2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @paxt0n4's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34fd5zca) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34fd5zca/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/paxt0n4')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kabelomalapane/En-Tn | kabelomalapane | 2022-06-02T07:03:01Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-06-01T11:35:03Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Tn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Tn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tn](https://huggingface.co/Helsinki-NLP/opus-mt-en-tn) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6174
- Bleu: 32.2889
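A hedged usage sketch (not part of the original card): the checkpoint can be queried through the `translation` pipeline, with a placeholder input sentence:
```python
from transformers import pipeline

translator = pipeline("translation", model="kabelomalapane/En-Tn")
print(translator("Good morning, how are you?"))
```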
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ThePixOne/SeconBERTa1 | ThePixOne | 2022-06-02T05:51:30Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-06-02T05:46:38Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 20799 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4159.8,
"weight_decay": 0.01
}
```
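Putting the parameters above together, a hedged reconstruction of the training call (the training examples are placeholders, `{MODEL_NAME}` is the card's own placeholder, and `warmup_steps` is rounded from 4159.8):
```python
# Hedged reconstruction from the parameters above; train_examples is a placeholder.
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("{MODEL_NAME}")  # the card's placeholder name
train_examples = [InputExample(texts=["anchor sentence", "positive sentence"])]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=4160,  # rounded from 4159.8
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```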
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ShoneRan/bert-emotion | ShoneRan | 2022-06-02T05:15:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-02T04:55:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7262254187805659
- name: Recall
type: recall
value: 0.725549671319356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1670
- Precision: 0.7262
- Recall: 0.7255
- Fscore: 0.7253
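As a hedged usage sketch (not part of the original card), the emotion classifier can be queried through the `text-classification` pipeline; the example tweet is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ShoneRan/bert-emotion")
print(classifier("I can't believe they cancelled the show!"))
```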
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8561 | 1.0 | 815 | 0.7844 | 0.7575 | 0.6081 | 0.6253 |
| 0.5337 | 2.0 | 1630 | 0.9080 | 0.7567 | 0.7236 | 0.7325 |
| 0.2573 | 3.0 | 2445 | 1.1670 | 0.7262 | 0.7255 | 0.7253 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
thunninoi/wav2vec2-japanese-hiragana-vtuber | thunninoi | 2022-06-02T04:31:41Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-27T10:41:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4134
- Wer: 0.1884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 3
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 75
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4299 | 1.0 | 247 | 0.7608 | 0.4853 |
| 0.8045 | 2.0 | 494 | 0.6603 | 0.4449 |
| 0.6061 | 3.0 | 741 | 0.5527 | 0.4233 |
| 0.4372 | 4.0 | 988 | 0.6262 | 0.4029 |
| 0.3226 | 5.0 | 1235 | 0.4528 | 0.3462 |
| 0.2581 | 6.0 | 1482 | 0.4961 | 0.3226 |
| 0.2147 | 7.0 | 1729 | 0.4856 | 0.3075 |
| 0.1736 | 8.0 | 1976 | 0.4372 | 0.3063 |
| 0.1488 | 9.0 | 2223 | 0.3771 | 0.2761 |
| 0.1286 | 10.0 | 2470 | 0.4373 | 0.2590 |
| 0.1118 | 11.0 | 2717 | 0.3840 | 0.2594 |
| 0.1037 | 12.0 | 2964 | 0.4241 | 0.2590 |
| 0.0888 | 13.0 | 3211 | 0.4150 | 0.2410 |
| 0.0923 | 14.0 | 3458 | 0.3811 | 0.2524 |
| 0.0813 | 15.0 | 3705 | 0.4164 | 0.2459 |
| 0.0671 | 16.0 | 3952 | 0.3498 | 0.2288 |
| 0.0669 | 17.0 | 4199 | 0.3697 | 0.2247 |
| 0.0586 | 18.0 | 4446 | 0.3550 | 0.2251 |
| 0.0533 | 19.0 | 4693 | 0.4024 | 0.2231 |
| 0.0542 | 20.0 | 4940 | 0.4130 | 0.2121 |
| 0.0532 | 21.0 | 5187 | 0.3464 | 0.2231 |
| 0.0451 | 22.0 | 5434 | 0.3346 | 0.1966 |
| 0.0413 | 23.0 | 5681 | 0.4599 | 0.2088 |
| 0.0401 | 24.0 | 5928 | 0.4031 | 0.2162 |
| 0.0345 | 25.0 | 6175 | 0.3726 | 0.2084 |
| 0.033 | 26.0 | 6422 | 0.4619 | 0.2076 |
| 0.0366 | 27.0 | 6669 | 0.4071 | 0.2202 |
| 0.0343 | 28.0 | 6916 | 0.4114 | 0.2088 |
| 0.0319 | 29.0 | 7163 | 0.3605 | 0.2015 |
| 0.0304 | 30.0 | 7410 | 0.4097 | 0.2015 |
| 0.0253 | 31.0 | 7657 | 0.4152 | 0.1970 |
| 0.0235 | 32.0 | 7904 | 0.3829 | 0.2043 |
| 0.0255 | 33.0 | 8151 | 0.3976 | 0.2011 |
| 0.0201 | 34.0 | 8398 | 0.4247 | 0.2088 |
| 0.022 | 35.0 | 8645 | 0.3831 | 0.1945 |
| 0.0175 | 36.0 | 8892 | 0.3838 | 0.2007 |
| 0.0201 | 37.0 | 9139 | 0.4377 | 0.1986 |
| 0.0176 | 38.0 | 9386 | 0.4546 | 0.2043 |
| 0.021 | 39.0 | 9633 | 0.4341 | 0.2039 |
| 0.0191 | 40.0 | 9880 | 0.4043 | 0.1937 |
| 0.0159 | 41.0 | 10127 | 0.4098 | 0.2064 |
| 0.0148 | 42.0 | 10374 | 0.4027 | 0.1905 |
| 0.0129 | 43.0 | 10621 | 0.4104 | 0.1933 |
| 0.0123 | 44.0 | 10868 | 0.3738 | 0.1925 |
| 0.0159 | 45.0 | 11115 | 0.3946 | 0.1933 |
| 0.0091 | 46.0 | 11362 | 0.3971 | 0.1880 |
| 0.0082 | 47.0 | 11609 | 0.4042 | 0.1986 |
| 0.0108 | 48.0 | 11856 | 0.4092 | 0.1884 |
| 0.0123 | 49.0 | 12103 | 0.3674 | 0.1941 |
| 0.01 | 50.0 | 12350 | 0.3750 | 0.1876 |
| 0.0094 | 51.0 | 12597 | 0.3781 | 0.1831 |
| 0.008 | 52.0 | 12844 | 0.4051 | 0.1852 |
| 0.0079 | 53.0 | 13091 | 0.3981 | 0.1937 |
| 0.0068 | 54.0 | 13338 | 0.4425 | 0.1929 |
| 0.0061 | 55.0 | 13585 | 0.4183 | 0.1986 |
| 0.0074 | 56.0 | 13832 | 0.3502 | 0.1880 |
| 0.0071 | 57.0 | 14079 | 0.3908 | 0.1892 |
| 0.0079 | 58.0 | 14326 | 0.3908 | 0.1913 |
| 0.0042 | 59.0 | 14573 | 0.3801 | 0.1864 |
| 0.0049 | 60.0 | 14820 | 0.4065 | 0.1839 |
| 0.0063 | 61.0 | 15067 | 0.4170 | 0.1900 |
| 0.0049 | 62.0 | 15314 | 0.3903 | 0.1856 |
| 0.0031 | 63.0 | 15561 | 0.4042 | 0.1896 |
| 0.0054 | 64.0 | 15808 | 0.3890 | 0.1839 |
| 0.0061 | 65.0 | 16055 | 0.3831 | 0.1847 |
| 0.0052 | 66.0 | 16302 | 0.3898 | 0.1847 |
| 0.0032 | 67.0 | 16549 | 0.4230 | 0.1831 |
| 0.0017 | 68.0 | 16796 | 0.4241 | 0.1823 |
| 0.0022 | 69.0 | 17043 | 0.4360 | 0.1856 |
| 0.0026 | 70.0 | 17290 | 0.4233 | 0.1815 |
| 0.0028 | 71.0 | 17537 | 0.4225 | 0.1835 |
| 0.0018 | 72.0 | 17784 | 0.4163 | 0.1856 |
| 0.0034 | 73.0 | 18031 | 0.4120 | 0.1876 |
| 0.0019 | 74.0 | 18278 | 0.4129 | 0.1876 |
| 0.0023 | 75.0 | 18525 | 0.4134 | 0.1884 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
gullu72/bert-fine-tuned-rajat | gullu72 | 2022-06-02T04:22:58Z | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-02T03:50:40Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-rajat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-rajat
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1791
- Validation Loss: 0.4963
- Epoch: 2
## Model description
More information needed
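## How to use
A minimal TensorFlow inference sketch, assuming the repository id below is correct; the target labels are not documented here, so `id2label` may contain generic names.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "gullu72/bert-fine-tuned-rajat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This is a sample sentence.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])  # may print LABEL_0 / LABEL_1 if labels were not named
```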
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5119 | 0.4245 | 0 |
| 0.3015 | 0.4296 | 1 |
| 0.1791 | 0.4963 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
NlpHUST/gpt2-vietnamese | NlpHUST | 2022-06-02T04:02:44Z | 3,159 | 22 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"vi",
"vietnamese",
"lm",
"nlp",
"dataset:oscar",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-23T08:04:12Z | ---
language: vi
tags:
- vi
- vietnamese
- gpt2
- text-generation
- lm
- nlp
datasets:
- oscar
widget:
- text: "Việt Nam là quốc gia có"
---
# GPT-2
GPT-2 model pretrained on Vietnamese text using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
# How to use the model
~~~~
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('NlpHUST/gpt2-vietnamese')
model = GPT2LMHeadModel.from_pretrained('NlpHUST/gpt2-vietnamese')
text = "Việt Nam là quốc gia có"
input_ids = tokenizer.encode(text, return_tensors='pt')
max_length = 100
sample_outputs = model.generate(
    input_ids,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    max_length=max_length,
    min_length=max_length,
    top_k=40,
    num_beams=5,
    early_stopping=True,
    no_repeat_ngram_size=2,
    num_return_sequences=3,
)

for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))
    print('\n---')
~~~~
```bash
>> Generated text 1
Việt Nam là quốc gia có nền kinh tế hàng đầu thế giới về sản xuất, chế biến và tiêu thụ các sản phẩm nông sản, thủy sản. Tuy nhiên, trong những năm gần đây, nông nghiệp Việt Nam đang phải đối mặt với nhiều khó khăn, thách thức, đặc biệt là những tác động tiêu cực của biến đổi khí hậu.
Theo số liệu của Tổng cục Thống kê, tính đến cuối năm 2015, tổng diện tích gieo trồng, sản lượng lương thực, thực phẩm cả
---
>> Generated text 2
Việt Nam là quốc gia có nền kinh tế thị trường định hướng xã hội chủ nghĩa, có vai trò rất quan trọng đối với sự phát triển bền vững của đất nước. Do đó, trong quá trình đổi mới và hội nhập quốc tế, Việt Nam đã và đang phải đối mặt với không ít khó khăn, thách thức, đòi hỏi phải có những chủ trương, chính sách đúng đắn, kịp thời, phù hợp với tình hình thực tế. Để thực hiện thắng lợi mục tiêu, nhiệm vụ
---
>> Generated text 3
Việt Nam là quốc gia có nền kinh tế thị trường phát triển theo định hướng xã hội chủ nghĩa. Trong quá trình đổi mới và hội nhập quốc tế hiện nay, Việt Nam đang phải đối mặt với nhiều khó khăn, thách thức, đòi hỏi phải có những giải pháp đồng bộ, hiệu quả và phù hợp với tình hình thực tế của đất nước. Để thực hiện thắng lợi mục tiêu, nhiệm vụ mà Nghị quyết Đại hội XI của Đảng đề ra, Đảng và Nhà nước đã ban hành
---
```
# Model architecture
A 12-layer, 768-hidden-size transformer-based language model.
# Training
The model was trained on the Vietnamese OSCAR dataset (32 GB) to optimize a traditional language modelling objective on a v3-8 TPU for around 6 days. It reaches a perplexity of around 13.4 on a held-out validation set drawn from OSCAR.
### GPT-2 Finetuning
The following example fine-tunes GPT-2 on WikiText-2, using the raw (untokenized) WikiText-2 dataset.
The training script is available [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py).
```bash
python run_clm.py \
--model_name_or_path NlpHUST/gpt2-vietnamese \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
dkasti/xlm-roberta-base-finetuned-panx-all | dkasti | 2022-06-02T02:24:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T02:10:13Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1769
- F1: 0.8533
## Model description
More information needed
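## How to use
A minimal usage sketch with the token-classification pipeline; the PER/ORG/LOC entity set is assumed from the PAN-X subsets this family of models was trained on.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dkasti/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```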
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3049 | 1.0 | 835 | 0.1873 | 0.8139 |
| 0.1576 | 2.0 | 1670 | 0.1722 | 0.8403 |
| 0.1011 | 3.0 | 2505 | 0.1769 | 0.8533 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
JXL884/distilbert-base-uncased-finetuned-emotion | JXL884 | 2022-06-02T02:14:26Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-02T02:05:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
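## How to use
A minimal usage sketch. The six-class label set (sadness, joy, love, anger, fear, surprise) is assumed from the emotion dataset; the exported checkpoint may report generic label names instead.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JXL884/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```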
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-en | dkasti | 2022-06-02T02:07:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T02:05:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6885793871866295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1301 | 1.0 | 50 | 0.5666 | 0.4857 |
| 0.5143 | 2.0 | 100 | 0.4469 | 0.6449 |
| 0.3723 | 3.0 | 150 | 0.3996 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-it | dkasti | 2022-06-02T02:05:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T02:03:25Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8233360723089564
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- F1: 0.8233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8099 | 1.0 | 70 | 0.3035 | 0.7333 |
| 0.2766 | 2.0 | 140 | 0.2661 | 0.7948 |
| 0.1792 | 3.0 | 210 | 0.2388 | 0.8233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kktoto/tiny_kt_punctuator | kktoto | 2022-06-02T02:04:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T01:44:00Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_kt_punctuator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_kt_punctuator
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Precision: 0.6287
- Recall: 0.5781
- F1: 0.6023
- Accuracy: 0.9476
## Model description
More information needed
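## How to use
A minimal usage sketch, assuming the model tags each token with a punctuation label; the label inventory and input language are not documented here, so inspect the raw output before building post-processing on top of it.
```python
from transformers import pipeline

punctuator = pipeline(
    "token-classification",
    model="kktoto/tiny_kt_punctuator",
)
# Feed unpunctuated text and read the predicted tag per token
print(punctuator("hello how are you today"))
```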
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1621 | 1.0 | 5561 | 0.1508 | 0.6138 | 0.5359 | 0.5722 | 0.9450 |
| 0.1519 | 2.0 | 11122 | 0.1439 | 0.6279 | 0.5665 | 0.5956 | 0.9471 |
| 0.1496 | 3.0 | 16683 | 0.1424 | 0.6287 | 0.5781 | 0.6023 | 0.9476 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
dkasti/xlm-roberta-base-finetuned-panx-fr | dkasti | 2022-06-02T02:03:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T01:59:16Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.839946200403497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2789
- F1: 0.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.587 | 1.0 | 191 | 0.3355 | 0.7929 |
| 0.274 | 2.0 | 382 | 0.2977 | 0.8283 |
| 0.1836 | 3.0 | 573 | 0.2789 | 0.8399 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-de-fr | dkasti | 2022-06-02T01:56:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-02T01:43:38Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- F1: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2883 | 1.0 | 715 | 0.1818 | 0.8286 |
| 0.1461 | 2.0 | 1430 | 0.1539 | 0.8511 |
| 0.095 | 3.0 | 2145 | 0.1649 | 0.8555 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkasti/xlm-roberta-base-finetuned-panx-de | dkasti | 2022-06-02T00:32:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-27T07:02:10Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8615769427548178
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1401
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2605 | 1.0 | 525 | 0.1708 | 0.8198 |
| 0.1274 | 2.0 | 1050 | 0.1415 | 0.8449 |
| 0.0819 | 3.0 | 1575 | 0.1401 | 0.8616 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jiseong/mt5-small-finetuned-news-ab | jiseong | 2022-06-02T00:10:15Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-01T08:24:29Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jiseong/mt5-small-finetuned-news-ab
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jiseong/mt5-small-finetuned-news-ab
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0174
- Validation Loss: 1.7411
- Epoch: 3
## Model description
More information needed
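## How to use
A minimal TensorFlow generation sketch. The task is assumed from the model name to be news summarization (abstract generation); the expected input language and length limits are not documented here.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "jiseong/mt5-small-finetuned-news-ab"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a news article to summarize
inputs = tokenizer(article, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```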
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1124 | 2.0706 | 0 |
| 2.4090 | 1.8742 | 1 |
| 2.1379 | 1.7889 | 2 |
| 2.0174 | 1.7411 | 3 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YeRyeongLee/bert-large-uncased-finetuned-filtered-0602 | YeRyeongLee | 2022-06-01T22:57:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-01T16:28:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-finetuned-filtered-0602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-filtered-0602
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8409
- Accuracy: 0.1667
- F1: 0.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.8331 | 1.0 | 3180 | 1.8054 | 0.1667 | 0.0476 |
| 1.8158 | 2.0 | 6360 | 1.8196 | 0.1667 | 0.0476 |
| 1.8088 | 3.0 | 9540 | 1.8059 | 0.1667 | 0.0476 |
| 1.8072 | 4.0 | 12720 | 1.7996 | 0.1667 | 0.0476 |
| 1.8182 | 5.0 | 15900 | 1.7962 | 0.1667 | 0.0476 |
| 1.7993 | 6.0 | 19080 | 1.8622 | 0.1667 | 0.0476 |
| 1.7963 | 7.0 | 22260 | 1.8378 | 0.1667 | 0.0476 |
| 1.7956 | 8.0 | 25440 | 1.8419 | 0.1667 | 0.0476 |
| 1.7913 | 9.0 | 28620 | 1.8406 | 0.1667 | 0.0476 |
| 1.7948 | 10.0 | 31800 | 1.8409 | 0.1667 | 0.0476 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
meln1k/q-Taxi-v3-v1 | meln1k | 2022-06-01T22:47:23Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-01T22:47:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="meln1k/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
VanessaSchenkel/unicamp-finetuned-en-to-pt-dataset-ted | VanessaSchenkel | 2022-06-01T22:38:09Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:ted_iwlst2013",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-06-01T17:57:16Z | ---
tags:
- translation
- generated_from_trainer
datasets:
- ted_iwlst2013
metrics:
- bleu
model-index:
- name: unicamp-finetuned-en-to-pt-dataset-ted
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ted_iwlst2013
type: ted_iwlst2013
args: en-pt
metrics:
- name: Bleu
type: bleu
value: 25.65030250145235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unicamp-finetuned-en-to-pt-dataset-ted
This model is a fine-tuned version of [unicamp-dl/translation-pt-en-t5](https://huggingface.co/unicamp-dl/translation-pt-en-t5) on the ted_iwlst2013 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8861
- Bleu: 25.6503
## Model description
More information needed
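## How to use
A minimal generation sketch for English-to-Portuguese translation. The task-prefix format below is assumed from the unicamp-dl T5 base models; verify the exact prompt against the repository before relying on it.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "VanessaSchenkel/unicamp-finetuned-en-to-pt-dataset-ted"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "translate English to Portuguese: I like to eat rice."  # assumed prefix
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```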
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
chrisvinsen/xlsr-wav2vec2-final-1-lm-2 | chrisvinsen | 2022-06-01T22:29:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-27T07:02:01Z | Results on the IndoNLI dataset (train + validation + test splits):
WER: 0.216
WER with LM: 0.151 |
robinhad/ukrainian-qa | robinhad | 2022-06-01T22:08:47Z | 47 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"uk",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-01T19:28:07Z | ---
license: mit
language: uk
tags:
- generated_from_trainer
model-index:
- name: ukrainian-qa
results: []
widget:
- text: "Що відправлять для ЗСУ?"
context: "Про це повідомив міністр оборони Арвідас Анушаускас. Уряд Литви не має наміру зупинятися у військово-технічній допомозі Україні. Збройні сили отримають антидрони, тепловізори та ударний безпілотник. «Незабаром Литва передасть Україні не лише обіцяні бронетехніку, вантажівки та позашляховики, але також нову партію антидронів та тепловізорів. І, звичайно, Байрактар, який придбають на зібрані литовцями гроші», - написав глава Міноборони."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ukrainian-qa
This model is a fine-tuned version of [ukr-models/xlm-roberta-base-uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) on the [UA-SQuAD](https://github.com/fido-ai/ua-datasets/tree/main/ua_datasets/src/question_answering) dataset.
Link to training scripts - [https://github.com/robinhad/ukrainian-qa](https://github.com/robinhad/ukrainian-qa)
It achieves the following results on the evaluation set:
- Loss: 1.4778
## Model description
More information needed
## How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering
model_name = "robinhad/ukrainian-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
qa_model = pipeline("question-answering", model=model.to("cpu"), tokenizer=tokenizer)
question = "Де ти живеш?"
context = "Мене звати Сара і я живу у Лондоні"
qa_model(question = question, context = context)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4526 | 1.0 | 650 | 1.3631 |
| 1.3317 | 2.0 | 1300 | 1.2229 |
| 1.0693 | 3.0 | 1950 | 1.2184 |
| 0.6851 | 4.0 | 2600 | 1.3171 |
| 0.5594 | 5.0 | 3250 | 1.3893 |
| 0.4954 | 6.0 | 3900 | 1.4778 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kalmufti/PPO-LunarLander-v2 | kalmufti | 2022-06-01T21:03:19Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-10T16:37:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 275.34 +/- 14.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3, and huggingface_sb3)
To use this model, make sure you are running Python version 3.7.13. You can use [pyenv](https://github.com/pyenv/pyenv) to manage multiple versions of Python on your system.
### Install required packages:
```bash
pip install stable-baselines3
pip install huggingface_sb3
pip install pickle5
pip install Box2D
pip install pyglet
```
You can use this simple script as a base to evaluate and run the model:
```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
from stable_baselines3.common.evaluation import evaluate_policy
# Download the model from the huggingface hub
checkpoint = load_from_hub(
repo_id="kalmufti/PPO-LunarLander-v2",
filename="ppo-LunarLander-v2.zip",
)
# Load the policy
model = PPO.load(checkpoint)
# Create an environment
env = gym.make("LunarLander-v2")
# Optional - evaluate the agent's mean reward
mean_reward, std_reward = evaluate_policy(
model, env, render=False, n_eval_episodes=5, deterministic=True, warn=False
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent playing the environment
obs = env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
env.close()
``` |
FritzOS/TEdetection_distiBERT_NER_V2 | FritzOS | 2022-06-01T20:40:16Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-01T20:40:03Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_NER_V2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_V2
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_V2](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_V2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0032
- Validation Loss: 0.0032
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0032 | 0.0032 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/disgustingact84-kickswish-managertactical | huggingtweets | 2022-06-01T20:24:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-01T20:06:54Z | ---
language: en
thumbnail: http://www.huggingtweets.com/disgustingact84-kickswish-managertactical/1654115021712/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1530279378332041220/1ysZA-S8_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1258515252163022848/_O1bOXBQ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1360389551336865797/6RERF_Gg_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ToxicAct 🇺🇸 ⚽️ & Justin Moran & Tactical Manager</div>
<div style="text-align: center; font-size: 14px;">@disgustingact84-kickswish-managertactical</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ToxicAct 🇺🇸 ⚽️ & Justin Moran & Tactical Manager.
| Data | ToxicAct 🇺🇸 ⚽️ | Justin Moran | Tactical Manager |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3237 | 3250 |
| Retweets | 260 | 286 | 47 |
| Short tweets | 333 | 81 | 302 |
| Tweets kept | 2654 | 2870 | 2901 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rtzdst3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disgustingact84-kickswish-managertactical's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lhxffhi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lhxffhi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/disgustingact84-kickswish-managertactical')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
erickfm/t5-base-finetuned-bias | erickfm | 2022-06-01T18:28:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WNC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-01T11:30:30Z | ---
language:
- en
license: apache-2.0
datasets:
- WNC
metrics:
- accuracy
---
This model is a checkpoint of [T5-base](https://huggingface.co/t5-base) fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. The model reaches an accuracy of 0.39 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-base).
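A minimal usage sketch (assuming the input is the raw biased sentence with no task prefix, which is not confirmed by this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "erickfm/t5-base-finetuned-bias"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

sentence = "The senator made the outrageous claim that the bill would help families."
inputs = tokenizer(sentence, return_tensors="pt")
out = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # expected: a neutralized rewrite
```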
|
Abderrahim2/bert-finetuned-gender_classification | Abderrahim2 | 2022-06-01T14:39:29Z | 3 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-01T00:12:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-gender_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-gender_classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- F1: 0.9645
- Roc Auc: 0.9732
- Accuracy: 0.964
## Model description
More information needed
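## How to use
A minimal usage sketch. The ROC AUC metric above suggests a multi-label setup, so all label scores are requested; the label names themselves are not documented here.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Abderrahim2/bert-finetuned-gender_classification",
    top_k=None,  # return scores for every label
)
print(classifier("I went shopping with my daughters this weekend."))
```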
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.1679 | 1.0 | 1125 | 0.1781 | 0.928 | 0.946 | 0.927 |
| 0.1238 | 2.0 | 2250 | 0.1252 | 0.9516 | 0.9640 | 0.95 |
| 0.0863 | 3.0 | 3375 | 0.1283 | 0.9515 | 0.9637 | 0.95 |
| 0.0476 | 4.0 | 4500 | 0.1419 | 0.9565 | 0.9672 | 0.956 |
| 0.0286 | 5.0 | 5625 | 0.1428 | 0.9555 | 0.9667 | 0.954 |
| 0.0091 | 6.0 | 6750 | 0.1515 | 0.9604 | 0.9700 | 0.959 |
| 0.0157 | 7.0 | 7875 | 0.1535 | 0.9580 | 0.9682 | 0.957 |
| 0.0048 | 8.0 | 9000 | 0.1484 | 0.9645 | 0.9732 | 0.964 |
| 0.0045 | 9.0 | 10125 | 0.1769 | 0.9605 | 0.9703 | 0.96 |
| 0.0037 | 10.0 | 11250 | 0.2007 | 0.9565 | 0.9672 | 0.956 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bishmoy/q-Taxi-v3 | bishmoy | 2022-06-01T13:45:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-01T13:45:38Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="bishmoy/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
cjbarrie/masress-medcrit-camel | cjbarrie | 2022-06-01T13:23:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:cjbarrie/autotrain-data-masress-medcrit-binary-5",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-01T12:56:34Z | ---
tags: autotrain
language: unk
widget:
- text: "الكل ينتقد الرئيس على إخفاقاته"
datasets:
- cjbarrie/autotrain-data-masress-medcrit-binary-5
co2_eq_emissions: 0.01017487638098474
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 937130980
- CO2 Emissions (in grams): 0.01017487638098474
## Validation Metrics
- Loss: 0.757265031337738
- Accuracy: 0.7551020408163265
- Macro F1: 0.7202470830473576
- Micro F1: 0.7551020408163265
- Weighted F1: 0.7594301962377263
- Macro Precision: 0.718716577540107
- Micro Precision: 0.7551020408163265
- Weighted Precision: 0.7711448215649895
- Macro Recall: 0.7285714285714286
- Micro Recall: 0.7551020408163265
- Weighted Recall: 0.7551020408163265
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-masress-medcrit-binary-5-937130980
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-masress-medcrit-binary-5-937130980", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-masress-medcrit-binary-5-937130980", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
facebook/levit-128 | facebook | 2022-06-01T13:21:29Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"levit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-01T11:27:59Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# LeViT
LeViT-128 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-128')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-128')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
facebook/levit-384 | facebook | 2022-06-01T13:20:59Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"levit",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-01T11:27:30Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# LeViT
LeViT-384 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-384')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
pravesh/wav2vec2-large-xls-r-300m-Hindi-colab-v4 | pravesh | 2022-06-01T12:23:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-01T11:39:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-Hindi-colab-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Hindi-colab-v4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ibm-research/roberta-large-vira-intents | ibm-research | 2022-06-01T12:06:27Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"intent detection",
"en",
"dataset:ibm/vira-intents",
"arxiv:2205.11966",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T08:40:27Z | ---
language:
- en
tags:
- intent detection
license: "other"
datasets:
- ibm/vira-intents
metrics:
- accuracy
widget:
- text: "Should I be concerned about side effects of the vaccine if I'm breastfeeding?} & Is breastfeeding safe with the vaccine"
example_title: "Breastfeeding"
- text: "Does the vaccine prevent transmission?"
example_title: "Transmission"
- text: "Will the vaccine make me sterile or infertile? "
example_title: "Infertility"
---
## Model Description
This model is based on RoBERTa large (Liu, 2019), fine-tuned on a dataset of intent expressions available [here](https://research.ibm.com/haifa/dept/vst/debating_data.shtml) and also on the 🤗 Datasets hub [here](https://huggingface.co/datasets/ibm/vira-intents).
The model was created as part of the work described in [Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy
](https://arxiv.org/abs/2205.11966). The model is released under the Community Data License Agreement - Sharing - Version 1.0 ([link](https://cdla.dev/sharing-1-0/)). If you use this model, please cite our paper.
The official GitHub is [here](https://github.com/IBM/vira-intent-discovery). The script used for training the model is [trainer.py](https://github.com/IBM/vira-intent-discovery/blob/master/trainer.py).
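## Usage
A minimal classification sketch using one of the widget examples above; the intent label returned comes from the model's own `id2label` mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ibm-research/roberta-large-vira-intents",
)
print(classifier("Does the vaccine prevent transmission?"))
```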
## Training parameters
1. base_model = 'roberta-large'
1. learning_rate = 5e-6
1. per_device_train_batch_size = 16
1. per_device_eval_batch_size = 16
1. num_train_epochs = 15
1. load_best_model_at_end = True
1. save_total_limit = 1
1. save_strategy = 'epoch'
1. evaluation_strategy = 'epoch'
1. metric_for_best_model = 'accuracy'
1. seed = 123
## Data collator
DataCollatorWithPadding
|
jayeshgar/q-FrozenLake-v1-4x4-noSlippery | jayeshgar | 2022-06-01T11:40:35Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-01T11:40:28Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="jayeshgar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
pravesh/wav2vec2-large-xls-r-300m-hindi-v2 | pravesh | 2022-06-01T10:49:32Z | 0 | 0 | null | [
"region:us"
] | null | 2022-06-01T10:11:49Z | This is a Hindi ASR model fine-tuned from Facebook's wav2vec2-large-xls-r-300m model. |
aaatul/xlm-roberta-large-finetuned-ner | aaatul | 2022-06-01T09:06:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:hi_ner_config",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-05T06:32:26Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- hi_ner_config
model-index:
- name: xlm-roberta-large-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the hi_ner_config dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mccaffary/finetuning-sentiment-model-3000-samples-DM | mccaffary | 2022-06-01T09:01:21Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T22:26:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-DM
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8734177215189873
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-DM
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3248
- Accuracy: 0.8667
- F1: 0.8734
## Model description
More information needed
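## How to use
A minimal usage sketch; label names may come back as LABEL_0/LABEL_1 rather than NEGATIVE/POSITIVE if `id2label` was left at its defaults.
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="mccaffary/finetuning-sentiment-model-3000-samples-DM",
)
print(classifier("This film was a complete waste of two hours."))
```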
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-all | adache | 2022-06-01T08:20:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-01T07:54:01Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1782
- F1: 0.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2995 | 1.0 | 739 | 0.1891 | 0.8085 |
| 0.1552 | 2.0 | 1478 | 0.1798 | 0.8425 |
| 0.1008 | 3.0 | 2217 | 0.1782 | 0.8541 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
muhtasham/RoBERTa-tg | muhtasham | 2022-06-01T07:52:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"tg",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-31T21:06:31Z | ---
language:
- tg
widget:
- text: "Пойтахти <mask> Душанбе"
- text: "<mask> ба ин сайти шумо медароям."
- text: "Номи ман Акрам <mask>"
tags:
- generated_from_trainer
model-index:
- name: RoBERTa-tg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-tg
This model was trained on the [Tajik-Corpus](https://huggingface.co/datasets/muhtasham/tajik-corpus) dataset, which is based on the Leipzig Corpora.
## Model description
You can use the model for masked text generation or fine-tune it on a downstream task.
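A minimal fill-mask sketch, reusing one of the widget examples above as input:
```python
from transformers import pipeline

# a minimal sketch; the input sentence is taken from the widget examples above
fill_mask = pipeline("fill-mask", model="muhtasham/RoBERTa-tg")
for prediction in fill_mask("Пойтахти <mask> Душанбе"):
    print(prediction["token_str"], prediction["score"])
```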
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
ceggian/sbart_pt_reddit_softmax_32 | ceggian | 2022-06-01T07:41:57Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bart",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-06-01T07:34:31Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ceggian/sbart_pt_reddit_softmax_32
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ceggian/sbart_pt_reddit_softmax_32')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ceggian/sbart_pt_reddit_softmax_32')
model = AutoModel.from_pretrained('ceggian/sbart_pt_reddit_softmax_32')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ceggian/sbart_pt_reddit_softmax_32)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BartModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
RANG012/SENATOR | RANG012 | 2022-06-01T07:17:06Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-01T06:51:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: SENATOR
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.916
- name: F1
type: f1
value: 0.9166666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SENATOR
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2707
- Accuracy: 0.916
- F1: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-fr | adache | 2022-06-01T07:13:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-01T06:53:43Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8053736356003358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3196
- F1: 0.8054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7741 | 1.0 | 96 | 0.3784 | 0.7542 |
| 0.3235 | 2.0 | 192 | 0.3267 | 0.7947 |
| 0.2164 | 3.0 | 288 | 0.3196 | 0.8054 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-de-fr | adache | 2022-06-01T06:47:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-01T06:21:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
t-bank-ai/response-quality-classifier-tiny | t-bank-ai | 2022-06-01T06:34:56Z | 17 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"conversational",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T08:32:08Z | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
The model should be used to score the relevance and specificity of the last message in the context of a dialogue.
The labels are:
- `relevance`: whether the last message in the dialogue is relevant in the context of the full dialogue.
- `specificity`: whether the last message in the dialogue is interesting and promotes the continuation of the dialogue.
The model is pretrained on a large corpus of dialog data in an unsupervised manner: it is trained to predict whether the last response occurred in a real dialog or was pulled from some other dialog at random.
It was then fine-tuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with `max_length = 32`.
The performance of the model on the validation split (dataset will be posted soon), using the best thresholds found on validation samples:
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.51 | 0.82 | 0.74 |
| specificity | 0.54 | 0.81 | 0.8 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
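# one possible post-processing step: binarize with the validation thresholds from the table above
is_relevant, is_specific = relevance > 0.51, specificity > 0.54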
```
You can easily interact with this model in this [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers).
The work was done during an internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). |
t-bank-ai/response-quality-classifier-base | t-bank-ai | 2022-06-01T06:34:22Z | 17 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"conversational",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T10:17:12Z | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence).
The model should be used to score the relevance and specificity of the last message in the context of a dialogue.
The labels are:
- `relevance`: whether the last message in the dialogue is relevant in the context of the full dialogue.
- `specificity`: whether the last message in the dialogue is interesting and promotes the continuation of the dialogue.
The model is pretrained on a large corpus of dialog data in an unsupervised manner: it is trained to predict whether the last response occurred in a real dialog or was pulled from some other dialog at random.
It was then fine-tuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with `max_length = 32`.
The performance of the model on the validation split (dataset will be posted soon), using the best thresholds found on validation samples:
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.49 | 0.84 | 0.79 |
| specificity | 0.53 | 0.83 | 0.83 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-base')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-base')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
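# one possible post-processing step: binarize with the validation thresholds from the table above
is_relevant, is_specific = relevance > 0.49, specificity > 0.53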
```
You can easily interact with this model in this [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers).
The work was done during an internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). |
jiseong/mt5-small-finetuned-news | jiseong | 2022-06-01T06:22:12Z | 3 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-01T00:47:52Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jiseong/mt5-small-finetuned-news
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jiseong/mt5-small-finetuned-news
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1208
- Validation Loss: 0.1012
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1829 | 0.1107 | 0 |
| 0.1421 | 0.1135 | 1 |
| 0.1208 | 0.1012 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
arize-ai/distilbert_reviews_with_language_drift | arize-ai | 2022-06-01T06:15:35Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ecommerce_reviews_with_language_drift",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-01T05:46:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ecommerce_reviews_with_language_drift
metrics:
- accuracy
- f1
model-index:
- name: distilbert_reviews_with_language_drift
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ecommerce_reviews_with_language_drift
type: ecommerce_reviews_with_language_drift
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.818
- name: F1
type: f1
value: 0.8167126877417763
widget:
- text: "Poor quality of fabric and ridiculously tight at chest. It's way too short."
example_title: "Negative"
- text: "One worked perfectly, but the other one has a slight leak and we end up with water underneath the filter."
example_title: "Neutral"
- text: "I liked the price most! Nothing to dislike here!"
example_title: "Positive"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_reviews_with_language_drift
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ecommerce_reviews_with_language_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4970
- Accuracy: 0.818
- F1: 0.8167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.593 | 1.0 | 500 | 0.4723 | 0.799 | 0.7976 |
| 0.3714 | 2.0 | 1000 | 0.4679 | 0.818 | 0.8177 |
| 0.2652 | 3.0 | 1500 | 0.4970 | 0.818 | 0.8167 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adache/xlm-roberta-base-finetuned-panx-de | adache | 2022-06-01T05:55:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-27T06:39:06Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Oseias/ppo-LunarLander-v2_review | Oseias | 2022-06-01T02:26:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-01T02:25:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 254.90 +/- 26.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the saved agent from the Hub; the filename is an assumption
checkpoint = load_from_hub(repo_id="Oseias/ppo-LunarLander-v2_review", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
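A possible follow-up to evaluate the loaded agent (the environment id comes from this card's metadata; the episode count is an arbitrary choice):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# `model` comes from the loading sketch above
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```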
|
radev/distilbert-base-uncased-finetuned-emotion | radev | 2022-06-01T02:20:13Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-16T21:47:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8945
- name: F1
type: f1
value: 0.8871610121255439
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3645
- Accuracy: 0.8945
- F1: 0.8872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5816 | 0.8015 | 0.7597 |
| 0.7707 | 2.0 | 250 | 0.3645 | 0.8945 | 0.8872 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chrisvinsen/wav2vec2-16 | chrisvinsen | 2022-06-01T02:12:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-31T11:32:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-16
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1016
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.6682 | 1.37 | 200 | 3.3138 | 1.0 |
| 2.8751 | 2.74 | 400 | 2.9984 | 1.0 |
| 2.8697 | 4.11 | 600 | 3.0827 | 1.0 |
| 2.866 | 5.48 | 800 | 3.0697 | 1.0 |
| 2.8655 | 6.85 | 1000 | 3.1083 | 1.0 |
| 2.8629 | 8.22 | 1200 | 3.0888 | 1.0 |
| 2.8651 | 9.59 | 1400 | 3.2852 | 1.0 |
| 2.8601 | 10.96 | 1600 | 3.1155 | 1.0 |
| 2.8617 | 12.33 | 1800 | 3.1958 | 1.0 |
| 2.8595 | 13.7 | 2000 | 3.1070 | 1.0 |
| 2.858 | 15.07 | 2200 | 3.1483 | 1.0 |
| 2.8564 | 16.44 | 2400 | 3.0906 | 1.0 |
| 2.8561 | 17.81 | 2600 | 3.1412 | 1.0 |
| 2.8574 | 19.18 | 2800 | 3.0783 | 1.0 |
| 2.8543 | 20.55 | 3000 | 3.0624 | 1.0 |
| 2.8549 | 21.92 | 3200 | 3.0914 | 1.0 |
| 2.8556 | 23.29 | 3400 | 3.0735 | 1.0 |
| 2.8557 | 24.66 | 3600 | 3.1791 | 1.0 |
| 2.8576 | 26.03 | 3800 | 3.0645 | 1.0 |
| 2.8528 | 27.4 | 4000 | 3.1190 | 1.0 |
| 2.8551 | 28.77 | 4200 | 3.1016 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
erickfm/t5-small-finetuned-bias | erickfm | 2022-06-01T02:02:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WNC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-31T23:29:18Z | ---
language:
- en
license: apache-2.0
datasets:
- WNC
metrics:
- accuracy
---
This model is a checkpoint of [T5-small](https://huggingface.co/t5-small) fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. The model reaches an accuracy of 0.32 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-small).
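A minimal inference sketch, assuming the standard T5 seq2seq interface (the input sentence is a made-up placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# a minimal sketch, assuming the standard T5 seq2seq interface
tokenizer = AutoTokenizer.from_pretrained("erickfm/t5-small-finetuned-bias")
model = AutoModelForSeq2SeqLM.from_pretrained("erickfm/t5-small-finetuned-bias")

# placeholder input: a subjectively worded sentence to be neutralized
inputs = tokenizer("He is clearly the greatest writer of his generation.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```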
|
sanchit-gandhi/flax-wav2vec2-2-bart-large-cv9-feature-encoder | sanchit-gandhi | 2022-06-01T00:43:26Z | 3 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-29T16:50:26Z | /home/sanchitgandhi/seq2seq-speech/README.md |
skr3178/xlm-roberta-base-finetuned-panx-all | skr3178 | 2022-05-31T23:55:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T23:31:21Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1862 | 0.8114 |
| 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 |
| 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-en | skr3178 | 2022-05-31T23:31:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T23:14:17Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-it | skr3178 | 2022-05-31T23:14:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T22:57:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
skr3178/xlm-roberta-base-finetuned-panx-de-fr | skr3178 | 2022-05-31T22:37:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T22:14:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 |
| 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jppaolim/v40_NeoSmall | jppaolim | 2022-05-31T22:23:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-31T22:11:48Z | # My Story model
Arthur goes to the beach. Arthur is in the ocean. He is enjoying the water. He cannot wait for the sun to rise. He goes to the beach. It is very hot outside.
Arthur goes to the beach. Arthur is going to the beach. He is going to the beach. He is going to go swimming. He feels a breeze on his shirt. He feels very relaxed.
Arthur goes to the beach. Arthur is walking on the beach. He notices a sign for the beach club. He asks for a cab. He gets a cab to go to the beach. Arthur and his friends go to the beach together.
Arthur goes to the beach. Arthur was excited to go to the beach. He drove his car to the beach. When he got there, he was amazed at the waves. The waves had a huge sandcastle. Arthur went to the beach and enjoyed the beach.
Arthur goes to the beach. Arthur is playing in the sand with his friends. He is having a great time, and they are all laughing. They all seem to be enjoying themselves. Arthur decides he has to leave. Arthur is sad that he will not be able to go to the beach.
Arthur goes to the beach. Arthur wants to go to the beach. He decides to go to the beach. He sees a sign for the beach. He goes to the beach. Arthur is happy to go to the beach.
Arthur goes to the beach. Arthur is at the beach. He is playing with his friends. They go swimming. Arthur is caught in a water. Arthur is taken to the beach.
Arthur goes to the beach. Arthur is in the ocean. He is bored. He decides to go to the beach. He is bored for a few hours. Arthur leaves the beach.
Arthur goes to the beach. Arthur is out swimming. He is going to the beach. He goes to the beach. He goes to the beach. He goes to the beach.
Arthur goes to the beach. Arthur was at the beach with his friends. They went swimming and laid out on the sand. They found a beach they liked. They decided to go to the beach and play. They were so happy that they decided to go back to the beach.
Arthur goes to the beach. Arthur is at the beach with his family. They are going to go to the beach. Arthur is very excited. He is going to go to the beach. Arthur is happy that he went to the beach.
Arthur goes to the beach. Arthur was at the beach with his friends. They were having a great time. They all went to the beach. They had a great time. Arthur is very happy.
Arthur goes to the beach. Arthur is bored. He decides to go to the beach. He goes to the beach. He goes to the beach. He is happy that he went to the beach.
Arthur goes to the beach. Arthur is bored. He decides to go to the beach. He is very bored. He decides to go to the beach. Arthur is happy that he went to the beach.
Arthur goes to the beach. Arthur is on his way to the beach. He is going to the beach. He is going to the beach. He is going to the beach. Arthur is going to the beach.
|
wrice/wav2vec2-large-robust-ft-timit | wrice | 2022-05-31T22:17:20Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-31T16:21:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-robust-ft-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-robust-ft-timit
This model is a fine-tuned version of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2768
- Wer: 0.2321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.6175 | 1.0 | 500 | 3.3025 | 1.0 |
| 3.0746 | 2.01 | 1000 | 2.9598 | 1.0 |
| 1.967 | 3.01 | 1500 | 0.6760 | 0.5607 |
| 0.7545 | 4.02 | 2000 | 0.4500 | 0.4567 |
| 0.5415 | 5.02 | 2500 | 0.3702 | 0.3882 |
| 0.4445 | 6.02 | 3000 | 0.3421 | 0.3584 |
| 0.3601 | 7.03 | 3500 | 0.2947 | 0.3096 |
| 0.3098 | 8.03 | 4000 | 0.2740 | 0.2894 |
| 0.2606 | 9.04 | 4500 | 0.2725 | 0.2787 |
| 0.238 | 10.04 | 5000 | 0.2549 | 0.2617 |
| 0.2142 | 11.04 | 5500 | 0.2485 | 0.2530 |
| 0.1787 | 12.05 | 6000 | 0.2683 | 0.2514 |
| 0.1652 | 13.05 | 6500 | 0.2559 | 0.2476 |
| 0.1569 | 14.06 | 7000 | 0.2777 | 0.2470 |
| 0.1443 | 15.06 | 7500 | 0.2661 | 0.2431 |
| 0.1335 | 16.06 | 8000 | 0.2717 | 0.2422 |
| 0.1291 | 17.07 | 8500 | 0.2672 | 0.2428 |
| 0.1192 | 18.07 | 9000 | 0.2684 | 0.2395 |
| 0.1144 | 19.08 | 9500 | 0.2770 | 0.2411 |
| 0.1052 | 20.08 | 10000 | 0.2831 | 0.2379 |
| 0.1004 | 21.08 | 10500 | 0.2847 | 0.2375 |
| 0.1053 | 22.09 | 11000 | 0.2851 | 0.2360 |
| 0.1005 | 23.09 | 11500 | 0.2807 | 0.2361 |
| 0.0904 | 24.1 | 12000 | 0.2764 | 0.2346 |
| 0.0876 | 25.1 | 12500 | 0.2774 | 0.2325 |
| 0.0883 | 26.1 | 13000 | 0.2768 | 0.2313 |
| 0.0848 | 27.11 | 13500 | 0.2840 | 0.2307 |
| 0.0822 | 28.11 | 14000 | 0.2812 | 0.2316 |
| 0.09 | 29.12 | 14500 | 0.2768 | 0.2321 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.8.2+cu111
- Datasets 1.17.0
- Tokenizers 0.11.6
|
Simon10/my-awesome-model-3 | Simon10 | 2022-05-31T21:26:38Z | 7 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-31T21:20:01Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: my-awesome-model-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model-3
This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2061
- Validation Loss: 0.0632
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -811, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2061 | 0.0632 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.1
- Datasets 2.2.2
- Tokenizers 0.11.0
|
Dizzykong/test-charles-dickens | Dizzykong | 2022-05-31T21:22:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-31T21:10:52Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test-charles-dickens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-charles-dickens
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Dizzykong/test-recipe | Dizzykong | 2022-05-31T21:17:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-31T20:42:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: test-recipe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-recipe
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.001
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sanchit-gandhi/flax-wav2vec2-2-bart-large-tedlium-feature-encoder | sanchit-gandhi | 2022-05-31T21:06:15Z | 7 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-29T16:54:24Z | /home/sanchitgandhi/seq2seq-speech/README.md |
malra/segformer-b5-segments-warehouse1 | malra | 2022-05-31T20:54:00Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-05-31T16:02:39Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-segments-warehouse1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-segments-warehouse1
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1610
- Mean Iou: 0.6952
- Mean Accuracy: 0.8014
- Overall Accuracy: 0.9648
- Per Category Iou: [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979]
- Per Category Accuracy: [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.1656 | 1.0 | 787 | 0.1917 | 0.5943 | 0.6937 | 0.9348 | [0.0, 0.8760430595457738, 0.8113714411434076, 0.9533787339343942, 0.8499988352439646, 0.9330256290984922, 0.964368918196211, 0.6984009498117659, 0.9341093239597545, 0.288411561596369, 0.0, 0.6496866199024376, 0.4510074387900882, 0.5206343319728309, 0.6377305875444397, 0.5391733301507737, 0.1395685713288422, 0.390702947845805, 0.6999919374344916, 0.548023343373494] | [nan, 0.9502542152644661, 0.9516900451328754, 0.9788975544390225, 0.921821413759201, 0.9534230318615367, 0.9778020069070933, 0.8108538425970355, 0.970571911491369, 0.2993067645848501, 0.0, 0.7454496363566233, 0.5849840255591054, 0.5858306866277158, 0.7137540570947559, 0.6925710548100606, 0.16576498144808574, 0.4165357186026834, 0.8142326593390103, 0.6474578532983408] |
| 0.0948 | 2.0 | 1574 | 0.2058 | 0.6310 | 0.7305 | 0.9442 | [0.0, 0.904077233776714, 0.8616556242304713, 0.9604692135700761, 0.8306854004041632, 0.9459690932012119, 0.9714777936344227, 0.7463801249809481, 0.9197830038961162, 0.4759644364074744, 0.0, 0.7133768631713745, 0.4878118726699168, 0.5403469048526253, 0.6267211124010835, 0.6280780328151242, 0.11116434156063161, 0.4757211293446132, 0.7386220435315599, 0.6814722192019137] | [nan, 0.9530795697109564, 0.9481439135801821, 0.9753750826203033, 0.9328161802391284, 0.9783733696392768, 0.9831560736299451, 0.8544532947139754, 0.9700176894451403, 0.5598936405938401, 0.0, 0.8212854589792271, 0.5434504792332269, 0.5765256977221256, 0.7602586827898242, 0.745275787709383, 0.12024542420662065, 0.5128732019823522, 0.8080522939565592, 0.8363729371469241] |
| 0.0595 | 3.0 | 2361 | 0.1363 | 0.6578 | 0.7540 | 0.9494 | [0.0, 0.9109388123768081, 0.8466263269727539, 0.965583073696094, 0.8848508600101197, 0.9507919193853351, 0.9742807972055659, 0.7672266040033193, 0.9571650494933543, 0.5580972230045627, 0.0, 0.7572676505482382, 0.5338298840118263, 0.5743160573368553, 0.6964399439112182, 0.6369583059750492, 0.19255896751223853, 0.49017131449756574, 0.7563405327946686, 0.7018448645266491] | [nan, 0.9587813659877967, 0.9568298005631468, 0.9842947615263231, 0.9380059570384915, 0.9734457175747111, 0.9839202800499454, 0.863077218359317, 0.9757816512090675, 0.6272609287455287, 0.0, 0.8589569413670591, 0.5999361022364217, 0.6161844118746441, 0.7983763527021668, 0.793146442915981, 0.2242190576871256, 0.5288397085810358, 0.8216978654762351, 0.8232729860771318] |
| 0.0863 | 4.0 | 3148 | 0.1706 | 0.6597 | 0.7678 | 0.9537 | [0.0, 0.5911845175607978, 0.8922572171811833, 0.9657396689703207, 0.8726664918778465, 0.948172990516989, 0.9741643734457509, 0.7832072821045744, 0.9578631876788363, 0.5869565217391305, 0.0, 0.7602876424039574, 0.5747447162194254, 0.6642950791717092, 0.6978602093118107, 0.7122118073263809, 0.21745086578505152, 0.5091171801864137, 0.763416879968237, 0.7220314268720861] | [nan, 0.9656626144746107, 0.9588916966191391, 0.9766109980050623, 0.9234167566678667, 0.9783156758536367, 0.9891284919047324, 0.8876447135391675, 0.9773653302095363, 0.6623721946123896, 0.0, 0.8391697702425289, 0.6185942492012779, 0.6961703584876796, 0.8060121894956657, 0.8277923697200732, 0.24677155234956366, 0.5498060503499884, 0.8475353565667555, 0.8369956852453183] |
| 0.0849 | 5.0 | 3935 | 0.1529 | 0.6489 | 0.7616 | 0.9535 | [0.0, 0.34717493700692625, 0.9200786785121082, 0.9707860061715432, 0.9064316496153364, 0.9571373496125165, 0.9765647396031262, 0.7914886053951578, 0.9636858999629485, 0.5253852888123762, 0.0, 0.7668434757450091, 0.6228696113699357, 0.5646135260344276, 0.7194371537530142, 0.7276571750775304, 0.13134474327628362, 0.5398065590178835, 0.8087983436006237, 0.7371620697069805] | [nan, 0.9673995855258336, 0.9622823082917784, 0.9832096263122092, 0.9590923200613435, 0.9794833291868915, 0.9849481430590119, 0.8741570190973889, 0.9814726613968338, 0.5661042702035389, 0.0, 0.8519369313384734, 0.674888178913738, 0.5955861885708164, 0.7973710835377057, 0.8440933293815855, 0.139191177994735, 0.5807830511082053, 0.8902258318640507, 0.8387304835194164] |
| 0.0652 | 6.0 | 4722 | 0.1776 | 0.6701 | 0.7802 | 0.9598 | [0.0, 0.442020662403383, 0.9221209597093164, 0.9723970198449976, 0.9094898951877407, 0.958969887541612, 0.9774286126326331, 0.8043337900190548, 0.9641322534475246, 0.524194500874002, 0.0, 0.7732021981650511, 0.6714277552419585, 0.6791383524722951, 0.7265590222386986, 0.7252668038047013, 0.25612624095650144, 0.512317443386938, 0.8223912256195354, 0.7602526763224181] | [nan, 0.9667776521571092, 0.968306375662177, 0.9871287057126554, 0.9515142073239339, 0.9800501491032743, 0.9870913605013194, 0.8911998464531551, 0.9789458602211063, 0.5619638504637396, 0.0, 0.8429926328466184, 0.750926517571885, 0.7091730161871252, 0.8058454540303847, 0.8431735260151052, 0.2957320232987169, 0.5489159698031933, 0.8944742469145065, 0.8592366887593968] |
| 0.0516 | 7.0 | 5509 | 0.2204 | 0.6782 | 0.7854 | 0.9562 | [0.0, 0.5972965874238374, 0.9024890361234837, 0.9727685140940331, 0.915582953759141, 0.9598962357171329, 0.9798718588278901, 0.8112726586102719, 0.9047252363294271, 0.6408527982442389, 0.0, 0.7886848740988032, 0.676712646342877, 0.5672950158399087, 0.7336613818739761, 0.7298649456617311, 0.3028603088856569, 0.5060868673401364, 0.8269845785168136, 0.7471687598272396] | [nan, 0.9698273468544609, 0.9632905651879291, 0.9861640741314249, 0.9551792854314081, 0.9817079843391511, 0.9899518141518776, 0.8996100259110301, 0.9832172012468946, 0.6987812984710835, 0.0, 0.8565569379384828, 0.7460702875399361, 0.593452450290354, 0.8111955580377016, 0.848355084979611, 0.3625810998486827, 0.5422458600265925, 0.8997261507296395, 0.834927271918509] |
| 0.1051 | 8.0 | 6296 | 0.1860 | 0.6731 | 0.7789 | 0.9575 | [0.0, 0.44805540920356957, 0.9045125103512419, 0.9742941726927242, 0.9171717803896707, 0.9608739687771942, 0.9806696534895757, 0.8165927346840907, 0.9677688538979997, 0.6195552331193943, 0.0, 0.795984684169727, 0.6862710467443778, 0.573071397774824, 0.7390593444665892, 0.746059006435751, 0.2037963564144674, 0.5303406505500898, 0.8387988518436741, 0.7590468131997875] | [nan, 0.9709112878685233, 0.966379770128131, 0.9872427322752713, 0.9529925896087971, 0.9834568092767589, 0.9900317817435064, 0.8913394344939497, 0.9851288999243455, 0.6704124592447216, 0.0, 0.871338387626268, 0.7448562300319489, 0.5994265432176736, 0.8121846392929121, 0.8435414473616973, 0.2212134402918558, 0.5609595288067426, 0.8906947518475448, 0.8579244695520661] |
| 0.0619 | 9.0 | 7083 | 0.2919 | 0.6996 | 0.7903 | 0.9579 | [0.0, 0.934913158921961, 0.9053172937262943, 0.9749731654503406, 0.8705131863049136, 0.9625421596476281, 0.9801264786114002, 0.8223383305806123, 0.9066864104553713, 0.6468175775129386, 0.0, 0.7950479182280621, 0.7176821075997429, 0.5689160215594734, 0.7424713897302829, 0.7480081111150989, 0.3071719253739231, 0.5035704204000125, 0.8359422295252097, 0.7696666024282135] | [nan, 0.9682325320018036, 0.9702179964865137, 0.9871538608460199, 0.9606411126417358, 0.9816951395784177, 0.9890656141613147, 0.9035010425481796, 0.9836680314909386, 0.689949669209585, 0.0, 0.8547140781629688, 0.7850479233226837, 0.5903872774743949, 0.8138309496636962, 0.8520138583707216, 0.3614203096822337, 0.5292682658813446, 0.9065161120906329, 0.8882611983452693] |
| 0.081 | 10.0 | 7870 | 0.2470 | 0.6804 | 0.7921 | 0.9583 | [0.0, 0.4404433924045006, 0.9318621565838054, 0.9751204660574527, 0.8701648407446415, 0.9625333515302946, 0.9811772580795882, 0.8257730976318673, 0.9694596723226286, 0.6262599628453287, 0.0, 0.8035308913444122, 0.7247258740455824, 0.5731919576321138, 0.7446832704519876, 0.7540709586972932, 0.2964031339031339, 0.5176075672651548, 0.8402309249924604, 0.7699341552529259] | [nan, 0.9683524762943433, 0.9703483634609842, 0.9874040565137937, 0.9560906426120769, 0.9828287794111833, 0.9897414692905638, 0.9071739528715878, 0.9809845681174846, 0.6616061536513564, 0.0, 0.8707555296507566, 0.8066453674121405, 0.5982298533423343, 0.8269010675926151, 0.8575633386818196, 0.3450448769769707, 0.5489928903442743, 0.9145158870090407, 0.8764289844757795] |
| 0.0595 | 11.0 | 8657 | 0.1520 | 0.6754 | 0.7803 | 0.9583 | [0.0, 0.43998949915443775, 0.9316636729918347, 0.974311900634481, 0.90408659589869, 0.9621039259469353, 0.9814528086580536, 0.8173484866921386, 0.9299168519752622, 0.5981595278841879, 0.0, 0.79896542666047, 0.7130791649318979, 0.5767892232828117, 0.7434904893608313, 0.7476740572849074, 0.2818679619421856, 0.5013427236914975, 0.8417679322268942, 0.7636900967723242] | [nan, 0.9604694708457627, 0.9682111157218825, 0.9850226034689381, 0.9629913194164226, 0.9838887233262218, 0.9906282066977372, 0.8790295141463755, 0.9828138682520776, 0.6217973473457631, 0.0, 0.8472869246956067, 0.7660702875399361, 0.601589754313674, 0.8233235396482367, 0.8360910400932068, 0.3211657649814481, 0.5272243772183335, 0.8880687999399782, 0.8793425559361239] |
| 0.0607 | 12.0 | 9444 | 0.1907 | 0.6792 | 0.7814 | 0.9611 | [0.0, 0.4394265102382861, 0.9325678358934418, 0.9751503005414947, 0.9213536629526586, 0.9630218995457999, 0.9808145244188059, 0.8160516650442948, 0.9402095421968347, 0.5678403556289702, 0.0, 0.7897903639847522, 0.717973174366617, 0.6351749265433101, 0.7451406149738536, 0.7539060338307724, 0.2810049109433409, 0.5169863186167534, 0.8447414560224139, 0.7628612943763745] | [nan, 0.964392093449931, 0.9699039597844642, 0.9860071181495944, 0.9689476561441872, 0.9817555601847723, 0.9915172012546744, 0.8703445207331861, 0.9829836512368835, 0.5919660662847014, 0.0, 0.8320126171608817, 0.7695846645367412, 0.6606869598697208, 0.8177192854656857, 0.8353858575122385, 0.31786995004456603, 0.541465665967056, 0.8991915819484563, 0.8640852275254659] |
| 0.054 | 13.0 | 10231 | 0.1756 | 0.6845 | 0.7854 | 0.9633 | [0.0, 0.44063089620853896, 0.9319015227980866, 0.9747420439658205, 0.9230841377589553, 0.9626774348954341, 0.9806204202647846, 0.824089995398513, 0.9682449901582629, 0.6269069221957562, 0.0, 0.7878031759942226, 0.7230044147476434, 0.6870255399578931, 0.7273836360818303, 0.7465091396254238, 0.25750268946841265, 0.5202245077135331, 0.8455619310735664, 0.7623883906475817] | [nan, 0.9684613146338701, 0.9659761462687484, 0.985573907589379, 0.969242630837417, 0.9846717514218756, 0.9904148523034052, 0.8905935109009535, 0.9873657317056209, 0.6548320724256909, 0.0, 0.8321711888159841, 0.7743769968051119, 0.7167465941354711, 0.7672955669410517, 0.8485288256155018, 0.28777231930020936, 0.5469380130325374, 0.8955527628765427, 0.8564788043236511] |
| 0.0908 | 14.0 | 11018 | 0.1677 | 0.6922 | 0.7956 | 0.9641 | [0.0, 0.4710389646938612, 0.9277225664822271, 0.9753445134184554, 0.9250469473155007, 0.9640090632546157, 0.9817333061419466, 0.8297056239192101, 0.970059681920668, 0.647379308685926, 0.0, 0.79693329490141, 0.7458423929012165, 0.6895638439061885, 0.7486849253355593, 0.7520096317485606, 0.30687537928818764, 0.49287677819238446, 0.848826224760963, 0.7700556938025832] | [nan, 0.9666066204807101, 0.9697912533607226, 0.9863864033340946, 0.9658514745108883, 0.9826761492096202, 0.9913739259863396, 0.9020659030037601, 0.9838249561044068, 0.6815485423063531, 0.0, 0.8412997732853904, 0.8109904153354632, 0.7185046709734403, 0.8232134618653327, 0.8490091673735526, 0.35638330949567815, 0.5181697306682197, 0.9016768578609746, 0.8671989680174369] |
| 0.0584 | 15.0 | 11805 | 0.1610 | 0.6952 | 0.8014 | 0.9648 | [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979] | [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ThePixOne/SeconBERTa | ThePixOne | 2022-05-31T19:53:48Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-05-31T19:48:48Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ThePixOne/SeconBERTa
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ThePixOne/SeconBERTa')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ThePixOne/SeconBERTa')
model = AutoModel.from_pretrained('ThePixOne/SeconBERTa')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ThePixOne/SeconBERTa)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 20799 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4159.8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
eugenecamus/resnet-50-base-beans-demo | eugenecamus | 2022-05-31T17:47:56Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:beans",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-05-27T21:53:44Z | ---
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: resnet-50-base-beans-demo
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9022556390977443
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-base-beans-demo
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2188
- Accuracy: 0.9023
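For a quick inference check, the checkpoint can be loaded with the generic image-classification pipeline. This is a minimal sketch rather than an official example; the image filename is a placeholder you would replace with your own bean-leaf photo:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the standard image-classification pipeline.
classifier = pipeline("image-classification", model="eugenecamus/resnet-50-base-beans-demo")
predictions = classifier("bean_leaf.jpg")  # placeholder path to a local image
print(predictions)  # list of {"label": ..., "score": ...} dicts
```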
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5679 | 1.0 | 130 | 0.2188 | 0.9023 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
kabelomalapane/en_tn_ukuxhumana_model2 | kabelomalapane | 2022-05-31T16:59:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-05-30T12:46:13Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: en_tn_ukuxhumana_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_tn_ukuxhumana_model2
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tn](https://huggingface.co/Helsinki-NLP/opus-mt-en-tn) on the ukuxhumana dataset.
- Train data: 12,080
- Dev data: 3,000
It achieves the following results on the evaluation set after training:
- Loss: 2.6466
- Bleu: 21.8204
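A minimal usage sketch, assuming the standard translation pipeline works with this Marian checkpoint out of the box; the input sentence is illustrative:
```python
from transformers import pipeline

# English-to-Setswana translation with the fine-tuned checkpoint.
translator = pipeline("translation", model="kabelomalapane/en_tn_ukuxhumana_model2")
print(translator("How are you today?", max_length=64))  # illustrative English input
```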
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
usama98/arabic_poem_gen | usama98 | 2022-05-31T16:55:59Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-25T09:40:56Z |
---
language:
- ar
tags:
- text-generation
license: apache-2.0
datasets:
- Arabic Poem Comprehensive Dataset (APCD)
widget:
- text: "عمرو بنِ قُمَيئَة: خَليلَيَّ لا تَستَعجِلا أَن"
---
# GPTPoet: Pre-training GPT2 for Arabic Poetry Language Understanding
<img src="https://huggingface.co/usama98/arabic_poem_gen/resolve/main/6C76C5D6-A4F2-4443-AB2A-278E87B8E33C.png" width="100" align="left"/>
**GPTPoet** is an Arabic pretrained language model based on the [OpenAI GPT2 architecture](https://github.com/openai/gpt-2). We use the same GPT2-Base config. More details are available in [this Google Colab notebook](https://colab.research.google.com/drive/1kByhyhvA0JUZRKL-XCG0ZEDyAg45w8AW?usp=sharing).
To save computation time, the model was initialized with pretrained weights from another [model](https://huggingface.co/elgeish/gpt2-medium-arabic-poetry). This allowed us to fine-tune it on our specific dataset, which, to our knowledge, had not been used for an NLP task before.
This is a poem generator that creates poems in the style of a target poet. The model was trained on different poets and their respective poems; its input is the poet's name plus a short prompt, and the model strives to generate a continuation that imitates the style of that specific poet.
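A minimal generation sketch using the prompt from this card's widget; the decoding parameters are illustrative, not the authors' settings:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="usama98/arabic_poem_gen")

# Prompt format: "<poet name>: <opening words>", as in the card's widget example.
prompt = "عمرو بنِ قُمَيئَة: خَليلَيَّ لا تَستَعجِلا أَن"
print(generator(prompt, max_length=64, num_return_sequences=1))
```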
## What's New!
All models are available on the `HuggingFace` model hub under the [usama98](https://huggingface.co/usama98/) name. Checkpoints are available in PyTorch.
Our model explores a capability that goes beyond plain text generation: producing text that imitates a specific style. Our dataset contains poetry gathered from different poets; the data was fed to the model during training with the aim of teaching it how to structure Arabic poetry. The additional step here was to prepend the poet's name to each training example. This training strategy allows the model not only to learn how to write poetry, but also how the written poetry relates to that specific poet and their style.
# Dataset
The dataset consists of content scraped mainly from الموسوعة الشعرية and الديوان. After merging both, the dataset contains 1,831,770 poetic verses. Each verse is labeled with its meter, the poet who wrote it, and the age in which it was written. There are 22 meters, 3701 poets and 11 ages: Pre-Islamic, Islamic, Umayyad, Mamluk, Abbasid, Ayyubid, Ottoman, Andalusian, the era between the Umayyad and Abbasid periods, Fatimid, and finally the modern age. We are only interested in the 16 classic meters attributed to Al-Farahidi, which comprise the majority of the dataset with around 1.7M verses. It is important to note that the diacritization of the verses is not consistent: a verse can carry full or partial diacritics, or none at all.
- [APCD](https://hci-lab.github.io/LearningMetersPoems/#PCD)
# Preprocessing
It is recommended to apply our preprocessing tokenizer before training/testing on any dataset.
# Contacts
**Usama Zidan**: [Linkedin](https://huggingface.co/elgeish/gpt2-medium-arabic-poetry) | [Github](https://github.com/usama13o) | <[email protected]> | <[email protected]>
|
juancopi81/distilbert-finetuned-imdb | juancopi81 | 2022-05-31T16:47:14Z | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-27T14:23:07Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: juancopi81/distilbert-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juancopi81/distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8630
- Validation Loss: 2.5977
- Epoch: 0
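Since this is a masked language model, a quick sanity check is the fill-mask pipeline. A minimal sketch, assuming TensorFlow is installed (the checkpoint was trained with Keras); the sentence is illustrative:
```python
from transformers import pipeline

# DistilBERT uses the [MASK] token for masked-word prediction.
unmasker = pipeline("fill-mask", model="juancopi81/distilbert-finetuned-imdb")
print(unmasker("This movie was absolutely [MASK]."))  # illustrative movie-review-style input
```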
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8630 | 2.5977 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
joaogante/test_img | joaogante | 2022-05-31T15:44:12Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"vit",
"image-feature-extraction",
"vision",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2022-05-31T15:40:15Z | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
inference: false
---
# Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zeroed out by the Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
Here is how to use this model in JAX/Flax:
```python
from transformers import ViTFeatureExtractor, FlaxViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = FlaxViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
inputs = feature_extractor(images=image, return_tensors="np")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
malra/segformer-b0-finetuned-segments-sidewalk-4 | malra | 2022-05-31T15:42:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-05-31T15:22:56Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5207
- Mean Iou: 0.1023
- Mean Accuracy: 0.1567
- Overall Accuracy: 0.6612
- Per Category Iou: [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0]
- Per Category Accuracy: [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0]
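A minimal inference sketch, assuming the standard SegFormer classes apply to this checkpoint; the COCO image URL is only a placeholder for a street-scene photo:
```python
import requests
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

checkpoint = "malra/segformer-b0-finetuned-segments-sidewalk-4"
feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits              # (batch, num_labels, height / 4, width / 4)
segmentation = logits.argmax(dim=1)  # per-pixel class indices
```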
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.8255 | 1.0 | 25 | 3.0220 | 0.0892 | 0.1429 | 0.6352 | [0.0, 0.3631053229188519, 0.6874502125236047, 0.0, 0.012635239862746197, 0.001133215250040838, 0.0, 0.00463024415429387, 2.6557099661207286e-05, 0.0, 0.3968535016422742, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4820466790242289, 0.0, 0.00693999220077067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6134928158666486, 0.05160593984758798, 0.5016270369795023, 0.0, 0.0, 0.00023524914354608678, 0.0] | [nan, 0.6625398055826, 0.851744092156527, 0.0, 0.01307675614921835, 0.001170877257777663, nan, 0.004771009467501389, 2.6941417811356193e-05, 0.0, 0.9316713675735513, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7310221003907382, 0.0, 0.0070371168820434, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.948375993368795, 0.056265031783493576, 0.5061367774453964, 0.0, 0.0, 0.00023723449281691698, 0.0] |
| 2.5443 | 2.0 | 50 | 2.5207 | 0.1023 | 0.1567 | 0.6612 | [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0] | [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
arrandi/distilbert-base-uncased-finetuned-emotion | arrandi | 2022-05-31T15:20:26Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T15:03:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9341704717427723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.934
- F1: 0.9342
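A minimal usage sketch; note that the predicted labels may come back as generic `LABEL_0` … `LABEL_5` unless the emotion label mapping was saved with the checkpoint, and the input sentence is illustrative:
```python
from transformers import pipeline

# Emotion classification with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="arrandi/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))  # illustrative input
```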
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2606 | 1.0 | 250 | 0.1780 | 0.9285 | 0.9284 |
| 0.1486 | 2.0 | 500 | 0.1652 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
wuxiaofei/finetuning-sentiment-model-3000-samples | wuxiaofei | 2022-05-31T15:12:52Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T11:19:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8636363636363636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6787
- Accuracy: 0.86
- F1: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-xlnet-base-cased | jkhan447 | 2022-05-31T14:17:58Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-31T08:50:25Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-xlnet-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-xlnet-base-cased
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1470
- Accuracy: 0.7117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
OneFly/xlm-roberta-base-finetuned-panx-de | OneFly | 2022-05-31T14:01:40Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T08:27:40Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sarakolding/daT5-base | sarakolding | 2022-05-31T13:18:37Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"da",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-19T08:03:45Z | ---
language:
- da
---
This repository contains a language-specific mT5-base whose vocabulary has been condensed to include the tokens used in Danish and English. |
huggingtweets/botphilosophyq-philosophical_9-philosophy_life | huggingtweets | 2022-05-31T12:56:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-31T12:54:56Z | ---
language: en
thumbnail: http://www.huggingtweets.com/botphilosophyq-philosophical_9-philosophy_life/1654001783159/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503378148544720896/cqXtOCzo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1454403230218080259/l2xRKFYN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1465751420146225152/REt6VnPb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Philosophy Quotes & Philosophy Quotes & philosophy for life</div>
<div style="text-align: center; font-size: 14px;">@botphilosophyq-philosophical_9-philosophy_life</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Philosophy Quotes & Philosophy Quotes & philosophy for life.
| Data | Philosophy Quotes | Philosophy Quotes | philosophy for life |
| --- | --- | --- | --- |
| Tweets downloaded | 1162 | 489 | 1175 |
| Retweets | 377 | 59 | 2 |
| Short tweets | 30 | 0 | 0 |
| Tweets kept | 755 | 430 | 1173 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cvz516e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @botphilosophyq-philosophical_9-philosophy_life's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13d841md) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13d841md/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/botphilosophyq-philosophical_9-philosophy_life')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
batya66/bert-finetuned-ner | batya66 | 2022-05-31T12:02:04Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T11:45:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9287951211471898
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9384628195520027
- name: Accuracy
type: accuracy
value: 0.985915700241361
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9288
- Recall: 0.9483
- F1: 0.9385
- Accuracy: 0.9859
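A minimal inference sketch using the token-classification pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="batya66/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))  # illustrative input
```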
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0876 | 1.0 | 1756 | 0.0657 | 0.9093 | 0.9349 | 0.9219 | 0.9826 |
| 0.0412 | 2.0 | 3512 | 0.0555 | 0.9357 | 0.9500 | 0.9428 | 0.9867 |
| 0.0205 | 3.0 | 5268 | 0.0622 | 0.9288 | 0.9483 | 0.9385 | 0.9859 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
FritzOS/train_NER_M_V1 | FritzOS | 2022-05-31T11:51:44Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-31T11:51:30Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: train_NER_M_V1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# train_NER_M_V1
This model is a fine-tuned version of [FritzOS/train_basic_M_V3](https://huggingface.co/FritzOS/train_basic_M_V3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0025
- Validation Loss: 0.0024
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 204258, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0025 | 0.0024 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/magiceden | huggingtweets | 2022-05-31T11:45:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-31T11:42:06Z | ---
language: en
thumbnail: http://www.huggingtweets.com/magiceden/1653997534626/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529814669493682176/BqZU57Cf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Eden 🪄</div>
<div style="text-align: center; font-size: 14px;">@magiceden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Eden 🪄.
| Data | Magic Eden 🪄 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 141 |
| Short tweets | 908 |
| Tweets kept | 2200 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9t2x97k9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magiceden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32j65yat) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32j65yat/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/magiceden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kamalkraj/bert-base-uncased-squad-v2.0-finetuned | kamalkraj | 2022-05-31T11:44:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-31T10:48:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-uncased-squad-v2.0-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-squad-v2.0-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset.
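A minimal usage sketch with the question-answering pipeline; since SQuAD v2 contains unanswerable questions, `handle_impossible_answer` is enabled, and the question/context pair is illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kamalkraj/bert-base-uncased-squad-v2.0-finetuned")
result = qa(
    question="What dataset was the model fine-tuned on?",  # illustrative question
    context="This model was fine-tuned on the SQuAD v2.0 dataset.",
    handle_impossible_answer=True,  # allow an empty answer when the question is unanswerable
)
print(result)
```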
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
huggingtweets/binance-dydx-magiceden | huggingtweets | 2022-05-31T11:34:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-31T11:31:06Z | ---
language: en
thumbnail: http://www.huggingtweets.com/binance-dydx-magiceden/1653996837144/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529814669493682176/BqZU57Cf_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490589455786573824/M5_HK15F_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364590285255290882/hjnIm9bV_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Eden 🪄 & Binance & dYdX</div>
<div style="text-align: center; font-size: 14px;">@binance-dydx-magiceden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Eden 🪄 & Binance & dYdX.
| Data | Magic Eden 🪄 | Binance | dYdX |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3250 | 1679 |
| Retweets | 141 | 194 | 463 |
| Short tweets | 908 | 290 | 40 |
| Tweets kept | 2200 | 2766 | 1176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28typldl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @binance-dydx-magiceden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/196gmkng) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/196gmkng/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/binance-dydx-magiceden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chrisvinsen/wav2vec2-15 | chrisvinsen | 2022-05-31T11:13:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-31T08:01:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-15
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Wer: 0.8585
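A minimal transcription sketch (keep in mind the high WER reported above); the audio filename is a placeholder for a 16 kHz mono recording:
```python
from transformers import pipeline

# Speech-to-text with the fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="chrisvinsen/wav2vec2-15")
print(asr("sample.wav"))  # placeholder path; expects 16 kHz mono audio
```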
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6808 | 1.37 | 200 | 3.7154 | 1.0 |
| 3.0784 | 2.74 | 400 | 3.1542 | 1.0 |
| 2.8919 | 4.11 | 600 | 2.9918 | 1.0 |
| 2.8317 | 5.48 | 800 | 2.8971 | 1.0 |
| 2.7958 | 6.85 | 1000 | 2.8409 | 1.0 |
| 2.7699 | 8.22 | 1200 | 2.8278 | 1.0 |
| 2.6365 | 9.59 | 1400 | 2.4657 | 1.0 |
| 2.1096 | 10.96 | 1600 | 1.8358 | 0.9988 |
| 1.6485 | 12.33 | 1800 | 1.4525 | 0.9847 |
| 1.3967 | 13.7 | 2000 | 1.2467 | 0.9532 |
| 1.2492 | 15.07 | 2200 | 1.1261 | 0.9376 |
| 1.1543 | 16.44 | 2400 | 1.0654 | 0.9194 |
| 1.0863 | 17.81 | 2600 | 1.0136 | 0.9161 |
| 1.0275 | 19.18 | 2800 | 0.9601 | 0.8827 |
| 0.9854 | 20.55 | 3000 | 0.9435 | 0.8878 |
| 0.9528 | 21.92 | 3200 | 0.9170 | 0.8807 |
| 0.926 | 23.29 | 3400 | 0.9121 | 0.8783 |
| 0.9025 | 24.66 | 3600 | 0.8884 | 0.8646 |
| 0.8909 | 26.03 | 3800 | 0.8836 | 0.8690 |
| 0.8717 | 27.4 | 4000 | 0.8810 | 0.8646 |
| 0.8661 | 28.77 | 4200 | 0.8623 | 0.8585 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science-v3-e5 | theojolliffe | 2022-05-31T10:55:17Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-31T10:00:56Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e5
This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8090
- Rouge1: 54.0053
- Rouge2: 35.5018
- Rougel: 37.3204
- Rougelsum: 51.5456
- Gen Len: 142.0
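A minimal summarization sketch; the length limits mirror the ~142-token generations reported above but are otherwise illustrative:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-science-v3-e5")
article = "..."  # replace with a long input document
print(summarizer(article, max_length=142, min_length=56, do_sample=False))
```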
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9935 | 51.9669 | 31.8139 | 34.4748 | 49.5311 | 141.7407 |
| 1.1747 | 2.0 | 796 | 0.8565 | 51.7344 | 31.7341 | 34.3917 | 49.2488 | 141.7222 |
| 0.7125 | 3.0 | 1194 | 0.8252 | 52.829 | 33.2332 | 35.8865 | 50.1883 | 141.5556 |
| 0.4991 | 4.0 | 1592 | 0.8222 | 53.582 | 33.4906 | 35.7232 | 50.589 | 142.0 |
| 0.4991 | 5.0 | 1990 | 0.8090 | 54.0053 | 35.5018 | 37.3204 | 51.5456 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|