| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 18:27:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 18:23:41) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
nlptown/flaubert_small_cased_sentiment | nlptown | 2022-05-17T07:43:58Z | 250 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"flaubert",
"text-classification",
"fr",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-17T06:26:02Z | ---
language:
- fr
datasets:
- amazon_reviews_multi
license: mit
---
# flaubert_small_cased_sentiment
This is a `flaubert_small_cased` model finetuned for sentiment analysis on product reviews in French. It predicts the sentiment of the review, from `very_negative` (1 star) to `very_positive` (5 stars).
This model is intended for direct use as a sentiment analysis model for French product reviews, or for further finetuning on related sentiment analysis tasks.
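A minimal usage sketch with the `transformers` pipeline (the French example review is illustrative):
```python
from transformers import pipeline

# Load the sentiment classifier directly from the Hub
classifier = pipeline("text-classification", model="nlptown/flaubert_small_cased_sentiment")

# Returns a label from very_negative (1 star) to very_positive (5 stars)
print(classifier("Ce produit est excellent, je le recommande vivement !"))
```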
## Training data
The training data consists of the French portion of `amazon_reviews_multi`, supplemented with another 140,000 similar reviews.
## Accuracy
The finetuned model was evaluated on the French test set of `amazon_reviews_multi`.
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| French | 61.56% | 95.66% |
## Contact
[NLP Town](https://www.nlp.town) offers a suite of sentiment models for a wide range of languages, including an improved multilingual model through [RapidAPI](https://rapidapi.com/nlp-town-nlp-town-default/api/multilingual-sentiment-analysis2/).
Feel free to contact us for questions, feedback and/or requests for similar models. |
jeremyccollinsmpi/autotrain-inference_probability_2-840226804 | jeremyccollinsmpi | 2022-05-17T07:41:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"en",
"dataset:jeremyccollinsmpi/autotrain-data-inference_probability_2",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-09T06:54:39Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jeremyccollinsmpi/autotrain-data-inference_probability_2
co2_eq_emissions: 0.02920886926438328
---
# Description
The input structure is:
`summarize: [text]. hypothesis: [hypothesis]`, and the output is 0 (the hypothesis is not supported) or 1 (the hypothesis is supported).
This tests whether a hypothesis holds given the preceding text. Currently the model is trained on banking chatbot intent data, for example:
`summarize: How old do my kids need to be to use your service?. hypothesis: asking about an age limit`
Output: 1
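The same template can be exercised locally with the `transformers` library; a minimal sketch (the expected generated text follows the 0/1 convention above):
```python
from transformers import pipeline

# Load the fine-tuned T5 model from the Hub
pipe = pipeline("text2text-generation", model="jeremyccollinsmpi/autotrain-inference_probability_2-840226804")

prompt = "summarize: How old do my kids need to be to use your service?. hypothesis: asking about an age limit"
print(pipe(prompt))  # expected generated text: "1"
```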
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 840226804
- CO2 Emissions (in grams): 0.02920886926438328
## Validation Metrics
- Loss: 0.09617297351360321
- Rouge1: 91.2874
- Rouge2: 0.0
- RougeL: 91.2874
- RougeLsum: 91.4174
- Gen Len: 2.4915
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jeremyccollinsmpi/autotrain-inference_probability_2-840226804
``` |
syp1229/xlm-roberta-base-finetuned-koidiom-epoch5 | syp1229 | 2022-05-17T07:18:03Z | 3 | 0 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-17T07:05:07Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/xlm-roberta-base-finetuned-koidiom-epoch5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syp1229/xlm-roberta-base-finetuned-koidiom-epoch5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0826
- Validation Loss: 1.9873
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
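A minimal fill-mask sketch (the Korean example sentence is illustrative; XLM-RoBERTa uses `<mask>` as its mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="syp1229/xlm-roberta-base-finetuned-koidiom-epoch5")

# Illustrative sentence: "Seoul is the <mask> of Korea."
print(fill_mask("서울은 한국의 <mask>이다."))
```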
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7703 | 2.0462 | 0 |
| 2.2504 | 2.0178 | 1 |
| 2.1653 | 1.9992 | 2 |
| 2.1310 | 1.9829 | 3 |
| 2.0826 | 1.9873 | 4 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/cryptanime | huggingtweets | 2022-05-17T06:54:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-17T06:52:15Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cryptanime/1652770465803/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1525172827644743680/8mskmqwq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CryptanimeNFT | Minting Now</div>
<div style="text-align: center; font-size: 14px;">@cryptanime</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CryptanimeNFT | Minting Now.
| Data | CryptanimeNFT \| Minting Now |
| --- | --- |
| Tweets downloaded | 491 |
| Retweets | 96 |
| Short tweets | 15 |
| Tweets kept | 380 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2066dfxu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cryptanime's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2byq9c2t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2byq9c2t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cryptanime')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
0xrushi/Space-Invaders-PPO | 0xrushi | 2022-05-17T04:50:25Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"ALE/SpaceInvaders-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-17T04:49:31Z | ---
library_name: stable-baselines3
tags:
- ALE/SpaceInvaders-v5
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 146.00 +/- 78.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ALE/SpaceInvaders-v5
type: ALE/SpaceInvaders-v5
---
# **PPO** Agent playing **ALE/SpaceInvaders-v5**
This is a trained model of a **PPO** agent playing **ALE/SpaceInvaders-v5** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
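A minimal loading-and-evaluation sketch, assuming the repository contains a standard SB3 `.zip` checkpoint (the filename below is an assumption):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename assumed, not confirmed by the card)
checkpoint = load_from_hub("0xrushi/Space-Invaders-PPO", "ppo-SpaceInvaders-v5.zip")
model = PPO.load(checkpoint)

env = gym.make("ALE/SpaceInvaders-v5")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```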
|
nthanhha26/testmodel1 | nthanhha26 | 2022-05-17T04:15:07Z | 0 | 0 | null | [
"region:us"
] | null | 2022-05-17T03:47:25Z | Hi,
Nothing here; just an example model for testing:
https://docs.google.com/document/d/1Tp39nmCQRlZAOZYcOoXV8NCcQDf31GqarPYT3mCv9CM/edit?usp=sharing
|
gitierrez/rl-LunarLander-v2 | gitierrez | 2022-05-17T04:03:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-17T04:02:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 217.04 +/- 33.19
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
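A minimal loading-and-evaluation sketch, assuming a standard SB3 `.zip` checkpoint in the repository (filename assumed):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("gitierrez/rl-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```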
|
Hijazzi/rare-puppers | Hijazzi | 2022-05-17T02:56:22Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-05-17T02:56:08Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
mostafapasha/ribs-segmentation-model | mostafapasha | 2022-05-17T01:19:44Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"xray-ribs-segmentation",
"arxiv:1911.07067",
"region:us"
] | null | 2022-05-14T06:11:05Z | ---
tags:
- xray-ribs-segmentation
library_name: keras
---
## Model description
The architecture follows the original idea of [ResUNET++](https://arxiv.org/pdf/1911.07067.pdf).
Full credits go to [SynthesisHealthIntelligenceInc](https://synthesishealthinc.com/).
Rib segmentation is a crucial step for removing the ribs from chest X-ray images before diagnosis.
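A minimal loading sketch, assuming the weights were pushed with the `huggingface_hub` Keras integration:
```python
from huggingface_hub import from_pretrained_keras

# Download and rebuild the Keras segmentation model from the Hub
model = from_pretrained_keras("mostafapasha/ribs-segmentation-model")
model.summary()
```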
## Dataset
[vindr-ribs](https://vindr.ai/datasets/ribcxr) |
ColabPro/PPO-LunarLander-v2-v8 | ColabPro | 2022-05-16T23:32:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T23:31:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 230.68 +/- 19.19
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
ColabPro/PPO-LunarLander-v2-v7 | ColabPro | 2022-05-16T23:32:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T23:31:26Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 70.32 +/- 76.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
ColabPro/PPO-LunarLander-v2-v5 | ColabPro | 2022-05-16T23:02:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T23:01:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 151.84 +/- 64.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Gergoe/mt5-small-finetuned-amazon-en-es | Gergoe | 2022-05-16T22:42:55Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-05-01T19:48:09Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2891
- Rouge1: 15.35
- Rouge2: 6.4925
- Rougel: 14.8921
- Rougelsum: 14.6312
## Model description
More information needed
## Intended uses & limitations
More information needed
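A minimal summarization sketch (the input text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Gergoe/mt5-small-finetuned-amazon-en-es")

review = "I loved this book: the characters are engaging and the plot never drags."
print(summarizer(review, max_length=30))
```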
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0622 | 1.0 | 1276 | 3.5617 | 13.2417 | 4.8928 | 12.8258 | 12.8078 |
| 4.0768 | 2.0 | 2552 | 3.4329 | 14.5681 | 6.4922 | 14.0621 | 13.9709 |
| 3.7736 | 3.0 | 3828 | 3.3393 | 15.1942 | 6.5262 | 14.7138 | 14.6049 |
| 3.5951 | 4.0 | 5104 | 3.3122 | 14.8813 | 6.2962 | 14.507 | 14.3477 |
| 3.477 | 5.0 | 6380 | 3.2991 | 15.0992 | 6.3888 | 14.8397 | 14.5606 |
| 3.4084 | 6.0 | 7656 | 3.3035 | 15.1897 | 6.2292 | 14.6686 | 14.4488 |
| 3.3661 | 7.0 | 8932 | 3.2959 | 15.3489 | 6.5702 | 14.9211 | 14.701 |
| 3.3457 | 8.0 | 10208 | 3.2891 | 15.35 | 6.4925 | 14.8921 | 14.6312 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.7.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
bartpotrykus/lunar-lander-v2 | bartpotrykus | 2022-05-16T22:40:08Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-14T12:53:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 289.21 +/- 17.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
0xrushi/LunarLander-v2 | 0xrushi | 2022-05-16T22:07:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T22:07:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 220.36 +/- 65.13
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
evolvingstuff/bert-base-cased-wikitext2 | evolvingstuff | 2022-05-16T22:05:33Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-16T21:26:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0916 | 1.0 | 2346 | 7.0492 |
| 6.9039 | 2.0 | 4692 | 6.8751 |
| 6.8845 | 3.0 | 7038 | 6.8929 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ColabPro/PPO-LunarLander-v2-v1 | ColabPro | 2022-05-16T22:03:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T22:02:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -4.65 +/- 21.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
espnet/english_male_ryanspeech_conformer_fastspeech2 | espnet | 2022-05-16T22:01:12Z | 4 | 1 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ryanspeech",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2022-05-10T18:20:09Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ryanspeech
license: cc-by-nc-4.0
widget:
- text: "This seems a very pleasant place, and I think I shall enjoy myself very much."
---
## RyanSpeech model (based on ESPnet2)
### `espnet/english_male_ryanspeech_conformer_fastspeech2`
This model was trained by [Rohola Zandie](https://scholar.google.com/citations?user=xv0jIe0AAAAJ&hl=en) using the ryanspeech recipe in [espnet](https://github.com/espnet/espnet/). For the best results, you need to download the vocoder separately from [here](https://drive.google.com/file/d/10GYvB_mIKzXzSjD67tSnBhknZRoBjsNb/view?usp=sharing) and then use the following code:
```python
from espnet2.bin.tts_inference import Text2Speech
from scipy.io.wavfile import write
model = Text2Speech.from_pretrained(
model_file="espnet/english_male_ryanspeech_conformer_fastspeech2",
vocoder_file="path_to_vocoder/train_nodev_parallel_wavegan.v1.long/checkpoint-1000000steps.pkl"
)
output = model("This is a simple test.")
write("x.wav", 22050, output['wav'].numpy())
```
## Download the dataset
You can download the RyanSpeech dataset from [here](https://www.kaggle.com/datasets/roholazandie/ryanspeech).
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 2400000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/collect_feats/energy.scp
- energy
- npy
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/collect_feats/energy.scp
- energy
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- AH0
- T
- N
- S
- R
- D
- L
- K
- IH1
- M
- EH1
- Z
- DH
- UW1
- AE1
- IH0
- AY1
- AH1
- W
- .
- P
- F
- IY1
- V
- ER0
- AA1
- B
- AO1
- HH
- EY1
- IY0
- ','
- Y
- NG
- OW1
- G
- AW1
- TH
- SH
- UH1
- '?'
- ER1
- JH
- CH
- OW0
- OW2
- EH2
- IH2
- EY2
- AA2
- AE2
- AY2
- ''''
- OY1
- UW0
- '!'
- AO2
- EH0
- ZH
- AH2
- AE0
- UW2
- AA0
- AY0
- IY2
- AW2
- AO0
- EY0
- ER2
- UH2
- '...'
- AW0
- UH0
- OY2
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en_no_space
feats_extract: fbank
feats_extract_conf:
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
hop_length: 256
n_fft: 1024
win_length: null
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz
tts: fastspeech2
tts_conf:
adim: 384
aheads: 2
elayers: 4
eunits: 1536
dlayers: 4
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
encoder_type: conformer
decoder_type: conformer
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz
required:
- output_dir
- token_list
distributed: false
```
</details>
### Citing RyanSpeech
```BibTex
@inproceedings{Zandie2021RyanSpeechAC,
title={RyanSpeech: A Corpus for Conversational Text-to-Speech Synthesis},
author={Rohola Zandie and Mohammad H. Mahoor and Julia Madsen and Eshrat S. Emamian},
booktitle={Interspeech},
year={2021}
}
``` |
espnet/english_male_ryanspeech_fastspeech2 | espnet | 2022-05-16T22:00:14Z | 5 | 4 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ryanspeech",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2022-05-10T18:13:25Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ryanspeech
license: cc-by-nc-4.0
widget:
- text: "This seems a very pleasant place, and I think I shall enjoy myself very much."
---
## RyanSpeech model (based on ESPnet2)
### `espnet/english_male_ryanspeech_fastspeech2`
This model was trained by [Rohola Zandie](https://scholar.google.com/citations?user=xv0jIe0AAAAJ&hl=en) using the ryanspeech recipe in [espnet](https://github.com/espnet/espnet/). For the best results, you need to download the vocoder separately from [here](https://drive.google.com/file/d/10GYvB_mIKzXzSjD67tSnBhknZRoBjsNb/view?usp=sharing) and then use the following code:
```python
from espnet2.bin.tts_inference import Text2Speech
from scipy.io.wavfile import write
model = Text2Speech.from_pretrained(
model_file="espnet/english_male_ryanspeech_fastspeech2",
vocoder_file="path_to_vocoder/train_nodev_parallel_wavegan.v1.long/checkpoint-1000000steps.pkl"
)
output = model("This is a simple test.")
write("x.wav", 22050, output['wav'].numpy())
```
## Download the dataset
You can download the RyanSpeech dataset from [here](https://www.kaggle.com/datasets/roholazandie/ryanspeech).
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_fastspeech.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 6
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 800000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- AH0
- T
- N
- S
- R
- D
- L
- K
- IH1
- M
- EH1
- Z
- DH
- UW1
- AE1
- IH0
- AY1
- AH1
- W
- .
- P
- F
- IY1
- V
- ER0
- AA1
- B
- AO1
- HH
- EY1
- IY0
- ','
- Y
- NG
- OW1
- G
- AW1
- TH
- SH
- UH1
- '?'
- ER1
- JH
- CH
- OW0
- OW2
- EH2
- IH2
- EY2
- AA2
- AE2
- AY2
- ''''
- OY1
- UW0
- '!'
- AO2
- EH0
- ZH
- AH2
- AE0
- UW2
- AA0
- AY0
- IY2
- AW2
- AO0
- EY0
- ER2
- UH2
- '...'
- AW0
- UH0
- OY2
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en_no_space
feats_extract: fbank
feats_extract_conf:
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
hop_length: 256
n_fft: 1024
win_length: null
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz
tts: fastspeech
tts_conf:
adim: 384
aheads: 2
elayers: 6
eunits: 1536
dlayers: 6
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 384
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
distributed: false
```
</details>
### Citing RyanSpeech
```BibTex
@inproceedings{Zandie2021RyanSpeechAC,
title={RyanSpeech: A Corpus for Conversational Text-to-Speech Synthesis},
author={Rohola Zandie and Mohammad H. Mahoor and Julia Madsen and Eshrat S. Emamian},
booktitle={Interspeech},
year={2021}
}
``` |
espnet/english_male_ryanspeech_fastspeech | espnet | 2022-05-16T21:58:59Z | 3 | 1 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ryanspeech",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2022-05-10T17:28:53Z | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ryanspeech
license: cc-by-nc-4.0
widget:
- text: "This seems a very pleasant place, and I think I shall enjoy myself very much."
---
## RyanSpeech model (based on ESPnet2)
### `espnet/english_male_ryanspeech_fastspeech`
This model was trained by [Rohola Zandie](https://scholar.google.com/citations?user=xv0jIe0AAAAJ&hl=en) using the ryanspeech recipe in [espnet](https://github.com/espnet/espnet/). For the best results, you need to download the vocoder separately from [here](https://drive.google.com/file/d/10GYvB_mIKzXzSjD67tSnBhknZRoBjsNb/view?usp=sharing) and then use the following code:
```python
from espnet2.bin.tts_inference import Text2Speech
from scipy.io.wavfile import write
model = Text2Speech.from_pretrained(
model_file="espnet/english_male_ryanspeech_fastspeech",
vocoder_file="path_to_vocoder/train_nodev_parallel_wavegan.v1.long/checkpoint-1000000steps.pkl"
)
output = model("This is a simple test.")
write("x.wav", 22050, output['wav'].numpy())
```
## Download the dataset
You can download the RyanSpeech dataset from [here](https://www.kaggle.com/datasets/roholazandie/ryanspeech).
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_fastspeech.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 6
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 800000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best/stats/train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best/stats/train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best/stats/valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best/stats/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best//tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best//dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- AH0
- T
- N
- S
- R
- D
- L
- K
- IH1
- M
- EH1
- Z
- DH
- UW1
- AE1
- IH0
- AY1
- AH1
- W
- .
- P
- F
- IY1
- V
- ER0
- AA1
- B
- AO1
- HH
- EY1
- IY0
- ','
- Y
- NG
- OW1
- G
- AW1
- TH
- SH
- UH1
- '?'
- ER1
- JH
- CH
- OW0
- OW2
- EH2
- IH2
- EY2
- AA2
- AE2
- AY2
- ''''
- OY1
- UW0
- '!'
- AO2
- EH0
- ZH
- AH2
- AE0
- UW2
- AA0
- AY0
- IY2
- AW2
- AO0
- EY0
- ER2
- UH2
- '...'
- AW0
- UH0
- OY2
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en_no_space
feats_extract: fbank
feats_extract_conf:
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
hop_length: 256
n_fft: 1024
win_length: null
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_g2p_en_no_space/decode_use_teacher_forcingtrue_train.loss.best/stats/train/feats_stats.npz
tts: fastspeech
tts_conf:
adim: 384
aheads: 2
elayers: 6
eunits: 1536
dlayers: 6
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 384
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
distributed: false
```
</details>
### Citing RyanSpeech
```BibTex
@inproceedings{Zandie2021RyanSpeechAC,
title={RyanSpeech: A Corpus for Conversational Text-to-Speech Synthesis},
author={Rohola Zandie and Mohammad H. Mahoor and Julia Madsen and Eshrat S. Emamian},
booktitle={Interspeech},
year={2021}
}
``` |
ATH0/ppo-LunarLander-v2 | ATH0 | 2022-05-16T21:43:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T21:43:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 280.92 +/- 14.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
microsoft/swin-large-patch4-window7-224-in22k | microsoft | 2022-05-16T19:59:30Z | 450 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-21k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window7-224-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window7-224-in22k")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Ukhushn/DistilHomeDepot-finetuned | Ukhushn | 2022-05-16T19:16:36Z | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-09T06:37:59Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Ukhushn/DistilHomeDepot-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ukhushn/DistilHomeDepot-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6502
- Validation Loss: 2.2067
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1437, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6502 | 2.2067 | 0 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
KhariotnovKK/Car_racing_v0 | KhariotnovKK | 2022-05-16T19:02:47Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T18:45:52Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 58.17 +/- 51.28
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
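A minimal loading-and-evaluation sketch, assuming a standard SB3 `.zip` checkpoint (filename assumed):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("KhariotnovKK/Car_racing_v0", "ppo-CarRacing-v0.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("CarRacing-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```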
|
maazmikail/finetuning-sentiment-model-urdu-roberta | maazmikail | 2022-05-16T19:01:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-16T12:46:28Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-urdu-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-urdu-roberta
This model is a fine-tuned version of [urduhack/roberta-urdu-small](https://huggingface.co/urduhack/roberta-urdu-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Rocketknight1/temp-colab-upload-test2 | Rocketknight1 | 2022-05-16T18:59:35Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-23T17:02:59Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/temp-colab-upload-test2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp-colab-upload-test2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6931
- Validation Loss: 0.6931
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6931 | 0.6931 | 0 |
| 0.6931 | 0.6931 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
eglesaks/xlm-roberta-base-finetuned-est | eglesaks | 2022-05-16T18:49:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-16T18:30:25Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-est
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-est
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6781
## Model description
More information needed
## Intended uses & limitations
More information needed
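A minimal question-answering sketch (the Estonian example is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="eglesaks/xlm-roberta-base-finetuned-est")

# "What is the capital of Estonia?" / "Tallinn is the capital and largest city of Estonia."
print(qa(question="Mis on Eesti pealinn?", context="Tallinn on Eesti pealinn ja suurim linn."))
```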
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 4.2576 |
| No log | 2.0 | 104 | 3.8075 |
| No log | 3.0 | 156 | 3.6781 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
microsoft/swin-large-patch4-window12-384-in22k | microsoft | 2022-05-16T18:40:51Z | 1,113 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-21k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/swin-base-patch4-window12-384 | microsoft | 2022-05-16T18:32:57Z | 28,937 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (base-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and it has linear computational complexity with respect to input image size because self-attention is computed only within each local window (shown in red). It can therefore serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity with respect to input image size because self-attention is computed globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-base-patch4-window12-384")
model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window12-384")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/swin-small-patch4-window7-224 | microsoft | 2022-05-16T18:11:23Z | 5,828 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (small-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and it has linear computational complexity with respect to input image size because self-attention is computed only within each local window (shown in red). It can therefore serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity with respect to input image size because self-attention is computed globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-small-patch4-window7-224")
model = SwinForImageClassification.from_pretrained("microsoft/swin-small-patch4-window7-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
nazariinyzhnyk/PPO-lunar | nazariinyzhnyk | 2022-05-16T17:40:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T17:21:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 267.24 +/- 13.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading-and-evaluation sketch that follows the pattern of other stable-baselines3 model cards; the zip filename inside the repo is an assumption, so check the repository's file list first.
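```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed to follow the usual <repo_name>.zip convention
checkpoint = load_from_hub(repo_id="nazariinyzhnyk/PPO-lunar", filename="PPO-lunar.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over 10 deterministic episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```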
|
vukpetar/ppo-CarRacing-v0-v3 | vukpetar | 2022-05-16T16:50:52Z | 19 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T16:49:29Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 862.75 +/- 31.08
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The sketch below shows one way to load and evaluate this checkpoint; the zip filename is an assumption, and if the agent was trained with observation wrappers (e.g., frame stacking), the same wrappers must be applied before evaluation.
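```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed; CarRacing-v0 also needs the Box2D extras (pip install gym[box2d])
checkpoint = load_from_hub(repo_id="vukpetar/ppo-CarRacing-v0-v3", filename="ppo-CarRacing-v0-v3.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("CarRacing-v0")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=5, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```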
|
kingabzpro/Full-Force-MountainCar-v0 | kingabzpro | 2022-05-16T16:40:46Z | 1 | 1 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T16:21:21Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -200.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="kingabzpro/Full-Force-MountainCar-v0", filename="Full-Force-MountainCar-v0.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('MountainCar-v0')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
action, _state = model.predict(obs)
obs, reward, done, info = eval_env.step(action)
eval_env.render()
if done:
obs = eval_env.reset()
eval_env.close()
```
|
ThoDum/DQN-LunarLander-v2 | ThoDum | 2022-05-16T16:27:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T16:26:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -123.02 +/- 62.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal sketch for loading and evaluating the agent; the zip filename inside the repo is assumed rather than confirmed.
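```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed to match the repo name
checkpoint = load_from_hub(repo_id="ThoDum/DQN-LunarLander-v2", filename="DQN-LunarLander-v2.zip")
model = DQN.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```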
|
nouman10/robertabase-finetuned-claim-ltp-full-prompt_ | nouman10 | 2022-05-16T16:23:35Z | 3 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-16T16:09:03Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: nouman10/robertabase-finetuned-claim-ltp-full-prompt_
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nouman10/robertabase-finetuned-claim-ltp-full-prompt_
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0334
- Validation Loss: 0.0237
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -427, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1997 | 0.0443 | 0 |
| 0.0334 | 0.0237 | 1 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Rietta/CycleGAN_Sims | Rietta | 2022-05-16T16:13:21Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2022-05-14T16:36:02Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
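As a starting point, the checkpoint can likely be loaded through the Hub's Keras integration; the sketch below assumes the repo was pushed with `push_to_hub_keras` and that no custom objects are required.

```python
from huggingface_hub import from_pretrained_keras

# Loading sketch — assumes the repo was saved via push_to_hub_keras
model = from_pretrained_keras("Rietta/CycleGAN_Sims")
model.summary()
```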
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
huggingartists/metallica | huggingartists | 2022-05-16T16:10:22Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/metallica",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/metallica
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b04166fa115f4e8aae2c30f301ae52ba.480x480x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Metallica</div>
<a href="https://genius.com/artists/metallica">
<div style="text-align: center; font-size: 14px;">@metallica</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Metallica.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/metallica).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/metallica")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/30glu695/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Metallica's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2m1o5q6p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2m1o5q6p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/metallica')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/metallica")
model = AutoModelWithLMHead.from_pretrained("huggingartists/metallica")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
kingabzpro/Moonman-Lunar-Landing-v2 | kingabzpro | 2022-05-16T16:07:26Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-11T09:44:34Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.93 +/- 24.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="kingabzpro/Moonman-Lunar-Landing-v2", filename="Moonman-Lunar-Landing-v2.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
action, _state = model.predict(obs)
obs, reward, done, info = eval_env.step(action)
eval_env.render()
if done:
obs = eval_env.reset()
eval_env.close()
```
|
kushaljoseph/bert-to-distilbert-NER | kushaljoseph | 2022-05-16T15:38:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-14T13:24:58Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-to-distilbert-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-to-distilbert-NER
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.9063
- eval_precision: 0.0120
- eval_recall: 0.0069
- eval_f1: 0.0088
- eval_accuracy: 0.7600
- eval_runtime: 8.6309
- eval_samples_per_second: 376.671
- eval_steps_per_second: 3.012
- epoch: 1.0
- step: 110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00023888106906613202
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huawei-noah/AutoTinyBERT-KD-S4 | huawei-noah | 2022-05-16T15:14:43Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T15:09:51Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameter settings (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
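A minimal loading sketch follows; it assumes the checkpoint is compatible with the standard `transformers` Auto classes, which may not hold for custom AutoTinyBERT configs — consult the project repository if loading fails.

```python
from transformers import AutoModel, AutoTokenizer

# Assumption: the repo ships a config and vocab usable by the Auto classes
tokenizer = AutoTokenizer.from_pretrained("huawei-noah/AutoTinyBERT-KD-S4")
model = AutoModel.from_pretrained("huawei-noah/AutoTinyBERT-KD-S4")

inputs = tokenizer("AutoTinyBERT trades a little accuracy for a lot of speed.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```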
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
huawei-noah/AutoTinyBERT-KD-S1 | huawei-noah | 2022-05-16T15:09:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T14:58:25Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameter settings (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
bartelds/wav2vec2-large-ft-cgn-3hrs | bartelds | 2022-05-16T14:59:59Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"nl",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-16T14:38:39Z | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Large-ft-CGN-3hrs
An English Wav2Vec2 model fine-tuned on Dutch. It was created by fine-tuning the [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on 3 hours of Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
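A minimal transcription sketch (assuming the repo ships a CTC tokenizer/processor next to the model weights; the audio path is a placeholder):

```python
from transformers import pipeline

# Loading sketch — the repo is assumed to contain a CTC tokenizer/processor
asr = pipeline("automatic-speech-recognition", model="bartelds/wav2vec2-large-ft-cgn-3hrs")
print(asr("path/to/dutch_audio.wav")["text"])
``` |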
huawei-noah/AutoTinyBERT-S2 | huawei-noah | 2022-05-16T14:52:36Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T14:48:46Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameter settings (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
syp1229/bert-base-finetuned-koidiom-epoch5 | syp1229 | 2022-05-16T14:50:54Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-16T14:43:06Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: syp1229/bert-base-finetuned-koidiom-epoch5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syp1229/bert-base-finetuned-koidiom-epoch5
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8275
- Validation Loss: 1.7743
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1236 | 1.8454 | 0 |
| 1.9937 | 1.8425 | 1 |
| 1.9016 | 1.7447 | 2 |
| 1.8405 | 1.7540 | 3 |
| 1.8275 | 1.7743 | 4 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huawei-noah/AutoTinyBERT-S1 | huawei-noah | 2022-05-16T14:47:57Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-05-16T14:39:19Z | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameter settings (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
lirondos/anglicisms-spanish-flair-cs | lirondos | 2022-05-16T14:02:15Z | 60 | 0 | flair | [
"flair",
"pytorch",
"anglicisms",
"loanwords",
"borrowing",
"codeswitching",
"token-classification",
"sequence-tagger-model",
"arxiv:2203.16169",
"es",
"dataset:coalas",
"license:cc-by-4.0",
"region:us"
] | token-classification | 2022-03-29T13:09:33Z | ---
language:
- es
license: cc-by-4.0
tags:
- anglicisms
- loanwords
- borrowing
- codeswitching
- flair
- token-classification
- sequence-tagger-model
- arxiv:2203.16169
datasets:
- coalas
widget:
- text: "Las fake news sobre la celebrity se reprodujeron por los 'mass media' en prime time."
- text: "En la 'red carpet' lució un look muy urban con chunky shoes de inspiración anime."
- text: "Benching, estar en el banquillo de tu crush mientras otro juega de titular."
- text: "Recetas de noviembre para el batch cooking."
- text: "Buscamos data scientist con conocimientos de machine learning y blockchain."
---
# anglicisms-spanish-flair-cs
This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) in Spanish newswire. The model labels words of foreign origin (fundamentally from English) that are used in Spanish, such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*.
The model is a BiLSTM-CRF model fed with [Transformer-based embeddings pretrained on codeswitched data](https://huggingface.co/sagorsarker/codeswitch-spaeng-lid-lince) along subword embeddings (BPE and character embeddings). The model was trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings.
The model considers two labels:
* ``ENG``: For English lexical borrowings (*smartphone*, *online*, *podcast*)
* ``OTHER``: For lexical borrowings from any other language (*boutique*, *anime*, *umami*)
The model uses BIO encoding to account for multitoken borrowings.
**⚠ There is another [mBERT-based model](https://huggingface.co/lirondos/anglicisms-spanish-mbert) for this same task trained using the ``Transformers`` library**. That model, however, produced worse results (F1 = 83.55) than this Flair-based model.
## Metrics (on the test set)
Results obtained on the test set of the [COALAS](https://github.com/lirondos/coalas/) corpus.
| LABEL | Precision | Recall | F1 |
|:-------|-----:|-----:|---------:|
| ALL | 90.14 | 81.79 | 85.76 |
| ENG | 90.16 | 84.34 | 87.16 |
| OTHER | 85.71 | 13.04 | 22.64 |
## Dataset
This model was trained on [COALAS](https://github.com/lirondos/coalas/), a corpus of Spanish newswire annotated with unassimilated lexical borrowings. The corpus contains 370,000 tokens and includes a variety of media written in European Spanish. The test set was designed to be as difficult as possible: it covers sources and dates not seen in the training set, includes a high number of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense (20 borrowings per 1,000 tokens).
|Set | Tokens | ENG | OTHER | Unique |
|:-------|-----:|-----:|---------:|---------:|
|Training |231,126 |1,493 | 28 |380 |
|Development |82,578 |306 |49 |316|
|Test |58,997 |1,239 |46 |987|
|**Total** |372,701 |3,038 |123 |1,683 |
## More info
More information about the dataset, model experimentation and error analysis can be found in the paper: *[Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/)*.
## How to use
```python
from flair.data import Sentence
from flair.models import SequenceTagger
import pathlib
import os
if os.name == 'nt': # Minor patch needed if you are running from Windows
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath
tagger = SequenceTagger.load("lirondos/anglicisms-spanish-flair-cs")
text = "Las fake news sobre la celebrity se reprodujeron por los mass media en prime time."
sentence = Sentence(text)
# predict tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted borrowing spans
print('The following borrowing were found:')
for entity in sentence.get_spans():
print(entity)
```
## Citation
If you use this model, please cite the following reference:
```bibtex
@inproceedings{alvarez-mellado-lignos-2022-detecting,
title = "Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling",
author = "{\'A}lvarez-Mellado, Elena and
Lignos, Constantine",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.268",
pages = "3868--3888",
abstract = "This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.",
}
```
|
Manaranjan/TEST2ppo-LunarLander-v2 | Manaranjan | 2022-05-16T13:24:37Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T12:49:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 196.09 +/- 31.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading and evaluating this agent follows; the zip filename is an assumption — check the repo's file list.
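```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed to match the repo name
checkpoint = load_from_hub(repo_id="Manaranjan/TEST2ppo-LunarLander-v2", filename="TEST2ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```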
|
scasutt/wav2vec2-large-xlsr-53_full_train_full_train | scasutt | 2022-05-16T13:22:05Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-13T11:57:25Z | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_full_train_full_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_full_train_full_train
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8369
- Wer: 0.5052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.533 | 1.35 | 1000 | 0.3547 | 0.3483 |
| 0.4531 | 2.69 | 2000 | 0.8369 | 0.5052 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Yarn007/autotrain-Napkin-872827783 | Yarn007 | 2022-05-16T13:01:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Yarn007/autotrain-data-Napkin",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-16T12:59:13Z | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Yarn007/autotrain-data-Napkin
co2_eq_emissions: 0.020162211418903533
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 872827783
- CO2 Emissions (in grams): 0.020162211418903533
## Validation Metrics
- Loss: 0.25198695063591003
- Accuracy: 0.9325714285714286
- Macro F1: 0.9254931094274171
- Micro F1: 0.9325714285714286
- Weighted F1: 0.9323540959391766
- Macro Precision: 0.9286720054236212
- Micro Precision: 0.9325714285714286
- Weighted Precision: 0.9324375609546055
- Macro Recall: 0.9227549386201338
- Micro Recall: 0.9325714285714286
- Weighted Recall: 0.9325714285714286
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Yarn007/autotrain-Napkin-872827783
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yarn007/autotrain-Napkin-872827783", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yarn007/autotrain-Napkin-872827783", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
subhasisj/zh-kd-XLM-minilmv2-4 | subhasisj | 2022-05-16T12:40:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T19:07:20Z | Multilingual MiniLMv2 fine-tuned using Knowledge Distillation with a XLM Roberta Base Teacher Model on ZH Language |
leumastai/CarRacing-v0-TestModel | leumastai | 2022-05-16T12:02:04Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-16T11:59:10Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -71.85 +/- 1.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The following sketch loads and evaluates the checkpoint; the zip filename is an assumption, and any observation wrappers used during training would need to be reproduced here.
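```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed; reuse the training-time observation wrappers if there were any
checkpoint = load_from_hub(repo_id="leumastai/CarRacing-v0-TestModel", filename="CarRacing-v0-TestModel.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("CarRacing-v0")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=5, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```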
|
ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune | ml6team | 2022-05-16T11:41:05Z | 44 | 12 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"bart",
"summarization",
"nl",
"dataset:ml6team/cnn_dailymail_nl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language:
- nl
tags:
- mbart
- bart
- summarization
datasets:
- ml6team/cnn_dailymail_nl
pipeline_tag: summarization
widget:
- text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.'
- text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.'
---
# mbart-large-cc25-cnn-dailymail-nl
## Model description
Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97)
## Intended uses & limitations
It's meant for summarizing Dutch news articles.
#### How to use
```python
import transformers
undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained(
"ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune"
)
tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
summarization_pipeline = transformers.pipeline(
task="summarization",
model=undisputed_best_model,
tokenizer=tokenizer,
)
summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[
"nl_XX"
]
article = "Kan je dit even samenvatten alsjeblief." # Dutch
summarization_pipeline(
article,
do_sample=True,
top_p=0.75,
top_k=50,
# num_beams=4,
min_length=50,
early_stopping=True,
truncation=True,
)[0]["summary_text"]
```
## Training data
Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl) and another smaller dataset that we can't open source because we scraped it from the internet. For more information check out our blog post [here](https://blog.ml6.eu/). |
arxyzan/data2vec-wav2vec2-base | arxyzan | 2022-05-16T09:00:23Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"arxiv:2202.03555",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-05-05T10:56:22Z | A Wav2Vec2 model trained using Data2Vec based on the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555).<br>
This model is provided here for [this repo](https://github.com/AryanShekarlaban/data2vec-pytorch), but it was NOT trained using that codebase; instead, it was copied from `facebook/data2vec-wav2vec2-base` for convenience and reproducibility.
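A minimal feature-extraction sketch (assuming the repo includes a preprocessor config; the dummy waveform is a placeholder for real 16 kHz audio):

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("arxyzan/data2vec-wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("arxyzan/data2vec-wav2vec2-base")

# Dummy 1-second waveform at 16 kHz — replace with real audio samples
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```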
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
SreyanG-NVIDIA/bert-base-cased-finetuned-squad | SreyanG-NVIDIA | 2022-05-16T08:39:41Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-13T13:39:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0848
## Model description
More information needed
## Intended uses & limitations
More information needed
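For instance, the model can be used through the question-answering pipeline; the question/context pair below is made up for illustration:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="SreyanG-NVIDIA/bert-base-cased-finetuned-squad")
result = qa(
    question="What dataset is the model fine-tuned on?",
    context="This model is a BERT base cased checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```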
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0337 | 1.0 | 5546 | 1.0150 |
| 0.7546 | 2.0 | 11092 | 1.0015 |
| 0.5537 | 3.0 | 16638 | 1.0848 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
withU/kogpt2-emotion-chatbot | withU | 2022-05-16T07:58:01Z | 237 | 4 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-12T05:21:44Z | # KoGPT2-emotion-chatbot
KoGPT2 on Hugging Face Transformers for psychological counseling
- [full project link](https://github.com/jiminAn/Capstone_2022)
## how to use
```
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast
model = GPT2LMHeadModel.from_pretrained("withU/kogpt2-emotion-chatbot")
tokenizer = PreTrainedTokenizerFast.from_pretrained("withU/kogpt2-emotion-chatbot")
input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt")
output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=80, num_return_sequences=4)
for generated_sequence in output_sequences:
generated_sequence = generated_sequence.tolist()
print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
## dataset finetuned on
- [wellness dataset](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006)
- [emotion corpus of conversations](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-010)
- [chatbot data](https://jeongukjae.github.io/tfds-korean/datasets/korean_chatbot_qa_data.html)
## references
- [WelllnessConversation-LanguageModel](https://github.com/nawnoes/WellnessConversation-LanguageModel)
- [KoGPT2: SKT-AI](https://github.com/SKT-AI/KoGPT2) |
madatnlp/sk-kogptv2-kormath-causal | madatnlp | 2022-05-16T07:56:43Z | 8 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-13T11:28:16Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/sk-kogptv2-kormath-causal
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/sk-kogptv2-kormath-causal
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3184
- Validation Loss: 1.4046
- Epoch: 15
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2.2999999e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7142 | 1.8683 | 0 |
| 1.6077 | 1.4417 | 1 |
| 1.2458 | 1.3161 | 2 |
| 1.0396 | 1.2704 | 3 |
| 0.8848 | 1.2818 | 4 |
| 0.7634 | 1.2579 | 5 |
| 0.6699 | 1.2724 | 6 |
| 0.5948 | 1.2718 | 7 |
| 0.5306 | 1.3300 | 8 |
| 0.4832 | 1.3377 | 9 |
| 0.4401 | 1.3038 | 10 |
| 0.4053 | 1.3622 | 11 |
| 0.3782 | 1.3577 | 12 |
| 0.3550 | 1.3696 | 13 |
| 0.3347 | 1.3682 | 14 |
| 0.3184 | 1.4046 | 15 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
kompactss/JeBERT_je_ko | kompactss | 2022-05-16T06:11:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-01T15:03:18Z | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Jeju dialect -> Standard Korean
- Made by Team 3 of the 3rd Goorm NLP course!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)
## 3. Hyper Parameters
- Epoch : 10 epochs (best at epoch 8)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) dataset : 79.0
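## 5. Usage Sketch
A minimal translation sketch — it assumes the repo's WordPiece vocab loads with `BertTokenizer` and that the saved config already defines the decoder start token:
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("kompactss/JeBERT_je_ko")
model = EncoderDecoderModel.from_pretrained("kompactss/JeBERT_je_ko")

# "혼저 옵서예" is a common Jeju-dialect greeting, used here as a placeholder input
inputs = tokenizer("혼저 옵서예", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```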
---
### CREDIT
- 주형준 : [email protected]
- 강가람 : [email protected]
- 고광연 : [email protected]
- 김수연 : [email protected]
- 이원경 : [email protected]
- 조성은 : [email protected] |
kompactss/JeBERT_ko_je_v2 | kompactss | 2022-05-16T06:10:50Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-02T17:30:31Z | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Standard Korean -> Jeju dialect
- Made by Team 3 of the 3rd Goorm NLP course!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters) v2
## 3. Hyper Parameters
- Epoch : 10 epochs (best at epoch 7)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) dataset : 67.6
---
### CREDIT
- 주형준 : [email protected]
- 강가람 : [email protected]
- 고광연 : [email protected]
- 김수연 : [email protected]
- 이원경 : [email protected]
- 조성은 : [email protected] |
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum-v2 | yogeshchandrasekharuni | 2022-05-16T05:52:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-16T05:06:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-paraphrase-finetuned-xsum-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-finetuned-xsum-v2
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2329
- Rouge1: 100.0
- Rouge2: 100.0
- Rougel: 100.0
- Rougelsum: 100.0
- Gen Len: 9.2619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 21 | 1.2954 | 66.7012 | 60.8612 | 66.5163 | 66.4352 | 13.2857 |
| No log | 2.0 | 42 | 0.6866 | 86.8284 | 82.7835 | 86.7208 | 86.784 | 9.5238 |
| No log | 3.0 | 63 | 0.4652 | 95.1892 | 93.5619 | 95.2567 | 95.1657 | 10.3095 |
| No log | 4.0 | 84 | 0.4280 | 97.7463 | 97.1782 | 97.8708 | 97.718 | 9.5 |
| No log | 5.0 | 105 | 0.3712 | 99.6435 | 99.5767 | 99.6435 | 99.6435 | 9.3571 |
| No log | 6.0 | 126 | 0.4451 | 99.2695 | 98.9418 | 99.1883 | 99.3506 | 9.3095 |
| No log | 7.0 | 147 | 0.3169 | 99.246 | 99.0232 | 99.246 | 99.4048 | 9.619 |
| No log | 8.0 | 168 | 0.2942 | 100.0 | 100.0 | 100.0 | 100.0 | 9.4048 |
| No log | 9.0 | 189 | 0.3105 | 100.0 | 100.0 | 100.0 | 100.0 | 9.1667 |
| No log | 10.0 | 210 | 0.3035 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2619 |
| No log | 11.0 | 231 | 0.2983 | 100.0 | 100.0 | 100.0 | 100.0 | 10.5714 |
| No log | 12.0 | 252 | 0.2497 | 100.0 | 100.0 | 100.0 | 100.0 | 9.4286 |
| No log | 13.0 | 273 | 0.2911 | 100.0 | 100.0 | 100.0 | 100.0 | 9.1667 |
| No log | 14.0 | 294 | 0.2619 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2143 |
| No log | 15.0 | 315 | 0.2510 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2381 |
| No log | 16.0 | 336 | 0.2647 | 100.0 | 100.0 | 100.0 | 100.0 | 9.9048 |
| No log | 17.0 | 357 | 0.2438 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2143 |
| No log | 18.0 | 378 | 0.2324 | 100.0 | 100.0 | 100.0 | 100.0 | 9.3095 |
| No log | 19.0 | 399 | 0.2296 | 100.0 | 100.0 | 100.0 | 100.0 | 9.3095 |
| No log | 20.0 | 420 | 0.2329 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2619 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
fancyerii/bert-finetuned-ner | fancyerii | 2022-05-16T05:35:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-16T05:00:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9387755102040817
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9454423928481912
- name: Accuracy
type: accuracy
value: 0.9869606169423677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9388
- Recall: 0.9522
- F1: 0.9454
- Accuracy: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
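For example, the model can be used via the token-classification pipeline (a minimal sketch):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="fancyerii/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```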
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0857 | 1.0 | 1756 | 0.0635 | 0.9121 | 0.9359 | 0.9238 | 0.9830 |
| 0.0318 | 2.0 | 3512 | 0.0586 | 0.9245 | 0.9465 | 0.9354 | 0.9857 |
| 0.0222 | 3.0 | 5268 | 0.0592 | 0.9388 | 0.9522 | 0.9454 | 0.9870 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.6
|
nttoanh/t5vi-finetuned-en-to-vi | nttoanh | 2022-05-15T22:20:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:mt_eng_vietnamese",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T17:03:36Z | ---
tags:
- generated_from_trainer
datasets:
- mt_eng_vietnamese
metrics:
- bleu
model-index:
- name: t5vi-finetuned-en-to-vi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mt_eng_vietnamese
type: mt_eng_vietnamese
args: iwslt2015-en-vi
metrics:
- name: Bleu
type: bleu
value: 13.547
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5vi-finetuned-en-to-vi
This model is a fine-tuned version of [imthanhlv/t5vi](https://huggingface.co/imthanhlv/t5vi) on the mt_eng_vietnamese dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3827
- Bleu: 13.547
- Gen Len: 17.3719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8026 | 1.0 | 6666 | 1.5907 | 10.9756 | 17.3231 |
| 1.6217 | 2.0 | 13332 | 1.4635 | 12.375 | 17.3444 |
| 1.5087 | 3.0 | 19998 | 1.4131 | 13.1828 | 17.3924 |
| 1.4446 | 4.0 | 26664 | 1.3915 | 13.5217 | 17.3617 |
| 1.4076 | 5.0 | 33330 | 1.3827 | 13.547 | 17.3719 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
mateotyz/tf-xml-r-base-ape-swm | mateotyz | 2022-05-15T21:19:18Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-05-15T18:47:41Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mateotyz/tf-xml-r-base-ape-swm
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mateotyz/tf-xml-r-base-ape-swm
This model is a fine-tuned version of [jplu/tf-xlm-roberta-base](https://huggingface.co/jplu/tf-xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1811
- Validation Loss: 1.0441
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3563 | 1.0668 | 0 |
| 1.1682 | 1.0687 | 1 |
| 1.1811 | 1.0441 | 2 |
### Framework versions
- Transformers 4.19.1
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
KhariotnovKK/luna_lender_v1 | KhariotnovKK | 2022-05-15T18:37:37Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-06T08:33:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 260.20 +/- 20.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is shown below; the checkpoint filename is an assumption, so check the repo's file listing for the actual name.
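```python
# A minimal sketch; the checkpoint filename is an assumption (check the
# repo's file listing for the actual name).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(
    repo_id="KhariotnovKK/luna_lender_v1",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```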
|
send-it/TEST5ppo-LunarLander-v2 | send-it | 2022-05-15T18:30:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T18:30:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 270.57 +/- 10.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
prashanth/mbart-large-cc25-ge-en-to-hi | prashanth | 2022-05-15T17:11:05Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:hindi_english_machine_translation",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-14T23:04:55Z | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
metrics:
- bleu
model-index:
- name: mbart-large-cc25-ge-en-to-hi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: hindi_english_machine_translation
type: hindi_english_machine_translation
args: hi-en
metrics:
- name: Bleu
type: bleu
value: 4.5974
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-ge-en-to-hi
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3397
- Bleu: 4.5974
- Gen Len: 66.244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 1.4602 | 1.0 | 135739 | 1.3397 | 4.5974 | 66.244 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
|
huggingtweets/dclblogger-loopifyyy | huggingtweets | 2022-05-15T15:32:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-15T15:28:31Z | ---
language: en
thumbnail: http://www.huggingtweets.com/dclblogger-loopifyyy/1652628765621/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1472740175130230784/L7Xcs7wJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480550067564163078/D90SnyUa_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matty & Loopify 🧙♂️</div>
<div style="text-align: center; font-size: 14px;">@dclblogger-loopifyyy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matty & Loopify 🧙♂️.
| Data | Matty | Loopify 🧙♂️ |
| --- | --- | --- |
| Tweets downloaded | 3250 | 3250 |
| Retweets | 62 | 117 |
| Short tweets | 494 | 867 |
| Tweets kept | 2694 | 2266 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pq5pxck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dclblogger-loopifyyy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/as5uacn5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dclblogger-loopifyyy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kktoto/kt_punc | kktoto | 2022-05-15T15:16:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:chn_senti_corp",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-15T13:47:21Z | ---
tags:
- generated_from_trainer
datasets:
- chn_senti_corp
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: kt_punc
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: chn_senti_corp
type: chn_senti_corp
args: default
metrics:
- name: Precision
type: precision
value: 0.7078651685393258
- name: Recall
type: recall
value: 0.7313662547821116
- name: F1
type: f1
value: 0.7194238380517767
- name: Accuracy
type: accuracy
value: 0.957316742326961
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kt_punc
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the chn_senti_corp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Precision: 0.7079
- Recall: 0.7314
- F1: 0.7194
- Accuracy: 0.9573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1661 | 1.0 | 600 | 0.1351 | 0.6566 | 0.6833 | 0.6697 | 0.9498 |
| 0.1246 | 2.0 | 1200 | 0.1330 | 0.6854 | 0.6665 | 0.6758 | 0.9521 |
| 0.1121 | 3.0 | 1800 | 0.1303 | 0.6885 | 0.6994 | 0.6939 | 0.9537 |
| 0.1008 | 4.0 | 2400 | 0.1359 | 0.6836 | 0.7248 | 0.7036 | 0.9543 |
| 0.0809 | 5.0 | 3000 | 0.1404 | 0.7035 | 0.7082 | 0.7059 | 0.9559 |
| 0.0696 | 6.0 | 3600 | 0.1449 | 0.6986 | 0.7224 | 0.7103 | 0.9560 |
| 0.0628 | 7.0 | 4200 | 0.1563 | 0.7063 | 0.7214 | 0.7138 | 0.9567 |
| 0.0561 | 8.0 | 4800 | 0.1618 | 0.7024 | 0.7333 | 0.7175 | 0.9568 |
| 0.0525 | 9.0 | 5400 | 0.1669 | 0.7083 | 0.7335 | 0.7207 | 0.9574 |
| 0.0453 | 10.0 | 6000 | 0.1703 | 0.7079 | 0.7314 | 0.7194 | 0.9573 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
traxes/repos | traxes | 2022-05-15T15:03:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T15:03:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -130.18 +/- 34.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
umbertospazio/1500000_PPO-LunarLander-v2 | umbertospazio | 2022-05-15T15:03:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T15:02:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 283.46 +/- 17.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Zohar/distilgpt2-finetuned-negative-restaurant-reviews-clean | Zohar | 2022-05-15T14:12:08Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-15T11:47:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-negative-restaurant-reviews-clean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-negative-restaurant-reviews-clean
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6841 | 1.0 | 3105 | 3.5793 |
| 3.6184 | 2.0 | 6210 | 3.5313 |
| 3.5943 | 3.0 | 9315 | 3.5187 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
|
KrusHan/PPO-LunarLander-v2 | KrusHan | 2022-05-15T13:18:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T13:18:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 260.52 +/- 27.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
robert1003/LunarLander-v2-ppo | robert1003 | 2022-05-15T13:15:46Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T05:03:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 280.07 +/- 14.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huggingtweets/medvedevrussia | huggingtweets | 2022-05-15T12:26:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-15T12:26:21Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2348558617/x0vh6bui3sq97vt4jd2n_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Дмитрий Медведев</div>
<div style="text-align: center; font-size: 14px;">@medvedevrussia</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Дмитрий Медведев.
| Data | Дмитрий Медведев |
| --- | --- |
| Tweets downloaded | 1740 |
| Retweets | 300 |
| Short tweets | 48 |
| Tweets kept | 1392 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s7c3vz9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medvedevrussia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1e00s9pz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1e00s9pz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/medvedevrussia')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
FollishBoi/dqn-MountainCar-v0-try3 | FollishBoi | 2022-05-15T12:01:35Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T12:01:12Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -104.00 +/- 2.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch; the checkpoint filename is an assumption (check the repo's file listing).
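```python
# A minimal sketch; the checkpoint filename is an assumption (check the
# repo's file listing for the actual name).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a DQN agent
checkpoint = load_from_hub(
    repo_id="FollishBoi/dqn-MountainCar-v0-try3",
    filename="dqn-MountainCar-v0.zip",  # assumed filename
)
model = DQN.load(checkpoint)

# Evaluate the agent over a few episodes
env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```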
|
NikiTricky/ffhq-autoencoder-16dim | NikiTricky | 2022-05-15T12:01:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-05-15T11:29:08Z | ---
license: apache-2.0
---
# FFHQ Autoencoder
An autoencoder trained on the **F**lickr-**F**aces-**HQ** dataset with 16 latent dimensions for 1000 epochs. **Note:** the training images were 128x128.
It was built for the [Latent Space Explorer](https://github.com/NikiTricky2/Latent-space-vizualizer). |
anas-awadalla/splinter-base-finetuned-squad | anas-awadalla | 2022-05-15T11:49:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T10:55:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-base-finetuned-squad
This model is a fine-tuned version of [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mubikan/xlm-roberta-base-finetuned-panx-de | mubikan | 2022-05-15T11:48:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-14T15:57:44Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8588964027959312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1383
- F1: 0.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2631 | 1.0 | 525 | 0.1596 | 0.8218 |
| 0.1296 | 2.0 | 1050 | 0.1353 | 0.8479 |
| 0.0821 | 3.0 | 1575 | 0.1383 | 0.8589 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
harikp20/hkp24 | harikp20 | 2022-05-15T11:34:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T08:30:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: hkp24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hkp24
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2249 | 1.0 | 5533 | 1.1675 |
| 0.961 | 2.0 | 11066 | 1.1376 |
| 0.7581 | 3.0 | 16599 | 1.1619 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
meln1k/ppo-BipedalWalker-v3 | meln1k | 2022-05-15T11:11:47Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T11:11:23Z | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 312.05 +/- 1.22
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch follows; the checkpoint filename is an assumption, so verify it against the repo's files.
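```python
# A minimal sketch; the checkpoint filename is an assumption (check the
# repo's file listing for the actual name).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(
    repo_id="meln1k/ppo-BipedalWalker-v3",
    filename="ppo-BipedalWalker-v3.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
env = gym.make("BipedalWalker-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```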
|
anas-awadalla/splinter-large-finetuned-squad | anas-awadalla | 2022-05-15T10:51:43Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-15T08:20:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-finetuned-squad
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ZaynSu99/weibo_senti_cls | ZaynSu99 | 2022-05-15T10:46:21Z | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | 2022-05-15T10:31:02Z | ---
license: afl-3.0
---
This model is for sentiment analysis of Weibo comments.
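No usage snippet ships with this card; below is a minimal sketch, assuming the repo exposes a standard `transformers` text-classification checkpoint (the pipeline task and output labels are assumptions, so check the repo files).

```python
# A minimal sketch, assuming this repo holds a standard transformers
# text-classification checkpoint; the task and labels are assumptions.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ZaynSu99/weibo_senti_cls",
)

# Example Weibo-style comment: "The service at this restaurant is great!"
print(classifier("这家餐厅的服务太棒了!"))
```
|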
FumaNet/TEST1PPO-CartPole-v1 | FumaNet | 2022-05-15T10:24:11Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T10:23:40Z | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 397.00 +/- 103.22
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is given below; the checkpoint filename is an assumption (see the repo's file listing).
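```python
# A minimal sketch; the checkpoint filename is an assumption (check the
# repo's file listing for the actual name).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(
    repo_id="FumaNet/TEST1PPO-CartPole-v1",
    filename="ppo-CartPole-v1.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```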
|
atsanda/ppo-LunarLander-v2 | atsanda | 2022-05-15T09:28:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T09:27:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 241.67 +/- 9.99
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
esh/MountainCar-v0 | esh | 2022-05-15T09:23:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T09:10:58Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -169.90 +/- 36.95
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Metformin/BART_medFineTune | Metformin | 2022-05-15T09:11:06Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-15T05:39:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Metformin/BART_medFineTune
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Metformin/BART_medFineTune
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7982
- Validation Loss: 0.9953
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 7820, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 100, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1563 | 1.3468 | 0 |
| 1.4157 | 1.2090 | 1 |
| 1.2579 | 1.1387 | 2 |
| 1.1819 | 1.0888 | 3 |
| 1.1438 | 1.0848 | 4 |
| 1.0629 | 1.0512 | 5 |
| 1.0163 | 1.0454 | 6 |
| 0.9801 | 1.0248 | 7 |
| 0.9530 | 1.0171 | 8 |
| 0.9262 | 1.0108 | 9 |
| 0.9124 | 1.0116 | 10 |
| 0.8853 | 1.0043 | 11 |
| 0.8658 | 1.0023 | 12 |
| 0.8511 | 0.9987 | 13 |
| 0.8394 | 0.9988 | 14 |
| 0.8298 | 0.9994 | 15 |
| 0.8175 | 0.9985 | 16 |
| 0.8105 | 0.9936 | 17 |
| 0.8033 | 0.9974 | 18 |
| 0.8012 | 0.9948 | 19 |
| 0.7997 | 0.9948 | 20 |
| 0.7970 | 0.9957 | 21 |
| 0.7956 | 0.9958 | 22 |
| 0.8002 | 0.9954 | 23 |
| 0.7951 | 0.9957 | 24 |
| 0.7994 | 0.9948 | 25 |
| 0.7964 | 0.9958 | 26 |
| 0.7948 | 0.9957 | 27 |
| 0.7979 | 0.9956 | 28 |
| 0.7982 | 0.9953 | 29 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.3
- Datasets 2.0.0
- Tokenizers 0.12.1
|
esh/ppo-LunarLander-v2 | esh | 2022-05-15T09:01:54Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-09T16:40:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.69 +/- 23.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
anas-awadalla/roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2 | anas-awadalla | 2022-05-15T07:40:11Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T05:02:33Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
meln1k/ppo-CarRacing-v0 | meln1k | 2022-05-15T07:31:25Z | 11 | 2 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-05-15T07:19:11Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 840.32 +/- 21.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch follows; the checkpoint filename is an assumption, and CarRacing policies usually require the same observation preprocessing used during training.
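```python
# A minimal loading sketch; the checkpoint filename is an assumption (check
# the repo's file listing). CarRacing policies are usually trained behind
# preprocessing wrappers (resizing, grayscale, frame stacking), so evaluation
# must recreate the same observation pipeline as training (not shown here).
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="meln1k/ppo-CarRacing-v0",
    filename="ppo-CarRacing-v0.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Inspect what observation shape/preprocessing the policy expects
print(model.observation_space)
```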
|
ruselkomp/xlm-roberta | ruselkomp | 2022-05-15T07:26:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T22:18:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta
This model is a fine-tuned version of [AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru](https://huggingface.co/AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0083 | 1.0 | 15104 | 0.9420 |
| 0.8093 | 2.0 | 30208 | 0.9264 |
| 0.5576 | 3.0 | 45312 | 1.1842 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
anas-awadalla/roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-2 | anas-awadalla | 2022-05-15T07:13:31Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T04:42:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0 | anas-awadalla | 2022-05-15T07:06:17Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T04:38:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-houlsby-few-shot-k-128-finetuned-squad-seed-0 | anas-awadalla | 2022-05-15T06:45:01Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-05-15T03:10:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-houlsby-few-shot-k-128-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 400
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
questgen/all-mpnet-base-v2-feature-extraction-pipeline | questgen | 2022-05-15T06:29:59Z | 8 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-05-15T06:25:37Z | ---
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8s), as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence-similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
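Concretely, with in-batch negatives this reduces to a softmax over the batch's cosine-similarity matrix, where each true pair sits on the diagonal. A minimal PyTorch sketch of that loss follows (the similarity scale of 20 is an assumption; the card does not state the value used):

```python
# A minimal sketch of the in-batch-negatives contrastive loss described
# above; the similarity scale (20.0) is an assumption, not from the card.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, scale=20.0):
    # emb_a, emb_b: (batch, dim) embeddings of the two sides of each pair
    a = F.normalize(emb_a, p=2, dim=1)
    b = F.normalize(emb_b, p=2, dim=1)
    scores = a @ b.T * scale  # (batch, batch) scaled cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)  # true pair is on the diagonal
```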
#### Hyperparameters
We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
danieleV9H/hubert-base-libri-clean-ft100h | danieleV9H | 2022-05-15T05:47:23Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-05-14T19:09:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: hubert-base-libri-clean-ft100h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-libri-clean-ft100h
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1324
- Wer: 0.1597
## Model description
More information needed
## Intended uses & limitations
More information needed
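The card leaves usage undocumented. A minimal inference sketch might look as follows; it is not taken from the card, and it assumes a local 16 kHz mono audio file at `example.wav` (with ffmpeg available for decoding):

```python
# Hedged inference sketch; "example.wav" is a placeholder audio file,
# assumed to be 16 kHz mono as is typical for HuBERT fine-tunes.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="danieleV9H/hubert-base-libri-clean-ft100h",
)
print(asr("example.wav")["text"])
```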
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
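
The original training script is not part of this card, but as a rough illustration these values might map onto `transformers.TrainingArguments` like so (the `output_dir` name is hypothetical):

```python
# Illustrative mapping of the hyperparameters listed above onto TrainingArguments.
# Effective batch size = per-device batch (8) * gradient accumulation (2) = 16.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-base-libri-clean-ft100h",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```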
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.14 | 250 | 4.1508 | 1.0000 |
| 4.4345 | 0.28 | 500 | 3.8766 | 1.0000 |
| 4.4345 | 0.42 | 750 | 3.4376 | 1.0000 |
| 2.8475 | 0.56 | 1000 | 2.7380 | 1.0000 |
| 2.8475 | 0.7 | 1250 | 0.8803 | 0.6766 |
| 1.1877 | 0.84 | 1500 | 0.5671 | 0.5102 |
| 1.1877 | 0.98 | 1750 | 0.4537 | 0.4388 |
| 0.5802 | 1.12 | 2000 | 0.3566 | 0.3740 |
| 0.5802 | 1.26 | 2250 | 0.2925 | 0.3209 |
| 0.4301 | 1.4 | 2500 | 0.2613 | 0.2952 |
| 0.4301 | 1.54 | 2750 | 0.2363 | 0.2715 |
| 0.3591 | 1.68 | 3000 | 0.2155 | 0.2552 |
| 0.3591 | 1.82 | 3250 | 0.2062 | 0.2418 |
| 0.3015 | 1.96 | 3500 | 0.1951 | 0.2308 |
| 0.3015 | 2.1 | 3750 | 0.1842 | 0.2207 |
| 0.2698 | 2.24 | 4000 | 0.1900 | 0.2112 |
| 0.2698 | 2.38 | 4250 | 0.1745 | 0.2048 |
| 0.2561 | 2.52 | 4500 | 0.1718 | 0.2040 |
| 0.2561 | 2.66 | 4750 | 0.1625 | 0.1939 |
| 0.2348 | 2.8 | 5000 | 0.1568 | 0.1867 |
| 0.2348 | 2.94 | 5250 | 0.1517 | 0.1855 |
| 0.2278 | 3.08 | 5500 | 0.1501 | 0.1807 |
| 0.2278 | 3.22 | 5750 | 0.1445 | 0.1772 |
| 0.2166 | 3.36 | 6000 | 0.1422 | 0.1752 |
| 0.2166 | 3.5 | 6250 | 0.1418 | 0.1741 |
| 0.2017 | 3.64 | 6500 | 0.1404 | 0.1695 |
| 0.2017 | 3.78 | 6750 | 0.1356 | 0.1674 |
| 0.1922 | 3.92 | 7000 | 0.1350 | 0.1688 |
| 0.1922 | 4.06 | 7250 | 0.1346 | 0.1638 |
| 0.1979 | 4.2 | 7500 | 0.1359 | 0.1638 |
| 0.1979 | 4.34 | 7750 | 0.1336 | 0.1612 |
| 0.1836 | 4.48 | 8000 | 0.1324 | 0.1613 |
| 0.1836 | 4.62 | 8250 | 0.1320 | 0.1606 |
| 0.1891 | 4.76 | 8500 | 0.1325 | 0.1598 |
| 0.1891 | 4.9 | 8750 | 0.1324 | 0.1597 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
ahmeddbahaa/mbart-large-50-finetuned-persian | ahmeddbahaa | 2022-05-15T04:01:56Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"summarization",
"persian",
"MBart50",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-05-14T13:40:15Z | ---
tags:
- summarization
- persian
- MBart50
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mbart-large-50-finetuned-persian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-persian
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1932
- Rouge-1: 26.11
- Rouge-2: 8.11
- Rouge-l: 21.09
- Gen Len: 37.29
- Bertscore: 71.08
## Model description
More information needed
## Intended uses & limitations
More information needed
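The card does not include a usage snippet. A hedged sketch is shown below; setting the tokenizer's source language to `fa_IR` is an assumption based on mBART-50's language-code conventions, not something the card states:

```python
# Hedged usage sketch for Persian abstractive summarization.
# src_lang="fa_IR" is an assumption; the input text is a placeholder.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "ahmeddbahaa/mbart-large-50-finetuned-persian"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="fa_IR")
model = MBartForConditionalGeneration.from_pretrained(model_id)

text = "..."  # a Persian news article
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```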
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 5.5612 | 1.0 | 1476 | 4.5015 | 17.07 | 3.14 | 13.54 | 47.49 | 66.83 |
| 4.3049 | 2.0 | 2952 | 4.1055 | 22.63 | 5.89 | 18.03 | 40.43 | 69.23 |
| 3.8154 | 3.0 | 4428 | 3.9822 | 24.57 | 7.15 | 19.74 | 37.35 | 70.36 |
| 3.3401 | 4.0 | 5904 | 4.0088 | 25.84 | 7.96 | 20.95 | 37.56 | 70.83 |
| 2.8879 | 5.0 | 7380 | 4.1932 | 26.24 | 8.26 | 21.23 | 37.78 | 71.05 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
smc/electric | smc | 2022-05-15T00:19:16Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-05-15T00:13:48Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: electric
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9166666865348816
---
# electric
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
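To try the classifier, a minimal sketch (not part of the autogenerated card; `photo.jpg` is a placeholder image path):

```python
# Hedged usage sketch; "photo.jpg" is a placeholder input image.
from transformers import pipeline

classifier = pipeline("image-classification", model="smc/electric")
print(classifier("photo.jpg"))
```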
## Example Images |
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-4 | anas-awadalla | 2022-05-14T23:53:22Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T23:32:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-4 | anas-awadalla | 2022-05-14T23:53:15Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T23:32:52Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
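The card omits a usage example. A minimal extractive-QA sketch with the standard `question-answering` pipeline might look like this (the question and context are placeholders, not evaluation data):

```python
# Hedged usage sketch for extractive question answering on SQuAD-style inputs.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-4",
)
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```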
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-1024-finetuned-squad-seed-0 | anas-awadalla | 2022-05-14T23:09:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T22:49:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-512-finetuned-squad-seed-2 | anas-awadalla | 2022-05-14T22:32:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-05-14T22:19:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|