modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
qualitydatalab/autotrain-car-review-project-966432120 | qualitydatalab | 2022-06-09T12:36:14Z | 11 | 1 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "autotrain", "en", "dataset:qualitydatalab/autotrain-data-car-review-project", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-09T12:30:01Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- qualitydatalab/autotrain-data-car-review-project
co2_eq_emissions: 0.061185706621337065
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 966432120
- CO2 Emissions (in grams): 0.061185706621337065
## Validation Metrics
- Loss: 0.6066656112670898
- Accuracy: 0.724822695035461
- Macro F1: 0.7077087000886584
- Micro F1: 0.7248226950354609
- Weighted F1: 0.7077087000886584
- Macro Precision: 0.7143184427227084
- Micro Precision: 0.724822695035461
- Weighted Precision: 0.7143184427227083
- Macro Recall: 0.7248226950354609
- Micro Recall: 0.724822695035461
- Weighted Recall: 0.724822695035461
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432120
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
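To turn the raw logits into a class label, one option is the following (a sketch that continues from the snippet above and assumes the usual `id2label` mapping in the model config):
```python
import torch

# Convert logits to probabilities and pick the most likely class.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted_id], probs.max().item())
```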
|
RalphX1/q-FrozenLake-v1-4x4-noSlippery | RalphX1 | 2022-06-09T12:04:01Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-06-09T12:02:58Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="RalphX1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
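The snippet relies on helpers defined in the accompanying training notebook. A minimal sketch of what `load_from_hub` might look like (an assumption, not the notebook's exact code):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table bundle from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```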
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base | nestoralvaro | 2022-06-09T11:54:52Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-06-09T05:36:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.8441
- Rouge2: 0.0894
- Rougel: 0.8428
- Rougelsum: 0.844
- Gen Len: 6.338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 89332 | nan | 0.8441 | 0.0894 | 0.8428 | 0.844 | 6.338 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
allegro/herbert-base-cased | allegro | 2022-06-09T11:36:39Z | 80,570 | 16 | transformers | ["transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "herbert", "pl", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-02T23:29:05Z |
---
language: pl
tags:
- herbert
license: cc-by-4.0
---
# HerBERT
**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based Language Model trained on Polish corpora
using Masked Language Modelling (MLM) and Sentence Structural Objective (SSO) with dynamic masking of whole words. For more details, please refer to: [HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish](https://www.aclweb.org/anthology/2021.bsnlp-1.1/).
Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.9.
## Corpus
HerBERT was trained on six different corpora available for the Polish language:
| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1)| 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
## Tokenizer
The training dataset was tokenized into subwords using a character-level byte-pair encoding (``CharBPETokenizer``) with
a vocabulary size of 50k tokens. The tokenizer itself was trained with the [tokenizers](https://github.com/huggingface/tokenizers) library.
We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.
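For example, a minimal sketch loading the fast tokenizer directly:
```python
from transformers import HerbertTokenizerFast

tokenizer = HerbertTokenizerFast.from_pretrained("allegro/herbert-base-cased")
print(tokenizer.tokenize("Kraków jest pięknym miastem."))
```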
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-base-cased")
model = AutoModel.from_pretrained("allegro/herbert-base-cased")
output = model(
**tokenizer.batch_encode_plus(
[
(
"A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.",
"A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy."
)
],
padding='longest',
add_special_tokens=True,
return_tensors='pt'
)
)
```
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{mroczkowski-etal-2021-herbert,
title = "{H}er{BERT}: Efficiently Pretrained Transformer-based Language Model for {P}olish",
author = "Mroczkowski, Robert and
Rybak, Piotr and
Wr{\'o}blewska, Alina and
Gawlik, Ireneusz",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.1",
pages = "1--10",
}
```
## Authors
The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/).
You can contact us at: <a href="mailto:[email protected]">[email protected]</a>
|
FritzOS/TEdetection_distiBERT_mLM_V2_shuffleplus3 | FritzOS | 2022-06-09T11:28:40Z | 5 | 0 | transformers | ["transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-06-09T11:28:25Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_mLM_V2_shuffleplus3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_mLM_V2_shuffleplus3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/politifact | huggingtweets | 2022-06-09T11:14:17Z | 4 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-09T11:13:06Z |
---
language: en
thumbnail: http://www.huggingtweets.com/politifact/1654773253130/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1286766140115517441/8rq6ZxZm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PolitiFact</div>
<div style="text-align: center; font-size: 14px;">@politifact</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PolitiFact.
| Data | PolitiFact |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 680 |
| Short tweets | 14 |
| Tweets kept | 2556 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vfo2t7i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @politifact's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7h3iptm6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7h3iptm6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/politifact')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
resul-ai/bert-finetuned-ner | resul-ai | 2022-06-09T11:11:34Z | 4 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-06-09T10:50:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9344479390829333
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9421680714345323
- name: Accuracy
type: accuracy
value: 0.9859745687878966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0644
- Precision: 0.9344
- Recall: 0.9500
- F1: 0.9422
- Accuracy: 0.9860
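A quick way to try the fine-tuned model is the `token-classification` pipeline (a minimal sketch):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="resul-ai/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```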
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0854 | 1.0 | 1756 | 0.0632 | 0.9080 | 0.9352 | 0.9214 | 0.9822 |
| 0.0401 | 2.0 | 3512 | 0.0605 | 0.9302 | 0.9485 | 0.9393 | 0.9856 |
| 0.0204 | 3.0 | 5268 | 0.0644 | 0.9344 | 0.9500 | 0.9422 | 0.9860 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
i8pxgd2s/q-FrozenLake-v1-4x4-Slippery | i8pxgd2s | 2022-06-09T10:29:25Z | 0 | 0 | null | ["FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-06-09T10:29:18Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.75 +/- 0.43
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="i8pxgd2s/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/osanseviero | huggingtweets | 2022-06-09T10:20:54Z | 105 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-09T10:15:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/osanseviero/1654769951427/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1106315906165157889/0Hxb1ESL_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Omar Sanseviero</div>
<div style="text-align: center; font-size: 14px;">@osanseviero</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Omar Sanseviero.
| Data | Omar Sanseviero |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 1158 |
| Short tweets | 224 |
| Tweets kept | 1862 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29bkab0t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @osanseviero's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1s35jikq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1s35jikq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/osanseviero')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pourmand1376/arabic-quran-nahj-sahife | pourmand1376 | 2022-06-09T10:18:17Z | 44 | 3 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "ar", "license:gpl-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-06-09T08:51:28Z |
---
license: gpl-2.0
language: ar
---
A model jointly trained and fine-tuned on the Quran, Saheefa and Nahj-al-Balaqa. All datasets are available [here](https://github.com/language-ml/course-nlp-ir-1-text-exploring/tree/main/exploring-datasets/religious_text). Code will be available soon ...
Some examples of filling the mask:
- ```
ذَلِكَ [MASK] لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ
```
- ```
يَا أَيُّهَا النَّاسُ اعْبُدُوا رَبَّكُمُ الَّذِي خَلَقَكُمْ وَالَّذِينَ مِنْ قَبْلِكُمْ لَعَلَّكُمْ [MASK]
```
This model was fine-tuned from [Bert Base Arabic](https://huggingface.co/asafaya/bert-base-arabic) for 30 epochs using `Masked Language Modeling`. In addition, after every 5 epochs we completely re-masked the words so that the model learns the embeddings well and does not overfit the data.
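One way to try the masked examples above is the standard `fill-mask` pipeline (a minimal sketch):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pourmand1376/arabic-quran-nahj-sahife")
for prediction in fill_mask("ذَلِكَ [MASK] لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ"):
    print(prediction["token_str"], round(prediction["score"], 3))
```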
|
yogeshkulkarni/MidcurveNN | yogeshkulkarni | 2022-06-09T09:47:16Z | 0 | 0 | null | ["arxiv:1904.0429", "region:us"] | null | 2022-06-06T10:55:33Z |
---
license: apache-2.0
---
# MidcurveNN
Midcurve by Neural Networks

## Description
- Goal: Given a 2D closed shape (closed polygon) find its midcurve (polyline, closed or open)
- Input: set of points or set of connected lines, non-intersecting, simple, convex, closed polygon
- Output: another set of points or set of connected lines, open/branched polygons possible
## ToDos
- Based on the code at https://github.com/yogeshhk/MidcurveNN/tree/master/src/simpleencoderdecoder, prepare a Trainer class to train the model using the dataset uploaded here, and push the resulting model here
- Prepare a Gradio demo Space here, as well as an inference API that takes a profile image and generates the midcurve image (a rough sketch follows below)
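A rough sketch of such a Gradio demo (the `predict_midcurve` function is a hypothetical placeholder for the actual encoder-decoder inference):
```python
import gradio as gr

def predict_midcurve(profile_img):
    # Hypothetical placeholder: a real implementation would run the trained
    # encoder-decoder network on the profile image and return the midcurve image.
    return profile_img

demo = gr.Interface(fn=predict_midcurve, inputs=gr.Image(type="pil"), outputs=gr.Image(type="pil"))
demo.launch()
```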
## Publications/Talks
- viXra paper: MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon, viXra.org e-print archive, viXra:1904.0429, http://vixra.org/abs/1904.0429
- ODSC proposal https://confengine.com/odsc-india-2019/proposal/10090/midcurvenn-encoder-decoder-neural-network-for-computing-midcurve-of-a-thin-polygon
- CAD Conference 2021, Barcelona, pages 223-225 http://www.cad-conference.net/files/CAD21/CAD21_223-225.pdf
- CAD & Applications 2022 Journal paper 19(6) http://www.cad-journal.net/files/vol_19/CAD_19(6)_2022_1154-1161.pdf
- Google Developers Dev Library https://devlibrary.withgoogle.com/products/ml/repos/yogeshhk-MidcurveNN
## Citation
```
@article{MidcurveNN,
doi = {https://doi.org/10.14733/cadaps.2022.1154-1161},
url = {https://www.cad-journal.net/files/vol_19/CAD_19(6)_2022_1154-1161.pdf},
author = {Kulkarni, Yogesh H.},
keywords = {Midcurve, Encoder-Decoder, Neural Network},
title = {MidcurveNN: Neural Network for Computing Midcurve of a Thin Polygon},
publisher = {CAD Solutions, LLC},
journal={Computer-Aided Design & Applications},
volume={19},
issue={6},
pages={1154-1161},
year = {2022}
}
```
|
Skil-Internal/bart-paraphrase-finetuned-xsum-v5 | Skil-Internal | 2022-06-09T09:42:05Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-06-09T09:13:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-finetuned-xsum-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-finetuned-xsum-v5
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 263 | 0.4728 | 38.7072 | 38.5333 | 38.6391 | 38.6212 | 7.0513 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RalphX1/TEST2ppo-LunarLander-v2 | RalphX1 | 2022-06-09T09:01:27Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-06-09T09:01:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 270.09 +/- 19.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
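Until the author adds their code, a possible loading and evaluation sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is a guess; adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub(repo_id="RalphX1/TEST2ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```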
|
edonath/pegasus-samsum | edonath | 2022-06-09T07:56:49Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-04-15T21:05:00Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
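A quick way to try the model on a SAMSum-style dialogue (a minimal sketch):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="edonath/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```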
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
victorlee071200/distilbert-base-cased-finetuned-squad_v2 | victorlee071200 | 2022-06-09T07:51:00Z | 18 | 0 | transformers | ["transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-06-08T17:41:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-cased-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-squad_v2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4225
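A quick usage sketch with the `question-answering` pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="victorlee071200/distilbert-base-cased-finetuned-squad_v2")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model was fine-tuned on the SQuAD v2 dataset.",
)
print(result["answer"], result["score"])
```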
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2416 | 1.0 | 8255 | 1.2973 |
| 0.9689 | 2.0 | 16510 | 1.3242 |
| 0.7803 | 3.0 | 24765 | 1.4225 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
auriolar/dqn-SpaceInvadersNoFrameskip-v4 | auriolar | 2022-06-09T07:35:49Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-06-09T07:35:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 15.50 +/- 12.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga auriolar -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga auriolar
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
victorlee071200/bert-base-cased-finetuned-squad | victorlee071200 | 2022-06-09T06:28:47Z | 26 | 0 | transformers | ["transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-06-02T21:56:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0302 | 1.0 | 5546 | 1.0068 |
| 0.7597 | 2.0 | 11092 | 0.9976 |
| 0.5483 | 3.0 | 16638 | 1.0835 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
victorlee071200/distilroberta-base-finetuned-squad | victorlee071200 | 2022-06-09T04:57:17Z | 14 | 0 | transformers | ["transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-06-02T23:21:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilroberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0927 | 1.0 | 5536 | 1.0290 |
| 0.87 | 2.0 | 11072 | 0.9683 |
| 0.7335 | 3.0 | 16608 | 1.0014 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base | nestoralvaro | 2022-06-09T04:30:48Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-06-08T05:57:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_162754.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.8027
- Rouge2: 0.0915
- Rougel: 0.802
- Rougelsum: 0.8026
- Gen Len: 6.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 276732 | nan | 0.8027 | 0.0915 | 0.802 | 0.8026 | 6.3401 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/usao926 | huggingtweets | 2022-06-09T03:57:49Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-09T03:57:41Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1329004510161694722/DkD9DvBN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">USAO@山奥</div>
<div style="text-align: center; font-size: 14px;">@usao926</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from USAO@山奥.
| Data | USAO@山奥 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 1041 |
| Short tweets | 1987 |
| Tweets kept | 221 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21po1181/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usao926's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jl5e9yl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jl5e9yl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/usao926')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/verizon | huggingtweets | 2022-06-09T00:33:36Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-08T23:20:44Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1496892874276880389/ndAolYWm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Verizon</div>
<div style="text-align: center; font-size: 14px;">@verizon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Verizon.
| Data | Verizon |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 408 |
| Short tweets | 188 |
| Tweets kept | 2650 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rssnlth/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @verizon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17qcsqw6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17qcsqw6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/verizon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mizoru/wav2vec2-large-xls-r-300m-chuvash-colab | mizoru | 2022-06-09T00:19:07Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-04-14T20:13:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-chuvash-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chuvash-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6998
- eval_wer: 0.7356
- eval_runtime: 233.6193
- eval_samples_per_second: 3.373
- eval_steps_per_second: 0.424
- epoch: 9.75
- step: 400
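A quick inference sketch (`sample.wav` is a placeholder for a 16 kHz Chuvash audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mizoru/wav2vec2-large-xls-r-300m-chuvash-colab")
print(asr("sample.wav")["text"])
```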
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/oddapt | huggingtweets | 2022-06-09T00:08:44Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-09T00:06:21Z |
---
language: en
thumbnail: http://www.huggingtweets.com/oddapt/1654733319638/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468077034169458690/gt5Iv_y7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Steve Hoyt</div>
<div style="text-align: center; font-size: 14px;">@oddapt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Steve Hoyt.
| Data | Steve Hoyt |
| --- | --- |
| Tweets downloaded | 2861 |
| Retweets | 615 |
| Short tweets | 192 |
| Tweets kept | 2054 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8pfy3hb1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oddapt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fphl051) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fphl051/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/oddapt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/nu-wave-x2 | nateraw | 2022-06-08T23:03:59Z | 0 | 2 | pytorch-lightning | ["pytorch-lightning", "audio-to-audio", "en", "dataset:vctk", "arxiv:2104.02321", "license:bsd-3-clause", "region:us"] | audio-to-audio | 2022-06-08T21:12:53Z |
---
language: en
license: bsd-3-clause
library_name: pytorch-lightning
tags:
- pytorch-lightning
- audio-to-audio
datasets: vctk
model_name: nu-wave-x2
---
# nu-wave-x2
## Model description
NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling
- [GitHub Repo](https://github.com/mindslab-ai/nuwave)
- [Paper](https://arxiv.org/pdf/2104.02321.pdf)
This model was trained by contributor [Frederico S. Oliveira](https://huggingface.co/freds0), who graciously [provided the checkpoint](https://github.com/mindslab-ai/nuwave/issues/18) in the original author's GitHub repo.
This model was trained using source code written by Junhyeok Lee and Seungu Han under the BSD 3-Clause License. All credit goes to them for this work.
This model takes in audio at 24kHz and upsamples it to 48kHz.
## Intended uses & limitations
#### How to use
You can try out this model here: [](https://colab.research.google.com/gist/nateraw/bd78af284ef78a960e18a75cb13deab1/nu-wave-x2.ipynb)
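For local use, a minimal starting point is to pull the checkpoint and load it with the nuwave codebase (the filename below is an assumption; check the repository's file list):
```python
from huggingface_hub import hf_hub_download

# Download the Lightning checkpoint; inference itself is done with the
# scripts in https://github.com/mindslab-ai/nuwave
ckpt_path = hf_hub_download(repo_id="nateraw/nu-wave-x2", filename="nu-wave-x2.ckpt")
print(ckpt_path)
```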
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
You can check out the authors' results at [their project page](https://mindslab-ai.github.io/nuwave/). The project page contains many samples of upsampled audio from the authors' models.
### BibTeX entry and citation info
```bibtex
@inproceedings{lee21nuwave,
author={Junhyeok Lee and Seungu Han},
title={{NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling}},
year=2021,
booktitle={Proc. Interspeech 2021},
pages={1634--1638},
doi={10.21437/Interspeech.2021-36}
}
```
|
Anjoe/kant-gpt2 | Anjoe | 2022-06-08T22:08:06Z | 157 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-07T18:51:18Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kant-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kant-gpt2
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8022
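A quick generation sketch (the German prompt is just an example):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Anjoe/kant-gpt2")
print(generator("Die Vernunft ist", max_length=50, num_return_sequences=1)[0]["generated_text"])
```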
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 22
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3257 | 1.0 | 1825 | 3.2231 |
| 2.9885 | 2.0 | 3650 | 3.0069 |
| 2.7955 | 3.0 | 5475 | 2.8440 |
| 2.5748 | 4.0 | 7300 | 2.7059 |
| 2.3545 | 5.0 | 9125 | 2.5806 |
| 2.1759 | 6.0 | 10950 | 2.4618 |
| 1.9697 | 7.0 | 12775 | 2.3553 |
| 1.7778 | 8.0 | 14600 | 2.2517 |
| 1.6192 | 9.0 | 16425 | 2.1599 |
| 1.4675 | 10.0 | 18250 | 2.0895 |
| 1.3195 | 11.0 | 20075 | 2.0138 |
| 1.2012 | 12.0 | 21900 | 1.9602 |
| 1.0828 | 13.0 | 23725 | 1.9097 |
| 0.9926 | 14.0 | 25550 | 1.8720 |
| 0.9076 | 15.0 | 27375 | 1.8426 |
| 0.8336 | 16.0 | 29200 | 1.8214 |
| 0.7649 | 17.0 | 31025 | 1.8058 |
| 0.7208 | 18.0 | 32850 | 1.7980 |
| 0.6798 | 19.0 | 34675 | 1.7938 |
| 0.647 | 20.0 | 36500 | 1.7969 |
| 0.6226 | 21.0 | 38325 | 1.7975 |
| 0.601 | 22.0 | 40150 | 1.8022 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
huggingtweets/sun_soony-unjaded_jade-veganhollyg | huggingtweets | 2022-06-08T21:45:56Z | 104 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-05-30T21:50:31Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sun_soony-unjaded_jade-veganhollyg/1654724750416/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1105554414427885569/XkyfcoMJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1290809762637131776/uwGH2mYu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/900359049061036032/LYf3Ouv__400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jade Bowler & soony & Holly Gabrielle</div>
<div style="text-align: center; font-size: 14px;">@sun_soony-unjaded_jade-veganhollyg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jade Bowler & soony & Holly Gabrielle.
| Data | Jade Bowler | soony | Holly Gabrielle |
| --- | --- | --- | --- |
| Tweets downloaded | 3170 | 815 | 1802 |
| Retweets | 121 | 260 | 276 |
| Short tweets | 120 | 47 | 253 |
| Tweets kept | 2929 | 508 | 1273 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/afi2j4p2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sun_soony-unjaded_jade-veganhollyg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uiqxuec) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uiqxuec/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sun_soony-unjaded_jade-veganhollyg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/neiltyson
|
huggingtweets
| 2022-06-08T21:26:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/neiltyson/1654723603504/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/74188698/NeilTysonOriginsA-Crop_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Neil deGrasse Tyson</div>
<div style="text-align: center; font-size: 14px;">@neiltyson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Neil deGrasse Tyson.
| Data | Neil deGrasse Tyson |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 10 |
| Short tweets | 87 |
| Tweets kept | 3137 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1v949iob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @neiltyson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kjzq9tjy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kjzq9tjy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/neiltyson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab
|
Sohaibsyed
| 2022-06-08T20:48:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-08T16:53:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3717
- Wer: 0.2972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0139 | 3.67 | 400 | 0.7020 | 0.7112 |
| 0.4129 | 7.34 | 800 | 0.4162 | 0.4503 |
| 0.1869 | 11.01 | 1200 | 0.4174 | 0.3959 |
| 0.1273 | 14.68 | 1600 | 0.4020 | 0.3695 |
| 0.0959 | 18.35 | 2000 | 0.4026 | 0.3545 |
| 0.0771 | 22.02 | 2400 | 0.3904 | 0.3361 |
| 0.0614 | 25.69 | 2800 | 0.3736 | 0.3127 |
| 0.0486 | 29.36 | 3200 | 0.3717 | 0.2972 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
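## Usage
A minimal transcription sketch (the audio path is a placeholder, and the input is assumed to be a mono recording resampled to 16 kHz; this is not the author's own inference setup):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab")
model = Wav2Vec2ForCTC.from_pretrained("Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab")

# "sample.wav" is a placeholder for a local mono recording.
speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```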
|
valurank/distilroberta-bias
|
valurank
| 2022-06-08T20:44:39Z | 3,074 | 9 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:valurank/wikirev-bias",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
language: en
datasets:
- valurank/wikirev-bias
---
# DistilROBERTA fine-tuned for bias detection
This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify text into 2 categories (neutral, biased).
## Training data
The dataset used to fine-tune the model is [wikirev-bias](https://huggingface.co/datasets/valurank/wikirev-bias), extracted from English Wikipedia revisions; see https://github.com/rpryzant/neutralizing-bias for details on the WNC wiki edits corpus.
## Inputs
Similar to its base model, this model accepts inputs with a maximum length of 512 tokens.
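## Usage
A minimal inference sketch; the exact label names come from the model's `id2label` config, so inspect `classifier.model.config.id2label` if they differ from what you expect:
```python
from transformers import pipeline

# Load the fine-tuned bias classifier and score a single sentence.
classifier = pipeline("text-classification", model="valurank/distilroberta-bias")
print(classifier("The new policy was welcomed by experts across the political spectrum."))
```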
|
valurank/distilroberta-propaganda-2class
|
valurank
| 2022-06-08T20:39:15Z | 11 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-propaganda-2class
results: []
---
# distilroberta-propaganda-2class
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the QCRI propaganda dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5087
- Acc: 0.7424
## Training and evaluation data
Training data is the 19-class QCRI propaganda data, with all propaganda classes collapsed to a single catch-all 'prop' class.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5737 | 1.0 | 493 | 0.5998 | 0.6515 |
| 0.4954 | 2.0 | 986 | 0.5530 | 0.7080 |
| 0.4774 | 3.0 | 1479 | 0.5331 | 0.7258 |
| 0.4846 | 4.0 | 1972 | 0.5247 | 0.7339 |
| 0.4749 | 5.0 | 2465 | 0.5392 | 0.7199 |
| 0.502 | 6.0 | 2958 | 0.5124 | 0.7466 |
| 0.457 | 7.0 | 3451 | 0.5167 | 0.7432 |
| 0.4899 | 8.0 | 3944 | 0.5160 | 0.7428 |
| 0.4833 | 9.0 | 4437 | 0.5280 | 0.7339 |
| 0.5114 | 10.0 | 4930 | 0.5112 | 0.7436 |
| 0.4419 | 11.0 | 5423 | 0.5060 | 0.7525 |
| 0.4743 | 12.0 | 5916 | 0.5031 | 0.7547 |
| 0.4597 | 13.0 | 6409 | 0.5043 | 0.7517 |
| 0.4861 | 14.0 | 6902 | 0.5055 | 0.7487 |
| 0.499 | 15.0 | 7395 | 0.5091 | 0.7419 |
| 0.501 | 16.0 | 7888 | 0.5037 | 0.7521 |
| 0.4659 | 17.0 | 8381 | 0.5087 | 0.7424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.1
- Datasets 1.11.0
- Tokenizers 0.10.3
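## Usage
A minimal inference sketch; the two class names (for example 'prop' versus the non-propaganda label) come from the model's `id2label` config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valurank/distilroberta-propaganda-2class")
model = AutoModelForSequenceClassification.from_pretrained("valurank/distilroberta-propaganda-2class")

# Tokenize with truncation, since news articles can exceed the 512-token limit.
inputs = tokenizer("Join us now, or watch everything you love be destroyed!", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs[0])})
```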
|
valurank/distilroberta-mbfc-bias
|
valurank
| 2022-06-08T20:34:29Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-mbfc-bias
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-mbfc-bias
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the Proppy dataset, using political bias from mediabiasfactcheck.com as labels.
It achieves the following results on the evaluation set:
- Loss: 1.4130
- Acc: 0.6348
## Training and evaluation data
The training data used is the [proppy corpus](https://zenodo.org/record/3271522). Articles are labeled with the political bias of their source publication, as scored by mediabiasfactcheck.com. See [Proppy: Organizing the News Based on Their Propagandistic Content](https://propaganda.qcri.org/papers/elsarticle-template.pdf) for details.
To create a more balanced training set, common labels are downsampled to have a maximum of 2000 articles. The resulting label distribution in the training data is as follows:
```
extremeright 689
leastbiased 2000
left 783
leftcenter 2000
right 1260
rightcenter 1418
unknown 2000
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9493 | 1.0 | 514 | 1.2765 | 0.4730 |
| 0.7376 | 2.0 | 1028 | 1.0003 | 0.5812 |
| 0.6702 | 3.0 | 1542 | 1.1294 | 0.5631 |
| 0.6161 | 4.0 | 2056 | 1.0439 | 0.6058 |
| 0.4934 | 5.0 | 2570 | 1.1196 | 0.6028 |
| 0.4558 | 6.0 | 3084 | 1.0993 | 0.5977 |
| 0.4717 | 7.0 | 3598 | 1.0308 | 0.6373 |
| 0.3961 | 8.0 | 4112 | 1.1291 | 0.6234 |
| 0.3829 | 9.0 | 4626 | 1.1554 | 0.6316 |
| 0.3442 | 10.0 | 5140 | 1.1548 | 0.6465 |
| 0.2505 | 11.0 | 5654 | 1.3605 | 0.6169 |
| 0.2105 | 12.0 | 6168 | 1.3310 | 0.6297 |
| 0.262 | 13.0 | 6682 | 1.2706 | 0.6383 |
| 0.2031 | 14.0 | 7196 | 1.3658 | 0.6378 |
| 0.2021 | 15.0 | 7710 | 1.4130 | 0.6348 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.1
- Datasets 1.11.0
- Tokenizers 0.10.3
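## Usage
A minimal inference sketch; the seven bias labels come from the model's `id2label` config, and `return_all_scores=True` surfaces a score for each of them:
```python
from transformers import pipeline

# Score one text against all seven political-bias classes.
classifier = pipeline("text-classification", model="valurank/distilroberta-mbfc-bias", return_all_scores=True)
print(classifier("The senator's plan drew both praise and criticism from economists."))
```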
|
hiranhsw/q-FrozenLake-v1-4x4-noSlippery
|
hiranhsw
| 2022-06-08T20:33:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T20:33:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="hiranhsw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
valurank/paraphrase-mpnet-base-v2-offensive
|
valurank
| 2022-06-08T20:33:14Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
license: other
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# valurank/paraphrase-mpnet-base-v2-offensive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('valurank/paraphrase-mpnet-base-v2-offensive')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('valurank/paraphrase-mpnet-base-v2-offensive')
model = AutoModel.from_pretrained('valurank/paraphrase-mpnet-base-v2-offensive')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=valurank/paraphrase-mpnet-base-v2-offensive)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1280 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
valurank/distilroberta-offensive
|
valurank
| 2022-06-08T20:31:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-offensive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-offensive
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4526
- Acc: 0.8975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2321 | 1.0 | 1030 | 0.2404 | 0.9044 |
| 0.2539 | 2.0 | 2060 | 0.2139 | 0.9098 |
| 0.1997 | 3.0 | 3090 | 0.2561 | 0.9090 |
| 0.1663 | 4.0 | 4120 | 0.2409 | 0.9030 |
| 0.1515 | 5.0 | 5150 | 0.3000 | 0.9055 |
| 0.1035 | 6.0 | 6180 | 0.4170 | 0.9027 |
| 0.0466 | 7.0 | 7210 | 0.4526 | 0.8975 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
valurank/distilroberta-hatespeech
|
valurank
| 2022-06-08T20:30:04Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-hatespeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-hatespeech
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3619
- Acc: 0.8423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3096 | 1.0 | 4021 | 0.3375 | 0.8540 |
| 0.3711 | 2.0 | 8042 | 0.3305 | 0.8574 |
| 0.322 | 3.0 | 12063 | 0.3398 | 0.8534 |
| 0.3197 | 4.0 | 16084 | 0.3444 | 0.8504 |
| 0.3332 | 5.0 | 20105 | 0.3619 | 0.8423 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
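## Usage
A minimal inference sketch; the pipeline also accepts a list of texts for batch scoring, and the label names come from the model's `id2label` config:
```python
from transformers import pipeline

# Batch scoring: pass a list of texts and get one prediction per text.
classifier = pipeline("text-classification", model="valurank/distilroberta-hatespeech")
print(classifier(["Have a wonderful day!", "People like you ruin everything."]))
```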
|
valurank/distilroberta-mbfc-bias-4class
|
valurank
| 2022-06-08T20:29:05Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-mbfc-bias-4class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-mbfc-bias-4class
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Acc: 0.8503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.488 | 1.0 | 584 | 0.3702 | 0.8519 |
| 0.3544 | 2.0 | 1168 | 0.3531 | 0.8575 |
| 0.3602 | 3.0 | 1752 | 0.3068 | 0.8896 |
| 0.2555 | 4.0 | 2336 | 0.3560 | 0.8715 |
| 0.1695 | 5.0 | 2920 | 0.3896 | 0.8704 |
| 0.117 | 6.0 | 3504 | 0.5336 | 0.8503 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
valurank/distilbert-allsides
|
valurank
| 2022-06-08T20:21:18Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilbert-allsides
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-allsides
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9138
- Acc: 0.7094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7667 | 1.0 | 822 | 0.7003 | 0.6820 |
| 0.6893 | 2.0 | 1644 | 0.6619 | 0.6981 |
| 0.6177 | 3.0 | 2466 | 0.6736 | 0.7064 |
| 0.595 | 4.0 | 3288 | 0.6642 | 0.7091 |
| 0.5179 | 5.0 | 4110 | 0.6936 | 0.7121 |
| 0.4698 | 6.0 | 4932 | 0.7670 | 0.7106 |
| 0.463 | 7.0 | 5754 | 0.8537 | 0.7121 |
| 0.4345 | 8.0 | 6576 | 0.9138 | 0.7094 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
valurank/MiniLM-L6-Keyword-Extraction
|
valurank
| 2022-06-08T20:17:38Z | 11,026 | 13 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:other",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-20T16:37:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: other
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as help from Google's Flax, JAX, and Cloud team members regarding efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs, as sketched below.
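A schematic version of this in-batch objective, assuming a batch of (anchor, positive) embedding pairs; this is an illustrative sketch, not the actual training code (see `train_script.py` in this repository):
```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb: torch.Tensor, positive_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Cosine similarity between every anchor and every candidate in the batch -> (B, B) score matrix.
    scores = F.cosine_similarity(anchor_emb.unsqueeze(1), positive_emb.unsqueeze(0), dim=-1) * scale
    # The true pair for anchor i is candidate i, so the target indices are 0..B-1.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```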
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
joniponi/TEST2ppo-LunarLander-v2
|
joniponi
| 2022-06-08T20:00:34Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T20:00:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 253.16 +/- 21.62
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is an assumption based on common naming conventions for SB3 checkpoints.
checkpoint = load_from_hub(repo_id="joniponi/TEST2ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
cutten/wav2vec2-large-multilang-cv-ru-night
|
cutten
| 2022-06-08T19:58:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-08T14:24:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-multilang-cv-ru-night
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-multilang-cv-ru-night
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6617
- Wer: 0.5097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 8.725 | 1.58 | 500 | 3.2788 | 1.0 |
| 3.1184 | 3.15 | 1000 | 2.4018 | 1.0015 |
| 1.2393 | 4.73 | 1500 | 0.6213 | 0.7655 |
| 0.6899 | 6.31 | 2000 | 0.5518 | 0.6811 |
| 0.5532 | 7.89 | 2500 | 0.5102 | 0.6467 |
| 0.4604 | 9.46 | 3000 | 0.4887 | 0.6213 |
| 0.4095 | 11.04 | 3500 | 0.4874 | 0.6042 |
| 0.3565 | 12.62 | 4000 | 0.4810 | 0.5893 |
| 0.3238 | 14.2 | 4500 | 0.5028 | 0.5890 |
| 0.3011 | 15.77 | 5000 | 0.5475 | 0.5808 |
| 0.2827 | 17.35 | 5500 | 0.5289 | 0.5720 |
| 0.2659 | 18.93 | 6000 | 0.5496 | 0.5733 |
| 0.2445 | 20.5 | 6500 | 0.5354 | 0.5737 |
| 0.2366 | 22.08 | 7000 | 0.5357 | 0.5686 |
| 0.2181 | 23.66 | 7500 | 0.5491 | 0.5611 |
| 0.2146 | 25.24 | 8000 | 0.5591 | 0.5597 |
| 0.2006 | 26.81 | 8500 | 0.5625 | 0.5631 |
| 0.1912 | 28.39 | 9000 | 0.5577 | 0.5647 |
| 0.1821 | 29.97 | 9500 | 0.5684 | 0.5519 |
| 0.1744 | 31.55 | 10000 | 0.5639 | 0.5551 |
| 0.1691 | 33.12 | 10500 | 0.5596 | 0.5425 |
| 0.1577 | 34.7 | 11000 | 0.5770 | 0.5551 |
| 0.1522 | 36.28 | 11500 | 0.5634 | 0.5560 |
| 0.1468 | 37.85 | 12000 | 0.5815 | 0.5453 |
| 0.1508 | 39.43 | 12500 | 0.6053 | 0.5490 |
| 0.1394 | 41.01 | 13000 | 0.6193 | 0.5504 |
| 0.1291 | 42.59 | 13500 | 0.5930 | 0.5424 |
| 0.1345 | 44.16 | 14000 | 0.6283 | 0.5442 |
| 0.1296 | 45.74 | 14500 | 0.6063 | 0.5560 |
| 0.1286 | 47.32 | 15000 | 0.6248 | 0.5378 |
| 0.1231 | 48.9 | 15500 | 0.6106 | 0.5405 |
| 0.1189 | 50.47 | 16000 | 0.6164 | 0.5342 |
| 0.1127 | 52.05 | 16500 | 0.6269 | 0.5359 |
| 0.112 | 53.63 | 17000 | 0.6170 | 0.5390 |
| 0.1113 | 55.21 | 17500 | 0.6489 | 0.5385 |
| 0.1023 | 56.78 | 18000 | 0.6826 | 0.5490 |
| 0.1069 | 58.36 | 18500 | 0.6147 | 0.5296 |
| 0.1008 | 59.94 | 19000 | 0.6414 | 0.5332 |
| 0.1018 | 61.51 | 19500 | 0.6454 | 0.5288 |
| 0.0989 | 63.09 | 20000 | 0.6603 | 0.5303 |
| 0.0944 | 64.67 | 20500 | 0.6350 | 0.5288 |
| 0.0905 | 66.25 | 21000 | 0.6386 | 0.5247 |
| 0.0837 | 67.82 | 21500 | 0.6563 | 0.5298 |
| 0.0868 | 69.4 | 22000 | 0.6375 | 0.5208 |
| 0.0827 | 70.98 | 22500 | 0.6401 | 0.5271 |
| 0.0797 | 72.56 | 23000 | 0.6723 | 0.5191 |
| 0.0847 | 74.13 | 23500 | 0.6610 | 0.5213 |
| 0.0818 | 75.71 | 24000 | 0.6774 | 0.5254 |
| 0.0793 | 77.29 | 24500 | 0.6543 | 0.5250 |
| 0.0758 | 78.86 | 25000 | 0.6607 | 0.5218 |
| 0.0755 | 80.44 | 25500 | 0.6599 | 0.5160 |
| 0.0722 | 82.02 | 26000 | 0.6683 | 0.5196 |
| 0.0714 | 83.6 | 26500 | 0.6941 | 0.5180 |
| 0.0684 | 85.17 | 27000 | 0.6581 | 0.5167 |
| 0.0686 | 86.75 | 27500 | 0.6651 | 0.5172 |
| 0.0712 | 88.33 | 28000 | 0.6547 | 0.5208 |
| 0.0697 | 89.91 | 28500 | 0.6555 | 0.5162 |
| 0.0696 | 91.48 | 29000 | 0.6678 | 0.5107 |
| 0.0686 | 93.06 | 29500 | 0.6630 | 0.5124 |
| 0.0671 | 94.64 | 30000 | 0.6675 | 0.5143 |
| 0.0668 | 96.21 | 30500 | 0.6602 | 0.5107 |
| 0.0666 | 97.79 | 31000 | 0.6611 | 0.5097 |
| 0.0664 | 99.37 | 31500 | 0.6617 | 0.5097 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
renjithks/layoutlmv2-er-ner
|
renjithks
| 2022-06-08T19:37:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-05T15:40:30Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv2-er-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-er-ner
This model is a fine-tuned version of [renjithks/layoutlmv2-cord-ner](https://huggingface.co/renjithks/layoutlmv2-cord-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1217
- Precision: 0.7810
- Recall: 0.8085
- F1: 0.7945
- Accuracy: 0.9747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 41 | 0.5441 | 0.0 | 0.0 | 0.0 | 0.8851 |
| No log | 2.0 | 82 | 0.4660 | 0.1019 | 0.0732 | 0.0852 | 0.8690 |
| No log | 3.0 | 123 | 0.2506 | 0.4404 | 0.4828 | 0.4606 | 0.9240 |
| No log | 4.0 | 164 | 0.1725 | 0.6120 | 0.6076 | 0.6098 | 0.9529 |
| No log | 5.0 | 205 | 0.1387 | 0.7204 | 0.7245 | 0.7225 | 0.9671 |
| No log | 6.0 | 246 | 0.1237 | 0.7742 | 0.7747 | 0.7745 | 0.9722 |
| No log | 7.0 | 287 | 0.1231 | 0.7619 | 0.7554 | 0.7586 | 0.9697 |
| No log | 8.0 | 328 | 0.1199 | 0.7994 | 0.7719 | 0.7854 | 0.9738 |
| No log | 9.0 | 369 | 0.1197 | 0.7937 | 0.8113 | 0.8024 | 0.9741 |
| No log | 10.0 | 410 | 0.1284 | 0.7581 | 0.7597 | 0.7589 | 0.9690 |
| No log | 11.0 | 451 | 0.1172 | 0.7792 | 0.7848 | 0.7820 | 0.9738 |
| No log | 12.0 | 492 | 0.1192 | 0.7913 | 0.7970 | 0.7941 | 0.9743 |
| 0.1858 | 13.0 | 533 | 0.1175 | 0.7960 | 0.8006 | 0.7983 | 0.9753 |
| 0.1858 | 14.0 | 574 | 0.1184 | 0.7724 | 0.8034 | 0.7876 | 0.9740 |
| 0.1858 | 15.0 | 615 | 0.1171 | 0.7882 | 0.8142 | 0.8010 | 0.9756 |
| 0.1858 | 16.0 | 656 | 0.1195 | 0.7829 | 0.8070 | 0.7948 | 0.9745 |
| 0.1858 | 17.0 | 697 | 0.1209 | 0.7810 | 0.8006 | 0.7906 | 0.9743 |
| 0.1858 | 18.0 | 738 | 0.1241 | 0.7806 | 0.7963 | 0.7884 | 0.9740 |
| 0.1858 | 19.0 | 779 | 0.1222 | 0.7755 | 0.8027 | 0.7889 | 0.9742 |
| 0.1858 | 20.0 | 820 | 0.1217 | 0.7810 | 0.8085 | 0.7945 | 0.9747 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
kalmufti/q-Taxi-v3
|
kalmufti
| 2022-06-08T19:29:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T19:29:39Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.50 +/- 2.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="kalmufti/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/makimasdoggy
|
huggingtweets
| 2022-06-08T19:17:06Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-08T19:15:48Z |
---
language: en
thumbnail: http://www.huggingtweets.com/makimasdoggy/1654715821978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1534537330014445569/ql3I-npY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vanser</div>
<div style="text-align: center; font-size: 14px;">@makimasdoggy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Vanser.
| Data | Vanser |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 1548 |
| Short tweets | 346 |
| Tweets kept | 1355 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/66wk3fyw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @makimasdoggy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2di8hgps) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2di8hgps/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/makimasdoggy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
renjithks/layoutlmv1-er-ner
|
renjithks
| 2022-06-08T18:53:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-08T17:45:15Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv1-er-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv1-er-ner
This model is a fine-tuned version of [renjithks/layoutlmv1-cord-ner](https://huggingface.co/renjithks/layoutlmv1-cord-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Precision: 0.7202
- Recall: 0.7238
- F1: 0.7220
- Accuracy: 0.9639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 41 | 0.2444 | 0.4045 | 0.3996 | 0.4020 | 0.9226 |
| No log | 2.0 | 82 | 0.1640 | 0.5319 | 0.6098 | 0.5682 | 0.9455 |
| No log | 3.0 | 123 | 0.1531 | 0.6324 | 0.6614 | 0.6466 | 0.9578 |
| No log | 4.0 | 164 | 0.1440 | 0.6927 | 0.6743 | 0.6834 | 0.9620 |
| No log | 5.0 | 205 | 0.1520 | 0.6750 | 0.6958 | 0.6853 | 0.9613 |
| No log | 6.0 | 246 | 0.1597 | 0.6840 | 0.6987 | 0.6913 | 0.9605 |
| No log | 7.0 | 287 | 0.1910 | 0.7002 | 0.6887 | 0.6944 | 0.9605 |
| No log | 8.0 | 328 | 0.1860 | 0.6834 | 0.6923 | 0.6878 | 0.9609 |
| No log | 9.0 | 369 | 0.1665 | 0.6785 | 0.7102 | 0.6940 | 0.9624 |
| No log | 10.0 | 410 | 0.1816 | 0.7016 | 0.7052 | 0.7034 | 0.9624 |
| No log | 11.0 | 451 | 0.1808 | 0.6913 | 0.7166 | 0.7038 | 0.9638 |
| No log | 12.0 | 492 | 0.2165 | 0.712 | 0.7023 | 0.7071 | 0.9628 |
| 0.1014 | 13.0 | 533 | 0.2135 | 0.6979 | 0.7109 | 0.7043 | 0.9613 |
| 0.1014 | 14.0 | 574 | 0.2154 | 0.6906 | 0.7109 | 0.7006 | 0.9612 |
| 0.1014 | 15.0 | 615 | 0.2118 | 0.6902 | 0.7016 | 0.6958 | 0.9615 |
| 0.1014 | 16.0 | 656 | 0.2091 | 0.6985 | 0.7080 | 0.7032 | 0.9623 |
| 0.1014 | 17.0 | 697 | 0.2104 | 0.7118 | 0.7123 | 0.7121 | 0.9630 |
| 0.1014 | 18.0 | 738 | 0.2081 | 0.7129 | 0.7231 | 0.7179 | 0.9638 |
| 0.1014 | 19.0 | 779 | 0.2093 | 0.7205 | 0.7231 | 0.7218 | 0.9638 |
| 0.1014 | 20.0 | 820 | 0.2092 | 0.7202 | 0.7238 | 0.7220 | 0.9639 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aspis/swin-base-finetuned-snacks
|
aspis
| 2022-06-08T18:43:00Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:snacks",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-08T18:26:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- snacks
metrics:
- accuracy
model-index:
- name: swin-base-finetuned-snacks
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: snacks
type: snacks
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9455497382198953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-finetuned-snacks
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the snacks dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2404
- Accuracy: 0.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0044 | 1.0 | 38 | 0.2981 | 0.9309 |
| 0.0023 | 2.0 | 76 | 0.2287 | 0.9445 |
| 0.0012 | 3.0 | 114 | 0.2404 | 0.9455 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
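## Usage
A minimal inference sketch; the image path is a placeholder for a local file or URL:
```python
from transformers import pipeline

# Classify a snack photo with the fine-tuned Swin checkpoint.
classifier = pipeline("image-classification", model="aspis/swin-base-finetuned-snacks")
print(classifier("path/to/snack_photo.jpg"))
```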
|
ksabeh/roberta-base-attribute-correction-mlm
|
ksabeh
| 2022-06-08T17:55:09Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-08T09:38:06Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/roberta-base-mlm-electronics-attrs-correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/roberta-base-mlm-electronics-attrs-correction
This model is a fine-tuned version of [ksabeh/roberta-base-mlm-electronics](https://huggingface.co/ksabeh/roberta-base-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1009
- Validation Loss: 0.0936
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1915 | 0.1100 | 0 |
| 0.1009 | 0.0936 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
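## Usage
A minimal inference sketch; the question/context format shown here is only an assumption based on the question-answering head this checkpoint exposes, not a documented input format:
```python
from transformers import pipeline

# The pipeline picks up the TensorFlow weights shipped with this repository.
qa = pipeline("question-answering", model="ksabeh/roberta-base-attribute-correction-mlm")
print(qa(question="What is the brand?", context="Apple iPhone 13 Pro Max 256GB smartphone"))
```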
|
pere/multi-sentencefix-mt5-large
|
pere
| 2022-06-08T17:06:33Z | 11 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-25T09:05:25Z |
---
language: no
tags:
- translation
widget:
- text: "moscow says deployments in eastern europe increase tensions at the same time nato says russia has moved troops to belarus"
- text: "dette er en liten test som er laget av per egil kummervold han er en forsker som tidligere jobbet ved nasjonalbiblioteket"
- text: "tirsdag var travel for ukrainas president volodymyr zelenskyj på morgenen tok han imot polens statsminister mateusz morawiecki"
- text: "el presidente de estados unidos aprovecha su visita al país fronterizo con ucrania para reunirse con los ministros de defensa y exteriores en un encuentro con refugiados el mandatario calificó al líder ruso como carnicero "
license: cc-by-4.0
---
# DeUnCaser
The output from Automated Speak Recognition software is usually uncased and without any punctation. This does not make a very readable text.
The DeUnCaser is a sequence-to-sequence model that is reversing this process. It adds punctation, and capitalises the correct words. In some languages this means adding capital letters at start of sentences and on all proper nouns, in other languages, like German, it means capitalising the first letter of all nouns. It will also make attempts at adding hyphens and parentheses if this is making the meaning clearer.
It is using based on the multi-lingual T5 model. It is finetuned for 130,000 steps on a TPU v4-16 using T5X starting from the mT5.1.1 pretrained model. The finetuning scripts is based on up to 1,000,000 training examples (or as many as exists in OSCAR) from each of the 42 languages with Latin alphabet that is both part of OSCAR and the mT5 training set: Afrikaans, Albanian, Basque, Catalan, Cebuano, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Galician, German, Hungarian, Icelandic, Indonesian, Irish, Italian, Kurdish, Latin, Latvian, Lithuanian, Luxembourgish, Malagasy, Malay, Maltese, Norwegian Bokmål, Norwegian Nynorsk, Polish, Portuguese, Romanian, Slovak, Spanish, Swahili, Swedish, Turkish, Uzbek, Vietnamese, Welsh, West Frisian.
A Notebook for creating the training corpus is available [here](https://colab.research.google.com/drive/1bkH94z-0wIQP8Pz0qXFndhoQsokU-78x?usp=sharing).
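A minimal usage sketch with the `transformers` text2text pipeline; the generation settings are illustrative defaults, not the ones used for evaluation.
```python
from transformers import pipeline

# Load the DeUnCaser checkpoint as a generic text2text model.
deuncase = pipeline("text2text-generation", model="pere/multi-sentencefix-mt5-large")

text = "dette er en liten test som er laget av per egil kummervold"
print(deuncase(text, max_length=256)[0]["generated_text"])
```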
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned
|
ajtamayoh
| 2022-06-08T17:00:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-08T16:35:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Precision: 0.8961
- Recall: 0.7009
- F1: 0.7865
- Accuracy: 0.9898
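A minimal usage sketch, assuming the checkpoint works with the standard `transformers` token-classification pipeline; the example sentence is illustrative only.
```python
from transformers import pipeline

# Group sub-token predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta fiebre y dolor abdominal desde hace tres días."))
```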
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 94 | 0.0484 | 0.9002 | 0.6340 | 0.7440 | 0.9876 |
| No log | 2.0 | 188 | 0.0436 | 0.9095 | 0.6599 | 0.7649 | 0.9887 |
| No log | 3.0 | 282 | 0.0462 | 0.8545 | 0.7043 | 0.7722 | 0.9883 |
| No log | 4.0 | 376 | 0.0456 | 0.9058 | 0.6761 | 0.7743 | 0.9894 |
| No log | 5.0 | 470 | 0.0447 | 0.9194 | 0.6836 | 0.7841 | 0.9900 |
| 0.0426 | 6.0 | 564 | 0.0480 | 0.8917 | 0.7026 | 0.7859 | 0.9897 |
| 0.0426 | 7.0 | 658 | 0.0501 | 0.8961 | 0.7009 | 0.7865 | 0.9898 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nateraw/autoencoder-keras-rm-history-pr-review
|
nateraw
| 2022-06-08T16:58:21Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-08T16:56:15Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa
|
ahmeddbahaa
| 2022-06-08T15:51:15Z | 49 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"fa",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:pn_summary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-29T17:01:06Z |
---
tags:
- summarization
- fa
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- pn_summary
model-index:
- name: mT5_multilingual_XLSum-finetuned-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-fa
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the pn_summary dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5703
- Rouge-1: 45.12
- Rouge-2: 26.25
- Rouge-l: 39.96
- Gen Len: 48.72
- Bertscore: 79.54
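A minimal usage sketch, assuming the checkpoint works with the standard `transformers` summarization pipeline; the article string is a placeholder to replace with a Persian news text.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa")

article = "..."  # placeholder: a Persian news article
print(summarizer(article, max_length=128, do_sample=False)[0]["summary_text"])
```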
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
FritzOS/TEdetection_distiBERT_mLM_V3
|
FritzOS
| 2022-06-08T14:36:53Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-03T14:28:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_mLM_V3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_mLM_V3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
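A rough reconstruction of this schedule with the `create_optimizer` helper from `transformers`; this is a sketch based on the values listed above, not the original training script.
```python
from transformers import create_optimizer

# AdamWeightDecay (weight decay rate 0.01) with a 1,000-step warmup to 5e-05,
# then linear (power=1.0) decay to 0.0 over 208,018 total steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-05,
    num_train_steps=208018,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```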
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/elukkaj
|
huggingtweets
| 2022-06-08T14:01:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-08T13:58:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elukkaj/1654696881260/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/996279759570169856/vqZiiVns_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elukka</div>
<div style="text-align: center; font-size: 14px;">@elukkaj</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elukka.
| Data | Elukka |
| --- | --- |
| Tweets downloaded | 1113 |
| Retweets | 1 |
| Short tweets | 22 |
| Tweets kept | 1090 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3de86afj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elukkaj's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/scw34f55) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/scw34f55/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elukkaj')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
YeRyeongLee/bert-base-uncased-finetuned-filtered-0608_test
|
YeRyeongLee
| 2022-06-08T14:00:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-08T13:05:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-finetuned-filtered-0608_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-filtered-0608_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1009
- Accuracy: 0.9777
- Precision: 0.9778
- Recall: 0.9777
- F1: 0.9777
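A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the label set depends on the (unspecified) fine-tuning data.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YeRyeongLee/bert-base-uncased-finetuned-filtered-0608_test",
)
print(classifier("This is a sample sentence to classify."))
```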
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1184 | 1.0 | 3180 | 0.1009 | 0.9777 | 0.9778 | 0.9777 | 0.9777 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
fusing/ddim-lsun-bedroom
|
fusing
| 2022-06-08T13:10:21Z | 41 | 0 |
transformers
|
[
"transformers",
"ddim_diffusion",
"arxiv:2010.02502",
"endpoints_compatible",
"region:us"
] | null | 2022-06-08T12:42:50Z |
---
tags:
- ddim_diffusion
---
# Denoising Diffusion Implicit Models (DDIM)
**Paper**: [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502)
**Abstract**:
*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
**Explanation of `eta` and `num_inference_steps`**
- `num_inference_steps` is called *S* in the following table
- `eta` is called *η* in the following table

## Usage
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
import PIL.Image
import numpy as np
model_id = "fusing/ddim-lsun-bedroom"
# load model and scheduler
ddpm = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = ddpm(eta=0.0, num_inference_steps=50)
# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. 
2. 
3. 
4. 
|
naveenk903/TEST2ppo-LunarLander-v2
|
naveenk903
| 2022-06-08T12:37:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T12:10:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 237.66 +/- 43.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, not confirmed by this card):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust it to match the .zip stored in this repo.
checkpoint = load_from_hub("naveenk903/TEST2ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
FabianWillner/distilbert-base-uncased-finetuned-triviaqa
|
FabianWillner
| 2022-06-08T12:22:36Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-05-10T12:20:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-triviaqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-triviaqa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9949
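A minimal usage sketch with the standard `transformers` question-answering pipeline; the question/context pair is illustrative only.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="FabianWillner/distilbert-base-uncased-finetuned-triviaqa")
print(qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare around 1600.",
))
```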
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0391 | 1.0 | 11195 | 1.0133 |
| 0.8425 | 2.0 | 22390 | 0.9949 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
anas-awadalla/spanbert-large-cased-lora-squad
|
anas-awadalla
| 2022-06-08T12:06:33Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"region:us"
] | null | 2022-06-08T09:27:27Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-large-cased-lora-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-large-cased-lora-squad
This model is a fine-tuned version of [SpanBERT/spanbert-large-cased](https://huggingface.co/SpanBERT/spanbert-large-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jppaolim/v62_Large_2E
|
jppaolim
| 2022-06-08T11:30:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-08T10:39:12Z |
# My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in his beach chair. He is walking along the beach when he starts feeling a pain in his back. Arthur rushes to the doctor. The doctor says he needs a special cast. Arthur is so relieved he tears up.
Arthur goes to the beach. Arthur takes his wife to the beach. Arthur's wife has sore muscles. He takes her to the local doctor. The doctor gives her medicine. Arthur and his wife enjoy the beach.
Arthur goes to the beach. Arthur always wished he could go to the beach. He always wanted to go by himself. This time he went with his family. When they arrived the place was busy. Arthur was happy to be able to go to the beach.
Arthur goes to the beach. Arthur has never been to the beach. When he finally gets there it is very hot. He decides to go to the beach. He enjoys his vacation at the beach. Arthur is happy that he has never been to the beach.
Arthur goes to the beach. Arthur goes to the beach. He walks up the beach. He sits on the sand. Arthur lies on the beach. He falls asleep.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur wants to get sandals. He buys a sandal and a pair of sandals. He gets in the water and does some stretching. He finds a few nice waves to surf. After surfing, he is able to buy some new sandals.
Arthur goes to the beach. Arthur has been playing in the sand all day long. He decides to go swimming. He spends all afternoon in the water. Finally, Arthur heads home. Arthur finally has a fun beach day!
Arthur goes to the beach. Arthur is very happy on vacation. However, Arthur is not excited to be on the beach. He has no idea how to swim. When he finally gets his board out, Arthur begins to get excited. Arthur swam his first time at the beach!
Arthur goes to the beach. Arthur decided to go to the beach one day. He was very excited for it and got in the water. He was afraid of the waves and never went in. When he finally got out, he saw that he had gotten bit. Arthur still cried the rest of the day and went home.
Arthur goes to the beach. Arthur was going to the beach with his girlfriend. Suddenly, he got a text from his girl that they were getting married. Arthur was so excited and was excited for the big day. When he saw the beach, he saw a beautiful woman. He went on the beach to thank her.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur decides he wants to go swimming. He buys his favorite swimsuit and head for the water. As Arthur is about to enter the water, his friends show up. They tell Arthur that he has to pay for the day's swimwear. Arthur still feels guilty, but he doesn't want to be rude.
Arthur goes to the beach. Arthur wants to go to the beach. He decides he has to get a job. He goes to work and gets hired. The next day he leaves his house. Arthur returns home and is happy.
Arthur goes to the beach. Arthur is out with friends. He decides to go to the beach for a swim. At first he doesn't like the water. Then his friends make fun of him for being so skinny. Arthur finally decides to go swimming after all.
Arthur goes to the beach. Arthur has never been to a beach before. He decides to go anyway. On his first day at the beach he gets seasick. He doesn't get any sun on his first day. Afterwards Arthur decides to not go to the beach for another year.
Arthur goes to the beach. Arthur is going to the beach for a swim. He has never been to the beach before. As he is taking his first swim, a wave hits him in the head. His mother rushes over and tells him that he got hit by a wave. Arthur is glad he didn't go swimming.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. It was Arthur's first time going to the beach. He went by himself and didn't know anyone. The water was very cold. After a few minutes, he decided to join a group of people. They had fun at the beach.
Arthur goes to the beach. He decides to go for a swim at the beach. The water is very warm and Arthur feels very comfortable in his bathing suit. Suddenly, he notices something strange on the shore. It turns out that someone has been swimming there all day! Arthur is relieved when he finds out who it was.
Arthur goes to the beach. Arthur is on a vacation with his family. They go to the beach and swim in the ocean. A shark jumps out at Arthur. He throws the water over it and it gets scared. Arthur decides not to go back to the beach for another year.
Arthur goes to the beach. He wants to go for a swim. He doesn't want to get wet. He decides to use a towel. The towel gets soaked and he has to walk home. Arthur is glad he took his time.
Arthur goes to the beach. Arthur is on vacation in Hawaii. He decides he wants to go surfing. Arthur takes a day off of work and heads out for the day. When Arthur gets there, he sees that it's very crowded! Arthur decides not to go back home for another two days.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur is on a trip with his family. He decides to go for a swim at the ocean. The water is very cold and Arthur feels very hot. His parents take him back home. They tell him that he should have stayed in the house.
Arthur goes to the beach. He decides he wants a vacation. He buys his ticket and flies out of town. When he arrives, he is surprised by how beautiful it was! The weather was perfect for him as well. He enjoyed himself immensely at the beach.
Arthur goes to the beach. He decides he wants a nice day on the sand. The sun is shining and it's very hot. His friends come over to play with him. They all have fun playing in the water. Arthur feels much better after his day of fun.
Arthur goes to the beach. Arthur is on vacation in Florida. He decides he wants to go to the beach. His friends tell him they can't make it for a few days. Arthur agrees and heads out with his friends. They all have fun at the beach.
Arthur goes to the beach. He is going for a swim in the ocean. The water was very cold and Arthur didn't want to go. His friends convinced him to go anyway. When he got there, it was freezing! But he still went because he wanted to be with his friends.
|
Jawaher/Covid19-fake-news-bert-uncased
|
Jawaher
| 2022-06-08T11:02:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-08T09:52:44Z |
Domain adaptation is the process of fine-tuning pre-trained language models (PLMs) on domain-specific datasets to produce predictions that are better suited to the new datasets. Here, we re-train the BERT-base-uncased model on an unlabelled COVID-19 fake news dataset (Constraint@AAAI2021) using the masked language modeling (MLM) objective, where 15% of input text is masked, and the model is expected to predict the masked tokens.
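A minimal sketch of the masked-language-modeling objective described above, using the standard `transformers` API; the training loop itself is omitted and the example sentence is illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("Jawaher/Covid19-fake-news-bert-uncased")
model = AutoModelForMaskedLM.from_pretrained("Jawaher/Covid19-fake-news-bert-uncased")

# Randomly mask 15% of input tokens; the model is trained to predict them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenizer("Wearing masks reduces the spread of the virus.")])
loss = model(**batch).loss
```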
|
huggingtweets/conspiracymill
|
huggingtweets
| 2022-06-08T10:46:08Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-08T10:44:11Z |
---
language: en
thumbnail: http://www.huggingtweets.com/conspiracymill/1654685163989/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447765226376638469/EuvZlKan_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Conspiracy Mill</div>
<div style="text-align: center; font-size: 14px;">@conspiracymill</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Conspiracy Mill.
| Data | Conspiracy Mill |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 626 |
| Short tweets | 869 |
| Tweets kept | 1701 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yowpn7j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @conspiracymill's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39srf3ca) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39srf3ca/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/conspiracymill')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
epsil/dqn-BreakoutNoFrameskip-v4
|
epsil
| 2022-06-08T10:32:50Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T10:32:02Z |
---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 57.90 +/- 21.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
---
# **DQN** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga epsil -f logs/
python enjoy.py --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga epsil
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
epsil/dqn-s2-CartPole-v1
|
epsil
| 2022-06-08T09:02:30Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T09:02:06Z |
---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 117.00 +/- 2.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **DQN** Agent playing **CartPole-v1**
This is a trained model of a **DQN** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env CartPole-v1 -orga epsil -f logs/
python enjoy.py --algo dqn --env CartPole-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env CartPole-v1 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env CartPole-v1 -f logs/ -orga epsil
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('exploration_final_eps', 0.04),
('exploration_fraction', 0.16),
('gamma', 0.99),
('gradient_steps', 128),
('learning_rate', 0.0023),
('learning_starts', 1000),
('n_timesteps', 50000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256])'),
('target_update_interval', 10),
('train_freq', 256),
('normalize', False)])
```
|
pinku/q-FrozenLake-v1-4x4-noSlippery
|
pinku
| 2022-06-08T08:52:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T08:52:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="pinku/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
eunbeee/ainize-kobart-news-eb-finetuned-xsum
|
eunbeee
| 2022-06-08T08:34:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-06T10:01:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ainize-kobart-news-eb-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ainize-kobart-news-eb-finetuned-xsum
This model is a fine-tuned version of [ainize/kobart-news](https://huggingface.co/ainize/kobart-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Rouge1: 60.732
- Rouge2: 39.1933
- Rougel: 60.6507
- Rougelsum: 60.6712
- Gen Len: 19.3417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0649 | 1.0 | 749 | 0.5502 | 56.6571 | 36.5992 | 56.6185 | 56.6364 | 19.2929 |
| 0.7103 | 2.0 | 1498 | 0.3904 | 59.1212 | 38.3611 | 59.093 | 59.1191 | 19.31 |
| 0.4723 | 3.0 | 2247 | 0.2922 | 60.1133 | 38.7819 | 60.0439 | 60.0572 | 19.2659 |
| 0.3841 | 4.0 | 2996 | 0.2367 | 60.4405 | 39.0176 | 60.366 | 60.4057 | 19.3397 |
| 0.3091 | 5.0 | 3745 | 0.2147 | 60.732 | 39.1933 | 60.6507 | 60.6712 | 19.3417 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
larryboy825/distilbert-base-uncased-finetuned-imdb
|
larryboy825
| 2022-06-08T07:32:12Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-08T07:26:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0021
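A minimal usage sketch with the standard `transformers` fill-mask pipeline; `[MASK]` is the mask token for this uncased DistilBERT checkpoint.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="larryboy825/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```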
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6836 | 1.0 | 2 | 3.3110 |
| 3.9035 | 2.0 | 4 | 3.2560 |
| 3.9928 | 3.0 | 6 | 2.4306 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
epsil/dqn-SpaceInvadersNoFrameskip-v4
|
epsil
| 2022-06-08T06:13:56Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-08T06:13:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 637.50 +/- 139.13
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga epsil -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga epsil
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
anas-awadalla/roberta-base-lora-squad
|
anas-awadalla
| 2022-06-08T05:44:03Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-06-08T04:47:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-lora-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-lora-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/qiamast
|
huggingtweets
| 2022-06-08T05:42:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-08T05:40:27Z |
---
language: en
thumbnail: http://www.huggingtweets.com/qiamast/1654666925668/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1515664770996715524/UJ44tEP7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mahdi🪐</div>
<div style="text-align: center; font-size: 14px;">@qiamast</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mahdi🪐.
| Data | Mahdi🪐 |
| --- | --- |
| Tweets downloaded | 1183 |
| Retweets | 17 |
| Short tweets | 101 |
| Tweets kept | 1065 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t2yplvw1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @qiamast's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2oiurss1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2oiurss1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/qiamast')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ayush1701/my-deberta
|
ayush1701
| 2022-06-08T04:49:14Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-08T04:49:00Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: my-deberta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-deberta
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
steven123/teeth_verify
|
steven123
| 2022-06-08T04:02:20Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-08T04:02:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: teeth_verify
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6666666865348816
---
# teeth_verify
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth

|
amehta633/cifar-10-vgg-pretrained
|
amehta633
| 2022-06-08T04:01:09Z | 25 | 0 |
transformers
|
[
"transformers",
"image-classification",
"pytorch",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-08T03:50:59Z |
---
tags:
- image-classification
- pytorch
---
|
qbhy/model-example
|
qbhy
| 2022-06-08T02:25:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-08T02:23:37Z |
# This is a test model
language:
- "List of ISO 639-1 code for your language"
- zh
thumbnail: "url to a thumbnail used in social sharing"
tags:
- example
- qbhy
license: "any valid license identifier"
datasets:
- qbhy/dataset-example
metrics:
- metric1
|
huggingtweets/dwr-elonmusk-maccaw
|
huggingtweets
| 2022-06-07T23:37:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-07T23:37:10Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1418421541054918657/ng4Kyv5G_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1518670972559130624/-G9gNsOp_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Alex MacCaw & Dan Romero</div>
<div style="text-align: center; font-size: 14px;">@dwr-elonmusk-maccaw</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Alex MacCaw & Dan Romero.
| Data | Elon Musk | Alex MacCaw | Dan Romero |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3244 | 3126 |
| Retweets | 146 | 255 | 2 |
| Short tweets | 956 | 258 | 333 |
| Tweets kept | 2098 | 2731 | 2791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ritkn2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dwr-elonmusk-maccaw's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o2qtjkw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o2qtjkw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dwr-elonmusk-maccaw')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anas-awadalla/roberta-base-prefix-tuning-squad
|
anas-awadalla
| 2022-06-07T22:58:46Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-06-07T22:20:24Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-prefix-tuning-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-prefix-tuning-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-compacter-squad
|
anas-awadalla
| 2022-06-07T22:57:36Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-06-07T21:35:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-compacter-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-compacter-squad
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Anery/bert-finetuned-ner
|
Anery
| 2022-06-07T22:48:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-07T20:44:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0244
- Precision: 0.7368
- Recall: 0.4
- F1: 0.5185
- Accuracy: 0.9919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.0598 | 0.0 | 0.0 | 0.0 | 0.9870 |
| No log | 2.0 | 28 | 0.0357 | 0.0 | 0.0 | 0.0 | 0.9894 |
| No log | 3.0 | 42 | 0.0256 | 0.75 | 0.2571 | 0.3830 | 0.9910 |
| No log | 4.0 | 56 | 0.0244 | 0.7368 | 0.4 | 0.5185 | 0.9919 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise
|
Cristian-dcg
| 2022-06-07T22:36:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-30T21:10:37Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beto-sentiment-analysis-finetuned-onpremise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-sentiment-analysis-finetuned-onpremise
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7939
- Accuracy: 0.8301
## Model description
More information needed
## Intended uses & limitations
More information needed
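A minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` text-classification pipeline; the Spanish example sentence is illustrative, and the label names may follow the base beto-sentiment-analysis model:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise",
)
print(classifier("Me encantó la película, la recomiendo totalmente."))
```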
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4573 | 1.0 | 1250 | 0.4375 | 0.8191 |
| 0.2191 | 2.0 | 2500 | 0.5367 | 0.8288 |
| 0.1164 | 3.0 | 3750 | 0.7939 | 0.8301 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
|
anas-awadalla/roberta-base-compacter-squad
|
anas-awadalla
| 2022-06-07T21:31:53Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"region:us"
] | null | 2022-06-07T21:00:14Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-compacter-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-compacter-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/afraidofwasps-dril-senn_spud
|
huggingtweets
| 2022-06-07T21:10:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-28T00:36:09Z |
---
language: en
thumbnail: http://www.huggingtweets.com/afraidofwasps-dril-senn_spud/1654636210975/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1387151448203358209/HKNuKY7L_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1182478458552832000/xqEwluRJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Will Sennett & Boots, 'with the fur'</div>
<div style="text-align: center; font-size: 14px;">@afraidofwasps-dril-senn_spud</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Will Sennett & Boots, 'with the fur'.
| Data | wint | Will Sennett | Boots, 'with the fur' |
| --- | --- | --- | --- |
| Tweets downloaded | 3230 | 3228 | 3217 |
| Retweets | 487 | 312 | 504 |
| Short tweets | 297 | 622 | 434 |
| Tweets kept | 2446 | 2294 | 2279 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/156iladp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afraidofwasps-dril-senn_spud's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6g2dktc9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6g2dktc9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/afraidofwasps-dril-senn_spud')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ksabeh/bert-base-uncased-attribute-correction
|
ksabeh
| 2022-06-07T21:01:05Z | 10 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-07T12:44:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/bert-base-uncased-attribute-correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert-base-uncased-attribute-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0541
- Validation Loss: 0.0579
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
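A minimal usage sketch (not part of the original card), assuming the TensorFlow checkpoint loads with the standard `transformers` question-answering pipeline; the question and context are made-up product data:
```python
from transformers import pipeline

# framework="tf" because the repository ships TensorFlow weights.
qa = pipeline(
    "question-answering",
    model="ksabeh/bert-base-uncased-attribute-correction",
    framework="tf",
)
print(qa(question="What is the brand?", context="Apple iPhone 13 Pro, 128 GB, graphite."))
```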
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1513 | 0.0671 | 0 |
| 0.0541 | 0.0579 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
renjithks/layoutlmv1-cord-ner
|
renjithks
| 2022-06-07T20:59:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-07T20:44:15Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv1-cord-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv1-cord-ner
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1438
- Precision: 0.9336
- Recall: 0.9453
- F1: 0.9394
- Accuracy: 0.9767
## Model description
More information needed
## Intended uses & limitations
More information needed
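A minimal forward-pass sketch (not part of the original card): LayoutLM needs one OCR bounding box per token (normalized to a 0-1000 grid), so a plain text pipeline is not enough. The dummy zero boxes below only demonstrate the expected input shapes and will not produce meaningful predictions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("renjithks/layoutlmv1-cord-ner")
model = AutoModelForTokenClassification.from_pretrained("renjithks/layoutlmv1-cord-ner")

encoding = tokenizer("ICED TEA 2 x 3,000", return_tensors="pt")
# Real boxes come from an OCR engine; zeros are placeholders with the right shape (batch, seq_len, 4).
bbox = torch.zeros((1, encoding["input_ids"].shape[1], 4), dtype=torch.long)
outputs = model(**encoding, bbox=bbox)
predicted_ids = outputs.logits.argmax(-1)[0].tolist()
print([model.config.id2label[i] for i in predicted_ids])
```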
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 113 | 0.1251 | 0.9054 | 0.9184 | 0.9119 | 0.9651 |
| No log | 2.0 | 226 | 0.1343 | 0.9002 | 0.9261 | 0.9130 | 0.9635 |
| No log | 3.0 | 339 | 0.1264 | 0.9189 | 0.9357 | 0.9272 | 0.9647 |
| No log | 4.0 | 452 | 0.1235 | 0.9122 | 0.9376 | 0.9248 | 0.9681 |
| 0.1371 | 5.0 | 565 | 0.1353 | 0.9378 | 0.9405 | 0.9391 | 0.9717 |
| 0.1371 | 6.0 | 678 | 0.1431 | 0.9233 | 0.9357 | 0.9295 | 0.9709 |
| 0.1371 | 7.0 | 791 | 0.1473 | 0.9289 | 0.9405 | 0.9347 | 0.9759 |
| 0.1371 | 8.0 | 904 | 0.1407 | 0.9473 | 0.9491 | 0.9482 | 0.9784 |
| 0.0106 | 9.0 | 1017 | 0.1440 | 0.9301 | 0.9453 | 0.9376 | 0.9769 |
| 0.0106 | 10.0 | 1130 | 0.1438 | 0.9336 | 0.9453 | 0.9394 | 0.9767 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
anas-awadalla/bert-large-uncased-compacter-squad
|
anas-awadalla
| 2022-06-07T20:53:57Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"region:us"
] | null | 2022-06-07T19:12:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-large-uncased-compacter-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-compacter-squad
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/jpegmafia
|
huggingtweets
| 2022-06-07T20:33:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-07T20:33:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jpegmafia/1654634032817/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510648677995581453/13zowZ1f_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JPEGMAFIA</div>
<div style="text-align: center; font-size: 14px;">@jpegmafia</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JPEGMAFIA.
| Data | JPEGMAFIA |
| --- | --- |
| Tweets downloaded | 3114 |
| Retweets | 1181 |
| Short tweets | 495 |
| Tweets kept | 1438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ub5q17i2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jpegmafia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ihd6e39h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ihd6e39h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jpegmafia')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ishansharma1320/wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0
|
ishansharma1320
| 2022-06-07T20:08:08Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-07T09:32:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7392
- Wer: 1.0141
## Model description
More information needed
## Intended uses & limitations
More information needed
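A minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` speech-recognition pipeline; the audio path is a placeholder for a 16 kHz mono recording, and given the ~1.0 WER reported below the transcriptions should not be expected to be accurate:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ishansharma1320/wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0",
)
print(asr("sample_hi.wav")["text"])  # "sample_hi.wav" is a placeholder path
```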
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.42184e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.2217 | 3.03 | 400 | 4.0314 | 1.0 |
| 3.2902 | 6.06 | 800 | 2.1356 | 1.0001 |
| 0.9858 | 9.09 | 1200 | 0.8566 | 1.0037 |
| 0.5131 | 12.12 | 1600 | 0.7481 | 1.0074 |
| 0.3781 | 15.15 | 2000 | 0.7437 | 1.008 |
| 0.2998 | 18.18 | 2400 | 0.7310 | 1.0162 |
| 0.2553 | 21.21 | 2800 | 0.7384 | 1.0159 |
| 0.2216 | 24.24 | 3200 | 0.7537 | 1.0100 |
| 0.2048 | 27.27 | 3600 | 0.7392 | 1.0141 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
pylemountain/distilbert-base-uncased-finetuned-imdb
|
pylemountain
| 2022-06-07T19:33:15Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-07T18:59:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pylemountain/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pylemountain/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8553
- Validation Loss: 2.5640
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
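A minimal usage sketch (not part of the original card), assuming the TensorFlow checkpoint loads with the standard `transformers` fill-mask pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

# framework="tf" because the repository ships TensorFlow weights.
fill_mask = pipeline(
    "fill-mask",
    model="pylemountain/distilbert-base-uncased-finetuned-imdb",
    framework="tf",
)
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```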
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8553 | 2.5640 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mariastull/q-FrozenLake-v1-4x4-noSlippery
|
mariastull
| 2022-06-07T19:18:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-07T18:44:52Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mariastull/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
theachyuttiwari/lfqa
|
theachyuttiwari
| 2022-06-07T19:15:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-07T09:26:20Z |
---
title: Wikipedia Assistant
emoji: 🌖
colorFrom: green
colorTo: yellow
sdk: streamlit
app_file: app.py
pinned: false
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: `streamlit`
Can be either `gradio` or `streamlit`
`sdk_version` : `1.2.0`
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
|
0xrushi/neural-machine-translation-model_1
|
0xrushi
| 2022-06-07T19:02:17Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-07T19:02:00Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
anas-awadalla/spanbert-base-cased-prefix-tuning-squad
|
anas-awadalla
| 2022-06-07T18:36:13Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:squad",
"region:us"
] | null | 2022-06-07T17:49:46Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-prefix-tuning-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-prefix-tuning-squad
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
memyprokotow/rut5-REBEL-base
|
memyprokotow
| 2022-06-07T17:37:00Z | 30 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"seq2seq",
"relation-extraction",
"ru",
"dataset:memyprokotow/rebel-dataset-rus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-01T15:52:53Z |
---
language:
- ru
tags:
- seq2seq
- relation-extraction
- t5
license: apache-2.0
datasets:
- memyprokotow/rebel-dataset-rus
widget:
- text: "За последние 9 месяцев инвесторы в азиатские долларовые долговые обязательства потеряли 155 миллиардов долларов, пострадав от слабости Китая в дополнение к глобальной распродаже фиксированного дохода, наблюдаемой во всем мире по мере роста процентных ставок."
---
# REBEL-ru
Based on the Russian part of Wikipedia (scraped with CROCODILE).
The model was trained for 3 epochs on the Russian ruT5-base.
# How to use
The usage is the same as for REBEL-large (https://huggingface.co/Babelscape/rebel-large):
```python
from transformers import pipeline

text = '''За последние 9 месяцев инвесторы в азиатские долларовые долговые обязательства потеряли 155 миллиардов долларов, пострадав от слабости Китая в дополнение к глобальной распродаже фиксированного дохода, наблюдаемой во всем мире по мере роста процентных ставок.'''
model_path = r"memyprokotow/rut5-REBEL-base"
triplet_extractor = pipeline(
    'text2text-generation',
    model=model_path,
    tokenizer=model_path,
    # device=0  # uncomment to run on GPU
)

# We need to decode with the tokenizer manually because the special tokens are required.
extracted_text = triplet_extractor.tokenizer.batch_decode(
    [triplet_extractor(text, return_tensors=True, return_text=False, max_length=500)[0]["generated_token_ids"]]
)
print(extracted_text[0])

# Parse the generated text and extract the triplets.
def extract_triplets(text):
    triplets = []
    subject, relation, object_ = '', '', ''
    text = text.strip()
    current = 'x'
    for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split():
        if token == "<triplet>":
            current = 't'
            if relation != '':
                triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
                relation = ''
            subject = ''
        elif token == "<subj>":
            current = 's'
            if relation != '':
                triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
            object_ = ''
        elif token == "<obj>":
            current = 'o'
            relation = ''
        else:
            if current == 't':
                subject += ' ' + token
            elif current == 's':
                object_ += ' ' + token
            elif current == 'o':
                relation += ' ' + token
    if subject != '' and relation != '' and object_ != '':
        triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
    return triplets

extracted_triplets = extract_triplets(extracted_text[0])
print(extracted_triplets)
```
|
inokufu/bert-base-uncased-xnli-sts-finetuned-education
|
inokufu
| 2022-06-07T16:39:43Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"Education",
"en",
"xnli",
"stsb_multi_mt",
"dataset:xnli",
"dataset:stsb_multi_mt",
"arxiv:1810.04805",
"arxiv:1809.05053",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-07T15:36:19Z |
---
pipeline_tag: sentence-similarity
language: en
tags:
- sentence-similarity
- transformers
- Education
- en
- bert
- sentence-transformers
- feature-extraction
- xnli
- stsb_multi_mt
datasets:
- xnli
- stsb_multi_mt
---
# inokufu/bertheo-en
A [sentence-transformers](https://www.SBERT.net) model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Details
This model is based on the English bert-base-uncased pre-trained model [1, 2].
It was first fine-tuned on our learning object (LO) sentences dataset, a sample of 500k sentences from course descriptions, using the standard fine-tuning settings from the original BERT paper [2]. This improves the model's performance on the masked language modeling objective for domain-specific sentences.
It was then fine-tuned on a natural language inference task (XNLI) [3], which trains the model to recognize relations between sentences (contradiction, neutral, entailment).
It was then fine-tuned on a semantic textual similarity task (on STS data) [4], which trains the model to estimate the similarity between two sentences.
This fine-tuning process gives our model a much better semantic representation of words than the base model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Learn to code in python", "Become an expert in accounting"]
model = SentenceTransformer('inokufu/bert-base-uncased-xnli-sts-finetuned-education')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Learn to code in python", "Become an expert in accounting"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('inokufu/bert-base-uncased-xnli-sts-finetuned-education')
model = AutoModel.from_pretrained('inokufu/bert-base-uncased-xnli-sts-finetuned-education')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
STS (en) score: 84.61%
## Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## References
[1] https://huggingface.co/bert-base-uncased <br>
[2] https://arxiv.org/abs/1810.04805 <br>
[3] https://arxiv.org/abs/1809.05053 <br>
[4] https://huggingface.co/datasets/stsb_multi_mt <br>
|
elena-soare/bat-table-aug
|
elena-soare
| 2022-06-07T16:15:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-21T21:23:22Z |
# Text2SQL Task T5-Base + Fine-tuning on Spider + Table Augmentation
This is our T5 model fine-tuned on Spider using a schema serialization that includes a table description, injecting domain knowledge into T5.
## Running the model
Inspired by the work done in [Picard](https://github.com/ElementAI/picard/), a table description is appended to the question and serialized schema:
```python
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... description * [table] : <meaning of table>; [table] : <meaning of table> ; ....
```
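A minimal generation sketch (not part of the original card), assuming the checkpoint loads with the standard `transformers` seq2seq classes; the question, schema, and table descriptions below are made up purely to illustrate the serialization format above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("elena-soare/bat-table-aug")
model = AutoModelForSeq2SeqLM.from_pretrained("elena-soare/bat-table-aug")

# Hypothetical serialized input following the format described above.
serialized = (
    "How many singers do we have? | concert_singer "
    "| singer : singer_id , name , country | concert : concert_id , concert_name "
    "description * singer : people who perform songs ; concert : live music events"
)
inputs = tokenizer(serialized, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```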
|
huggingtweets/mizefian
|
huggingtweets
| 2022-06-07T16:10:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-07T16:10:37Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488896240083517453/Bu0lDApj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mizefian 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@mizefian</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mizefian 🇺🇦.
| Data | Mizefian 🇺🇦 |
| --- | --- |
| Tweets downloaded | 1265 |
| Retweets | 188 |
| Short tweets | 355 |
| Tweets kept | 722 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/x49ahgym/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mizefian's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/xdjgjn3p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/xdjgjn3p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mizefian')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
harsha163/CutMix_data_augmentation_for_image_classification
|
harsha163
| 2022-06-07T16:06:55Z | 0 | 0 |
keras
|
[
"keras",
"tensorboard",
"tf-keras",
"data-augmentation",
"image-classification",
"region:us"
] |
image-classification
| 2022-06-07T15:06:28Z |
---
library_name: keras
tags:
- data-augmentation
- image-classification
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
mmillet/rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear
|
mmillet
| 2022-06-07T15:52:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-07T15:44:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3902
- Accuracy: 0.8727
- F1: 0.8720
- Precision: 0.8718
- Recall: 0.8727
## Model description
More information needed
## Intended uses & limitations
More information needed
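A minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` text-classification pipeline; the Russian example sentence is illustrative, and the emotion label names depend on the undocumented training dataset:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mmillet/rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear",
)
print(classifier("Мне очень страшно и тревожно."))  # "I feel very scared and anxious."
```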
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.3497 | 1.0 | 69 | 1.2944 | 0.5376 | 0.4665 | 0.6374 | 0.5376 |
| 1.2023 | 2.0 | 138 | 1.0370 | 0.7056 | 0.6745 | 0.7458 | 0.7056 |
| 0.9289 | 3.0 | 207 | 0.7437 | 0.8121 | 0.8082 | 0.8117 | 0.8121 |
| 0.6932 | 4.0 | 276 | 0.5717 | 0.8445 | 0.8428 | 0.8434 | 0.8445 |
| 0.5613 | 5.0 | 345 | 0.4888 | 0.8580 | 0.8572 | 0.8573 | 0.8580 |
| 0.469 | 6.0 | 414 | 0.4401 | 0.8633 | 0.8625 | 0.8623 | 0.8633 |
| 0.4176 | 7.0 | 483 | 0.4156 | 0.8653 | 0.8646 | 0.8644 | 0.8653 |
| 0.3724 | 8.0 | 552 | 0.4001 | 0.8706 | 0.8700 | 0.8699 | 0.8706 |
| 0.3427 | 9.0 | 621 | 0.3972 | 0.8706 | 0.8698 | 0.8701 | 0.8706 |
| 0.3243 | 10.0 | 690 | 0.3898 | 0.8737 | 0.8729 | 0.8736 | 0.8737 |
| 0.3039 | 11.0 | 759 | 0.3887 | 0.8716 | 0.8710 | 0.8717 | 0.8716 |
| 0.2803 | 12.0 | 828 | 0.3841 | 0.8716 | 0.8709 | 0.8709 | 0.8716 |
| 0.264 | 13.0 | 897 | 0.3872 | 0.8758 | 0.8753 | 0.8758 | 0.8758 |
| 0.2607 | 14.0 | 966 | 0.3837 | 0.8747 | 0.8743 | 0.8741 | 0.8747 |
| 0.2437 | 15.0 | 1035 | 0.3893 | 0.8716 | 0.8710 | 0.8712 | 0.8716 |
| 0.2358 | 16.0 | 1104 | 0.3867 | 0.8695 | 0.8691 | 0.8690 | 0.8695 |
| 0.2278 | 17.0 | 1173 | 0.3886 | 0.8737 | 0.8732 | 0.8732 | 0.8737 |
| 0.2143 | 18.0 | 1242 | 0.3902 | 0.8727 | 0.8720 | 0.8718 | 0.8727 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PontifexMaximus/mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en
|
PontifexMaximus
| 2022-06-07T15:17:41Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_infopankki",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-03T10:59:17Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
metrics:
- bleu
model-index:
- name: mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_infopankki
type: opus_infopankki
args: en-fa
metrics:
- name: Bleu
type: bleu
value: 15.1329
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en
This model is a fine-tuned version of [persiannlp/mt5-small-parsinlu-opus-translation_fa_en](https://huggingface.co/persiannlp/mt5-small-parsinlu-opus-translation_fa_en) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9193
- Bleu: 15.1329
- Gen Len: 13.4603
## Model description
More information needed
## Intended uses & limitations
More information needed
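A minimal translation sketch (not part of the original card), assuming the checkpoint loads with the standard `transformers` seq2seq classes and, like the base model, needs no task prefix; the Persian example sentence is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "PontifexMaximus/mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("سلام، حال شما چطور است؟", return_tensors="pt")  # "Hello, how are you?"
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```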
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 3.1182 | 1.0 | 1807 | 2.5985 | 10.6445 | 13.7938 |
| 2.8377 | 2.0 | 3614 | 2.3799 | 11.852 | 13.6168 |
| 2.6644 | 3.0 | 5421 | 2.2426 | 12.877 | 13.5768 |
| 2.5286 | 4.0 | 7228 | 2.1521 | 13.5342 | 13.5567 |
| 2.4523 | 5.0 | 9035 | 2.0801 | 14.0355 | 13.5387 |
| 2.4026 | 6.0 | 10842 | 2.0197 | 14.4284 | 13.4956 |
| 2.317 | 7.0 | 12649 | 1.9691 | 14.7776 | 13.4325 |
| 2.3174 | 8.0 | 14456 | 1.9373 | 15.189 | 13.4261 |
| 2.3374 | 9.0 | 16263 | 1.9393 | 15.1149 | 13.4087 |
| 2.3131 | 10.0 | 18070 | 1.9304 | 15.0654 | 13.4234 |
| 2.295 | 11.0 | 19877 | 1.9239 | 15.102 | 13.4443 |
| 2.3017 | 12.0 | 21684 | 1.9203 | 15.1676 | 13.4575 |
| 2.3153 | 13.0 | 23491 | 1.9193 | 15.1329 | 13.4603 |
| 2.2939 | 14.0 | 25298 | 1.9193 | 15.1329 | 13.4603 |
| 2.3241 | 15.0 | 27105 | 1.9193 | 15.1329 | 13.4603 |
| 2.3376 | 16.0 | 28912 | 1.9193 | 15.1329 | 13.4603 |
| 2.2859 | 17.0 | 30719 | 1.9193 | 15.1329 | 13.4603 |
| 2.3016 | 18.0 | 32526 | 1.9193 | 15.1329 | 13.4603 |
| 2.3101 | 19.0 | 34333 | 1.9193 | 15.1329 | 13.4603 |
| 2.3088 | 20.0 | 36140 | 1.9193 | 15.1329 | 13.4603 |
| 2.2833 | 21.0 | 37947 | 1.9193 | 15.1329 | 13.4603 |
| 2.2986 | 22.0 | 39754 | 1.9193 | 15.1329 | 13.4603 |
| 2.3254 | 23.0 | 41561 | 1.9193 | 15.1329 | 13.4603 |
| 2.3165 | 24.0 | 43368 | 1.9193 | 15.1329 | 13.4603 |
| 2.289 | 25.0 | 45175 | 1.9193 | 15.1329 | 13.4603 |
| 2.3212 | 26.0 | 46982 | 1.9193 | 15.1329 | 13.4603 |
| 2.2902 | 27.0 | 48789 | 1.9193 | 15.1329 | 13.4603 |
| 2.3026 | 28.0 | 50596 | 1.9193 | 15.1329 | 13.4603 |
| 2.2949 | 29.0 | 52403 | 1.9193 | 15.1329 | 13.4603 |
| 2.3152 | 30.0 | 54210 | 1.9193 | 15.1329 | 13.4603 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.7.1+cu110
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingnft/alpacadabraz
|
huggingnft
| 2022-06-07T14:20:28Z | 3 | 1 |
transformers
|
[
"transformers",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"unconditional-image-generation",
"dataset:huggingnft/alpacadabraz",
"license:mit",
"endpoints_compatible",
"region:us"
] |
unconditional-image-generation
| 2022-04-14T22:08:45Z |
---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
- unconditional-image-generation
datasets:
- huggingnft/alpacadabraz
license: mit
---
# Hugging NFT: alpacadabraz
## Disclaimer
All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright
holder.
## Model description
LightWeight GAN model for unconditional generation.
NFT collection available [here](https://opensea.io/collection/alpacadabraz).
Dataset is available [here](https://huggingface.co/datasets/huggingnft/alpacadabraz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
[](https://github.com/AlekseyKorshuk/huggingnft)
## Intended uses & limitations
#### How to use
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
#### Limitations and bias
Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).
## Training data
Dataset is available [here](https://huggingface.co/datasets/huggingnft/alpacadabraz).
## Training procedure
Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft).
## Generated Images
Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
### BibTeX entry and citation info
```bibtex
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
|
jppaolim/v59_Large_2E
|
jppaolim
| 2022-06-07T13:01:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-07T12:11:41Z |
# My Story model
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1}
Arthur goes to the beach. Arthur is in love with his girlfriend. They go to the beach together. Arthur falls off the beach. Arthur needs medical attention. Arthur gets a broken leg from the fall.
Arthur goes to the beach. Arthur is feeling cold. He looks at the weather report. He knows he needs to get out of the house. He decides to walk to the local beach. Arthur is happy he got out of the house.
Arthur goes to the beach. Arthur always hated going to the beach. His parents always made him go, even if it was just to swim. His father finally convinced him to go to the beach with him. Arthur was not happy, but he had to go anyway. At the beach, Arthur met lots of people he was interested in.
Arthur goes to the beach. Arthur has never been to the beach. His friends tell him that it is very hot. He decides to go to the beach. He enjoys his day at the beach. Now Arthur loves the beach.
Arthur goes to the beach. Arthur is so bored one day. He decides to go to the beach. He sees a nice, sunny beach. Arthur enjoys his day at the beach. Arthur is happy that he found a good day to be bored.
{'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05}
Arthur goes to the beach. Arthur is out on a day of vacation. He decides to take his girlfriend out to the beach. The two surf. They surf all day long. After the sun comes up they relax on a beach blanket.
Arthur goes to the beach. Arthur was feeling very bored one day. He decided he wanted to swim in the ocean. He went to the beach to feel like he was in the ocean. When he got to the beach he was surprised how warm it was. Arthur immediately went back home and went to bed.
Arthur goes to the beach. Arthur has never been to the beach before. He is excited but also nervous about swimming. He boards his car and goes to the ocean. At first he does not like it. However, after a while, he loves the beach.
Arthur goes to the beach. Arthur was planning on going to the beach with friends. Arthur decided that he would go to the beach. When Arthur arrived, there were too many cars for him. Arthur could not see where his friends were. Arthur realized he forgot his sunscreen.
Arthur goes to the beach. Arthur is on vacation. He heads out to the ocean. Arthur spends most of the time swimming. Arthur falls asleep on the beach. He gets up the next day and heads home.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1}
Arthur goes to the beach. Arthur is going on a trip. He decides to take his girlfriend Mary with him. They decide to go to the beach. When Arthur gets there he realizes that it's too hot. His girlfriend has no choice but to stay home.
Arthur goes to the beach. Arthur is on vacation in the beach. He enjoys taking his swim. However, a storm comes and knocks Arthur's umbrella off of him. Arthur rushes to get it back. He can't swim after that.
Arthur goes to the beach. Arthur had always wanted to go to the beach. He saved up all his money for a trip to the beach. Arthur finally decided to go on vacation. While at the beach he fell in love with the water. When he got home, he was happy he went.
Arthur goes to the beach. Arthur was bored one day so he decided to go to the beach. He got a towel and swimsuit to wear and went out on the water. When Arthur arrived at the beach it was very hot. However, when he stepped into the ocean, it was a beautiful sunny day. Arthur was glad that he chose to spend his day at the beach.
Arthur goes to the beach. Arthur is on a long plane trip. He has been waiting for a very long time to finally go to the beach. Finally the plane lands and Arthur boards the plane. On board he sees beautiful ocean and decides to stay there. After landing he spends the rest of the day relaxing by the water.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15}
Arthur goes to the beach. Arthur is on a vacation with his family. His family decides to go to the beach. They spend a lot of time at the beach. Arthur has a great day at the beach. He will never forget that trip!
Arthur goes to the beach. Arthur is bored on a rainy day at work. He decides he needs some fun time. He heads out to the ocean. At first Arthur does not like it. However, after a while he finds that the water is very relaxing.
Arthur goes to the beach. Arthur is bored on a Friday night. He decides he would like to go to the beach. He calls his friend and asks him if he wants to come with him. His friend agrees to take Arthur to the beach. They have a great time at the beach.
Arthur goes to the beach. Arthur loved the ocean. One day, he decided to go for a walk on the beach. He walked down the beach and saw many beautiful flowers. Then, he noticed a seagull flying overhead. Arthur went back home and told his mother about the bird.
Arthur goes to the beach. Arthur loved going to the beach. He had a lot of fun at the beach. One day, Arthur went to the beach and got sand in his eyes. Arthur realized that he was not wearing sunscreen. Arthur went home with red spots on his face from the sand.
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2}
Arthur goes to the beach. Arthur was a very happy boy who loved going to the beach. One day, Arthur's mom told him she had an idea for him. She said that he could take his favorite toy and play in the ocean! He went to the beach with his favorite toy and played all day long. Now, Arthur loves the beach just as much as ever.
Arthur goes to the beach. Arthur was a very lazy boy who never did anything. One day his mom took him to the beach. He played in the water and sunbathed for hours. When it was time to go home, he went with his mother. His mom brought him back home and Arthur slept all day!
Arthur goes to the beach. Arthur is bored one day and decides he needs a vacation. He calls his friends up to go with him to the beach. They all agree that it would be fun to spend time together. When they get there, Arthur spends most of his time swimming. He had a great trip at the beach!
Arthur goes to the beach. Arthur is bored one day and decides to go to the beach. He gets his towel, sunscreen and some sunblock. When he arrives at the beach, it's very hot outside. Finally Arthur finds a spot on the sand that isn't so hot. Now Arthur can enjoy the rest of his day!
Arthur goes to the beach. Arthur is bored at home. He decides he needs a change of scenery. He calls his friend and asks if they can go to the beach. His friends agree to go with him. They spend the day playing in the ocean together.
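The parameter dictionaries above are standard `transformers` sampling arguments. A minimal sketch (not part of the original card) of how one of those configurations could be passed to `generate()`; the prompt and length limit are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jppaolim/v59_Large_2E")
model = AutoModelForCausalLM.from_pretrained("jppaolim/v59_Large_2E")

inputs = tokenizer("Arthur goes to the beach.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    top_k=40,
    temperature=0.8,
    repetition_penalty=1.1,
    max_length=120,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```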
|
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base
|
nestoralvaro
| 2022-06-07T12:57:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-07T10:31:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.9647
- Rouge2: 0.1331
- Rougel: 0.9633
- Rougelsum: 0.9627
- Gen Len: 6.4489
## Model description
More information needed
## Intended uses & limitations
More information needed
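A minimal loading-and-generation sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` seq2seq classes; the input text is a placeholder, and given the `nan` validation loss reported below the outputs may not be useful:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t55_403.csv___topic_text_google_mt5_base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Replace this with the source text to summarize.", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```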
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 36479 | nan | 0.9647 | 0.1331 | 0.9633 | 0.9627 | 6.4489 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
clement-w/PPO-FrozenLakeV1-rlclass
|
clement-w
| 2022-06-07T12:54:22Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"FrozenLake-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-07T12:45:23Z |
---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 0.80 +/- 0.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
---
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-FrozenLake-v1.zip" is a guessed filename; use the actual file stored in this repository.
checkpoint = load_from_hub(repo_id="clement-w/PPO-FrozenLakeV1-rlclass", filename="ppo-FrozenLake-v1.zip")
model = PPO.load(checkpoint)
```
|