modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
jimmy880219/bert-base-chinese-finetuned-squad | jimmy880219 | 2022-10-30T13:25:52Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-30T12:22:01Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-chinese-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-squad
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.3796
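As a hedged usage sketch (not part of the original card): assuming the checkpoint loads with the standard `question-answering` pipeline, extractive QA in Chinese would look like this; the question/context pair is invented for illustration.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a question-answering pipeline.
qa = pipeline("question-answering", model="jimmy880219/bert-base-chinese-finetuned-squad")

# Invented example pair; any Chinese question/context works here.
result = qa(question="北京是哪个国家的首都?", context="北京是中华人民共和国的首都。")
print(result["answer"], result["score"])
```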
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7051 | 1.0 | 6911 | 11.3796 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
shrdlu9/bert-base-cased-ud-NER | shrdlu9 | 2022-10-30T12:02:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"ner",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-30T11:01:00Z | ---
language:
- en
tags:
- ner
metrics:
- seqeval
---
## Overview
This model consists of a bert-base-cased model fine-tuned for Named Entity Recognition (NER) with 18 NE tags on the Universal Dependencies English dataset.
\
https://universaldependencies.org/en/index.html
\
The recognized NE tags are:
| Tag | Description |
|-----------------------|------------------------|
| CARDINAL | cardinal value |
| DATE | date value |
| EVENT | event name |
| FAC | building name |
| GPE | geo-political entity |
| LANGUAGE | language name |
| LAW | law name |
| LOC | location name |
| MONEY | money name |
| NORP | affiliation |
| ORDINAL | ordinal value |
| ORG | organization name |
| PERCENT | percent value |
| PERSON | person name |
| PRODUCT | product name |
| QUANTITY | quantity value |
| TIME | time value |
| WORK_OF_ART | name of work of art |
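As a hedged sketch (not from the original card), the checkpoint should work with the standard token-classification pipeline; the sentence below is invented for illustration.
```python
from transformers import pipeline

# Group sub-word pieces into whole entities labelled with the 18 tags listed above.
ner = pipeline(
    "token-classification",
    model="shrdlu9/bert-base-cased-ud-NER",
    aggregation_strategy="simple",
)
for entity in ner("Barack Obama was born in Hawaii in 1961."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```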
A fine-tuned bert-base-uncased model is also available. |
tlttl/tluo_xml_roberta_base_amazon_review_sentiment_v3 | tlttl | 2022-10-30T11:23:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-30T07:54:25Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tluo_xml_roberta_base_amazon_review_sentiment_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tluo_xml_roberta_base_amazon_review_sentiment_v3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9456
- Accuracy: 0.6023
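As a hedged usage sketch (not part of the original card): the checkpoint should load with the standard text-classification pipeline; the label names depend on the `id2label` mapping stored during training (likely `LABEL_0` … `LABEL_4` for the five review-star classes).
```python
from transformers import pipeline

# Multilingual review-sentiment classification with this fine-tuned XLM-RoBERTa checkpoint.
classifier = pipeline(
    "text-classification",
    model="tlttl/tluo_xml_roberta_base_amazon_review_sentiment_v3",
)
print(classifier("The product arrived quickly and works exactly as described."))
```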
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.056 | 0.33 | 5000 | 0.9885 | 0.5642 |
| 0.944 | 0.67 | 10000 | 0.9574 | 0.5913 |
| 0.9505 | 1.0 | 15000 | 0.9674 | 0.579 |
| 0.8902 | 1.33 | 20000 | 0.9660 | 0.5945 |
| 0.8851 | 1.67 | 25000 | 0.9470 | 0.5888 |
| 0.8714 | 2.0 | 30000 | 0.9456 | 0.6023 |
| 0.7967 | 2.33 | 35000 | 0.9662 | 0.5978 |
| 0.767 | 2.67 | 40000 | 0.9738 | 0.5987 |
| 0.7595 | 3.0 | 45000 | 0.9740 | 0.5988 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
|
NlpHUST/vi-word-segmentation | NlpHUST | 2022-10-30T09:45:24Z | 140 | 4 | transformers | [
"transformers",
"pytorch",
"electra",
"token-classification",
"word segmentation",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-30T04:48:30Z | ---
widget:
- text: "Phát biểu tại phiên thảo luận về tình hình kinh tế xã hội của Quốc hội sáng 28/10 , Bộ trưởng Bộ LĐ-TB&XH Đào Ngọc Dung khái quát , tại phiên khai mạc kỳ họp , lãnh đạo chính phủ đã báo cáo , đề cập tương đối rõ ràng về việc thực hiện các chính sách an sinh xã hội"
tags:
- word segmentation
language:
- vi
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: vi-word-segmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-word-segmentation
This model is a fine-tuned version of [NlpHUST/electra-base-vn](https://huggingface.co/NlpHUST/electra-base-vn) on the VLSP 2013 Vietnamese word segmentation dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Precision: 0.9833
- Recall: 0.9838
- F1: 0.9835
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
You can use this model with the Transformers *pipeline* for word segmentation (token classification).
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("NlpHUST/vi-word-segmentation")
model = AutoModelForTokenClassification.from_pretrained("NlpHUST/vi-word-segmentation")
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer)
example = "Phát biểu tại phiên thảo luận về tình hình kinh tế xã hội của Quốc hội sáng 28/10 , Bộ trưởng Bộ LĐ-TB&XH Đào Ngọc Dung khái quát , tại phiên khai mạc kỳ họp , lãnh đạo chính phủ đã báo cáo , đề cập tương đối rõ ràng về việc thực hiện các chính sách an sinh xã hội"
ner_results = nlp(example)
example_tok = ""
for e in ner_results:
if "##" in e["word"]:
example_tok = example_tok + e["word"].replace("##","")
elif e["entity"] =="I":
example_tok = example_tok + "_" + e["word"]
else:
example_tok = example_tok + " " + e["word"]
print(example_tok)
Phát_biểu tại phiên thảo_luận về tình_hình kinh_tế xã_hội của Quốc_hội sáng 28 / 10 , Bộ_trưởng Bộ LĐ - TB [UNK] XH Đào_Ngọc_Dung khái_quát , tại phiên khai_mạc kỳ họp , lãnh_đạo chính_phủ đã báo_cáo , đề_cập tương_đối rõ_ràng về việc thực_hiện các chính_sách an_sinh xã_hội
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0168 | 1.0 | 4712 | 0.0284 | 0.9813 | 0.9825 | 0.9819 | 0.9904 |
| 0.0107 | 2.0 | 9424 | 0.0350 | 0.9789 | 0.9814 | 0.9802 | 0.9895 |
| 0.005 | 3.0 | 14136 | 0.0364 | 0.9826 | 0.9843 | 0.9835 | 0.9909 |
| 0.0033 | 4.0 | 18848 | 0.0434 | 0.9830 | 0.9831 | 0.9830 | 0.9908 |
| 0.0017 | 5.0 | 23560 | 0.0501 | 0.9833 | 0.9838 | 0.9835 | 0.9911 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fumi13/q-Taxi-v3 | fumi13 | 2022-10-30T09:40:15Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-30T09:40:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="fumi13/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
fumi13/q-FrozenLake-v1-4x4-noSlippery | fumi13 | 2022-10-30T09:27:39Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-30T09:27:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="fumi13/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Tritkoman/English2Sardinian | Tritkoman | 2022-10-30T07:41:31Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"translation",
"en",
"it",
"dataset:Tritkoman/autotrain-data-gatvotva",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| translation | 2022-10-30T07:31:37Z | ---
tags:
- autotrain
- translation
language:
- en
- it
datasets:
- Tritkoman/autotrain-data-gatvotva
co2_eq_emissions:
emissions: 14.908336657166226
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1931765297
- CO2 Emissions (in grams): 14.9083
## Validation Metrics
- Loss: 2.666
- SacreBLEU: 17.990
- Gen len: 64.922
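As a hedged usage sketch (not part of the original card): AutoTrain translation models are normally sequence-to-sequence, so loading with `AutoModelForSeq2SeqLM` should work; the input sentence is invented for illustration.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Tritkoman/English2Sardinian")
model = AutoModelForSeq2SeqLM.from_pretrained("Tritkoman/English2Sardinian")

# Translate an English sentence into the target language listed in the card tags.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|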
g30rv17ys/ddpm-hkuoct-dr-256-200ep | g30rv17ys | 2022-10-30T06:16:58Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-29T19:28:18Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-hkuoct-dr-256-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
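The snippet above was left as a TODO; as a hedged sketch (not from the original card), a standard unconditional `DDPMPipeline` call would look like the following, assuming the repository ships the usual pipeline files.
```python
from diffusers import DDPMPipeline

# Load the unconditional DDPM pipeline from this repository and sample one image.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-dr-256-200ep")
image = pipeline(num_inference_steps=1000).images[0]
image.save("ddpm_sample.png")
```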
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-dr-256-200ep/tensorboard?#scalars)
|
hsc748NLP/TfhBERT | hsc748NLP | 2022-10-30T05:37:15Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-21T14:52:14Z | ---
license: apache-2.0
---
https://github.com/hsc748NLP/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing |
hsc748NLP/BtfhBERT | hsc748NLP | 2022-10-30T05:36:55Z | 162 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-21T14:52:37Z | ---
license: apache-2.0
---
https://github.com/hsc748NLP/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing |
bharadwajkg/sample-beauty-cardiffnlp-twitter-roberta-base-sentiment | bharadwajkg | 2022-10-30T05:01:44Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T07:45:57Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: sample-beauty-cardiffnlp-twitter-roberta-base-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sample-beauty-cardiffnlp-twitter-roberta-base-sentiment
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3954
- Accuracy: 0.9
- F1: 0.6805
- Recall: 0.6647
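As a hedged usage sketch (not part of the original card): the checkpoint loads with the standard sequence-classification classes; the label mapping is whatever `id2label` the training run stored (the base model uses negative/neutral/positive).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "bharadwajkg/sample-beauty-cardiffnlp-twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Invented example review; prints the highest-scoring label name.
inputs = tokenizer("Love this moisturizer, my skin feels great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```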
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Ankit15nov/xlm-roberta-base-finetuned-panx-it | Ankit15nov | 2022-10-30T03:24:38Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-30T03:22:50Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8199834847233691
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2484
- F1: 0.8200
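As a hedged sketch (not from the original card), the model should work with the standard token-classification pipeline on Italian text; the sentence is invented for illustration.
```python
from transformers import pipeline

# Italian NER fine-tuned on PAN-X.it; entities are grouped into whole spans.
ner = pipeline(
    "token-classification",
    model="Ankit15nov/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Giuseppe Verdi nacque a Busseto, in Italia."))
```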
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7739 | 1.0 | 70 | 0.3264 | 0.7482 |
| 0.3054 | 2.0 | 140 | 0.2655 | 0.7881 |
| 0.1919 | 3.0 | 210 | 0.2484 | 0.8200 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.5.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sd-concepts-library/leif-jones | sd-concepts-library | 2022-10-30T01:21:56Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-30T01:21:52Z | ---
license: mit
---
### leif jones on Stable Diffusion
This is the `<leif-jones>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
tlttl/tluo_xml_roberta_base_amazon_review_sentiment_v2 | tlttl | 2022-10-30T00:51:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T15:21:12Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tluo_xml_roberta_base_amazon_review_sentiment_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tluo_xml_roberta_base_amazon_review_sentiment_v2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9630
- Accuracy: 0.6057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0561 | 0.33 | 5000 | 0.9954 | 0.567 |
| 0.948 | 0.67 | 10000 | 0.9641 | 0.5862 |
| 0.9557 | 1.0 | 15000 | 0.9605 | 0.589 |
| 0.8891 | 1.33 | 20000 | 0.9420 | 0.5875 |
| 0.8889 | 1.67 | 25000 | 0.9397 | 0.592 |
| 0.8777 | 2.0 | 30000 | 0.9236 | 0.6042 |
| 0.778 | 2.33 | 35000 | 0.9612 | 0.5972 |
| 0.7589 | 2.67 | 40000 | 0.9728 | 0.5995 |
| 0.7593 | 3.0 | 45000 | 0.9630 | 0.6057 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
nqhuy/ASR_Phimtailieu_WithLM | nqhuy | 2022-10-30T00:09:00Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-26T01:38:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: ASR_Phimtailieu_WithLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR_Phimtailieu_WithLM
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5235
- eval_wer: 0.2531
- eval_runtime: 570.9035
- eval_samples_per_second: 15.467
- eval_steps_per_second: 1.934
- epoch: 2.58
- step: 39000
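As a hedged usage sketch (not part of the original card): the checkpoint should load with the automatic-speech-recognition pipeline; `sample.wav` is a placeholder path, and the language-model-boosted decoding implied by the name requires `pyctcdecode`/`kenlm` to be installed.
```python
from transformers import pipeline

# Vietnamese speech recognition; the input audio should be 16 kHz mono.
asr = pipeline("automatic-speech-recognition", model="nqhuy/ASR_Phimtailieu_WithLM")
print(asr("sample.wav")["text"])
```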
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.42184e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ifrz/wav2vec2-large-xlsr-galician | ifrz | 2022-10-29T23:47:47Z | 4,518 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-04-29T08:55:46Z | ---
language: gl
datasets:
- OpenSLR 77
- mozilla-foundation common_voice_8_0
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician wav2vec2-large-xlsr-galician
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset_1:
name: OpenSLR
type: openslr
args: gl
dataset_2:
name: mozilla-foundation
type: common voice
args: gl
metrics:
- name: Test WER
type: wer
value: 7.12
---
# wav2vec2-large-xlsr-galician
Fine-tuned model for the Galician language.
Based on the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) self-supervised model.
Fine-tuned with labelled audio from [OpenSLR](https://openslr.org/77/) and Mozilla [Common_Voice](https://commonvoice.mozilla.org/gl) (both datasets previously refined).
Check the training metrics to see the results.
# Testing
Make sure that the speech audio input is sampled at 16 kHz (mono).
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model = Wav2Vec2ForCTC.from_pretrained("ifrz/wav2vec2-large-xlsr-galician")
processor = Wav2Vec2Processor.from_pretrained("ifrz/wav2vec2-large-xlsr-galician")
# Reading taken audio clip
import librosa, torch
audio, rate = librosa.load("./gl_test_1.wav", sr = 16000)
# Taking an input value
input_values = processor(audio, sampling_rate=16_000, return_tensors = "pt", padding="longest").input_values
# Storing logits (non-normalized prediction values)
logits = model(input_values).logits
# Storing predicted ids
prediction = torch.argmax(logits, dim = -1)
# Passing the prediction to the tokenizer's decode method to get the transcription
transcription = processor.batch_decode(prediction)[0]
print(transcription)
``` |
prakharz/DIAL-T0 | prakharz | 2022-10-29T23:39:24Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"arxiv:2205.12673",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-29T23:35:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DIAL_T0
results: []
widget:
- text: "Instruction: Edit the provided response into a response that is fluent and coherent to the dialogue context. \n\nInput: [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [RESPONSE] Can describe itit , sir ? It will help us find [ENDOFDIALOGUE] [QUESTION] Given this context and response provided, the edited response is"
- text: "Instruction: Generate a response that starts with the provided initial phrase. \n\nInput: [INITIAL_PHRASE] Please describe [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] A response with the provided initial phrase is"
- text: "Instruction: Generate a response that starts with the provided initial phrase and contains the provided keywords. \n\nInput: [INITIAL PHRASE] Please describe [KEYWORDS] color, any documents [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] A response with the provided initial phrase and keywords is"
- text: "Instruction: What is the intent of the response \n\nInput: [CONTEXT] How may I help you? [RESPONSE] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [OPTIONS] booking, reservation change, checkout, lost&found, time information, security, schedules [QUESTION] The intent of the response is"
- text: "Instruction: Generate a summary for the following dialog context. \n\nInput: [CONTEXT] Ann: Wanna go out? [ENDOFTURN] Kate: Not really, I feel sick. [ENDOFTURN] Ann: Drink mint tea, they say it helps. Ok, so we'll meet up another time. Take care! [ENDOFTURN] Kate: Thanks! [ENDOFDIALOGUE] [QUESTION] For this dialogue, the summary is: "
- text: "Instruction: Consider the context of the conversation and a document and generate an answer accordingly \n\nInput: [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] What is the response of the following question: Where was the person going to?"
- text: "Instruction: Generate a response using the provided background knowledge. \n\nInput: [KNOWLEDGE] Emailid for cases related to lost and found is [email protected] [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] Generate a response using the information from the background knowledge."
---
# InstructDial
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.
[Paper](https://arxiv.org/abs/2205.12673)
# Dial_T0
A T5 XL-sized (3B-parameter) model trained on InstructDial tasks. This model is a fine-tuned version of the [bigscience/T0_3B](https://huggingface.co/bigscience/T0_3B) model.
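As a hedged usage sketch (not part of the original card): the prompt below reuses one of the widget examples above; note the underlying model is ~3B parameters, so it needs a correspondingly large amount of memory.
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="prakharz/DIAL-T0")

# Instruction/input format copied from the card's widget examples.
prompt = (
    "Instruction: Generate a response that starts with the provided initial phrase. \n\n"
    "Input: [INITIAL_PHRASE] Please describe [CONTEXT] How may I help you? [ENDOFTURN] "
    "I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] "
    "A response with the provided initial phrase is"
)
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```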
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
All tasks in InstructDial framework (including all dialogue eval tasks)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 72
- total_eval_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
psdwizzard/Boredape-Diffusion | psdwizzard | 2022-10-29T23:14:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-29T22:41:39Z | ---
license: creativeml-openrail-m
---
Boredape Diffusion
This is a fine-tuned Stable Diffusion model trained on images of the Bored Ape Yacht Club collection. Make your own, sometimes busted-looking, Bored Apes.
Use the keyword `boredape` in your prompt.
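As a hedged sketch (not part of the original card), and assuming the repository ships diffusers-format weights (if it only contains a `.ckpt`, load that in your Stable Diffusion UI instead):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint and use the trained keyword in the prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "psdwizzard/Boredape-Diffusion", torch_dtype=torch.float16
).to("cuda")
image = pipe("portrait of a boredape wearing a space suit, digital art").images[0]
image.save("boredape.png")
```
|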
sd-concepts-library/edgerunners-style-v2 | sd-concepts-library | 2022-10-29T23:01:46Z | 0 | 6 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-29T23:01:35Z | ---
license: mit
---
### Edgerunners Style v2 on Stable Diffusion
This is the `<edgerunners-style-av-v2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:









|
beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive | beautifulpichai | 2022-10-29T22:45:34Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-29T22:45:25Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2451 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2451,
"warmup_steps": 246,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Alt41r/gpt-simpson | Alt41r | 2022-10-29T22:44:18Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Text Generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-29T20:35:04Z | ---
language:
- en
tags:
- Text Generation
- conversational
widget:
- text: "Do you like beer?"
example_title: "Example 1"
- text: "Who are you?"
example_title: "Example 2"
--- |
sergiocannata/convnext-tiny-224-finetuned-brs | sergiocannata | 2022-10-29T22:41:21Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-29T22:16:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: convnext-tiny-224-finetuned-brs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8235294117647058
- name: F1
type: f1
value: 0.7272727272727272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-brs
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8667
- Accuracy: 0.8235
- F1: 0.7273
- Precision (ppv): 0.8
- Recall (sensitivity): 0.6667
- Specificity: 0.9091
- Npv: 0.8333
- Auc: 0.7879
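As a hedged usage sketch (not part of the original card): the checkpoint should load with the standard image-classification pipeline; `image.jpg` is a placeholder path.
```python
from transformers import pipeline

# Binary image classifier fine-tuned from convnext-tiny-224.
classifier = pipeline("image-classification", model="sergiocannata/convnext-tiny-224-finetuned-brs")
print(classifier("image.jpg"))
```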
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (ppv) | Recall (sensitivity) | Specificity | Npv | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------------:|:--------------------:|:-----------:|:------:|:------:|
| 0.6766 | 6.25 | 100 | 0.7002 | 0.4706 | 0.5263 | 0.3846 | 0.8333 | 0.2727 | 0.75 | 0.5530 |
| 0.6408 | 12.49 | 200 | 0.6770 | 0.6471 | 0.5714 | 0.5 | 0.6667 | 0.6364 | 0.7778 | 0.6515 |
| 0.464 | 18.74 | 300 | 0.6624 | 0.5882 | 0.5882 | 0.4545 | 0.8333 | 0.4545 | 0.8333 | 0.6439 |
| 0.4295 | 24.98 | 400 | 0.6938 | 0.5294 | 0.5 | 0.4 | 0.6667 | 0.4545 | 0.7143 | 0.5606 |
| 0.3952 | 31.25 | 500 | 0.5974 | 0.7059 | 0.6154 | 0.5714 | 0.6667 | 0.7273 | 0.8 | 0.6970 |
| 0.1082 | 37.49 | 600 | 0.6163 | 0.6471 | 0.5 | 0.5 | 0.5 | 0.7273 | 0.7273 | 0.6136 |
| 0.1997 | 43.74 | 700 | 0.6155 | 0.7059 | 0.6154 | 0.5714 | 0.6667 | 0.7273 | 0.8 | 0.6970 |
| 0.1267 | 49.98 | 800 | 0.9063 | 0.6471 | 0.5714 | 0.5 | 0.6667 | 0.6364 | 0.7778 | 0.6515 |
| 0.1178 | 56.25 | 900 | 0.8672 | 0.7059 | 0.6667 | 0.5556 | 0.8333 | 0.6364 | 0.875 | 0.7348 |
| 0.2008 | 62.49 | 1000 | 0.7049 | 0.8235 | 0.7692 | 0.7143 | 0.8333 | 0.8182 | 0.9 | 0.8258 |
| 0.0996 | 68.74 | 1100 | 0.4510 | 0.8235 | 0.7692 | 0.7143 | 0.8333 | 0.8182 | 0.9 | 0.8258 |
| 0.0115 | 74.98 | 1200 | 0.7561 | 0.8235 | 0.7692 | 0.7143 | 0.8333 | 0.8182 | 0.9 | 0.8258 |
| 0.0177 | 81.25 | 1300 | 1.0400 | 0.7059 | 0.6667 | 0.5556 | 0.8333 | 0.6364 | 0.875 | 0.7348 |
| 0.0261 | 87.49 | 1400 | 0.9139 | 0.8235 | 0.7692 | 0.7143 | 0.8333 | 0.8182 | 0.9 | 0.8258 |
| 0.028 | 93.74 | 1500 | 0.7367 | 0.7647 | 0.7143 | 0.625 | 0.8333 | 0.7273 | 0.8889 | 0.7803 |
| 0.0056 | 99.98 | 1600 | 0.8667 | 0.8235 | 0.7273 | 0.8 | 0.6667 | 0.9091 | 0.8333 | 0.7879 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
RafboOrg/ppo-LunarLander-v2 | RafboOrg | 2022-10-29T22:04:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-29T21:32:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 216.33 +/- 18.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
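As a hedged sketch of what that code could look like (the filename `ppo-LunarLander-v2.zip` is an assumption; check the repository's file list), using the classic Gym API that Stable-Baselines3 expected at the time:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO model.
checkpoint = load_from_hub(repo_id="RafboOrg/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the policy over a few episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```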
|
SirVeggie/Aeolian | SirVeggie | 2022-10-29T21:50:20Z | 0 | 4 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-17T17:36:12Z | ---
license: creativeml-openrail-m
---
# Aeolian stable diffusion model
Original artist: WLOP\
Patreon: https://www.patreon.com/wlop/posts
An original character created and drawn by WLOP for his webcomic Ghostblade.
## Basic explanation
Token and Class words are what guide the AI to produce images similar to the trained style/object/character.
Include any mix of these words in the prompt to produce varying results, or exclude them to have a less pronounced effect.
There is usually at least a slight stylistic effect even without the words, but it is recommended to include at least one.
Adding the token word/phrase followed by the class word/phrase at the start of the prompt produces results most similar to the trained concept, but they can be included elsewhere as well. Some models produce better results when not including all token/class words.
3k models are more flexible, while 5k models produce images closer to the trained concept.
I recommend 2k/3k models for normal use, and 5k/6k models for model merging and use without token/class words.
However, it can also be very prompt-specific. I highly recommend self-experimentation.
## Comparison
Aeolian and aeolian_3000 are quite similar with slight differences.
The epoch 5 and 6 versions come from earlier in the Waifu Diffusion 1.3 training process, so it is easier to produce more varied, non-anime results.
## aeolian
```
token: m_aeolian
class: §¶•
base: waifu diffusion 1.2-e5
notes: 2020 step training
```
## aeolian_3000
```
token: m_aeolian
class: §¶•
base: waifu diffusion 1.2-e6
notes: 3000 step training
```
## aeolian_v2
```
token: m_concept
class: §
base: waifu diffusion 1.3
notes: 1.3 model, which may give some benefits over 1.2-e5
```
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive | beautifulpichai | 2022-10-29T21:15:08Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-29T21:14:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2451 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2451,
"warmup_steps": 246,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
NikitaBaramiia/dqn-SpaceInvadersNoFrameskip-v4 | NikitaBaramiia | 2022-10-29T21:11:12Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-29T21:10:39Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 451.00 +/- 99.62
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NikitaBaramiia -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NikitaBaramiia -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NikitaBaramiia
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
huggingtweets/mcpeachpies | huggingtweets | 2022-10-29T20:45:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-29T20:33:46Z | ---
language: en
thumbnail: http://www.huggingtweets.com/mcpeachpies/1667076223314/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1396209493415845888/vye-v8UP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mcpeachpies 🍑</div>
<div style="text-align: center; font-size: 14px;">@mcpeachpies</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mcpeachpies 🍑.
| Data | mcpeachpies 🍑 |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 208 |
| Short tweets | 1076 |
| Tweets kept | 1955 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ys0xeox/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mcpeachpies's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/d1x4t5yn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/d1x4t5yn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mcpeachpies')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Athithya/finetuning-sentiment-model-3000-samples | Athithya | 2022-10-29T19:52:08Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T19:31:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ankur-gupta/dummy | ankur-gupta | 2022-10-29T18:36:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"feature-extraction",
"generated_from_keras_callback",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-10-27T21:35:24Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dummy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Stancld/long-t5-local-large | Stancld | 2022-10-29T18:18:34Z | 13 | 0 | transformers | [
"transformers",
"tf",
"longt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-29T18:13:19Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: long-t5-local-large
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# long-t5-local-large
This model is a fine-tuned version of [google/long-t5-local-large](https://huggingface.co/google/long-t5-local-large) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
Stancld/long-t5-local-base | Stancld | 2022-10-29T18:13:08Z | 7 | 0 | transformers | [
"transformers",
"tf",
"longt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-29T18:11:08Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: long-t5-local-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# long-t5-local-base
This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
Stancld/long-t5-tglobal-large | Stancld | 2022-10-29T18:11:04Z | 12 | 0 | transformers | [
"transformers",
"tf",
"longt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-29T18:04:59Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: long-t5-tglobal-large
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-large
This model is a fine-tuned version of [google/long-t5-tglobal-large](https://huggingface.co/google/long-t5-tglobal-large) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
ViktorDo/SciBERT-WIKI_Epiphyte_Finetuned | ViktorDo | 2022-10-29T17:39:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T16:21:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-WIKI_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-WIKI_Epiphyte_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0782 | 1.0 | 2094 | 0.0624 |
| 0.0591 | 2.0 | 4188 | 0.0481 |
| 0.0278 | 3.0 | 6282 | 0.0530 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
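Since the card does not include a usage snippet, here is a minimal inference sketch; the label names returned by the pipeline (e.g. `LABEL_0`/`LABEL_1`) and their meaning depend on the training setup and are assumptions here.
```python
from transformers import pipeline

# Sketch: classify a plant description with the fine-tuned SciBERT checkpoint.
# Which label corresponds to "epiphyte" is not documented, so treat the mapping as an assumption.
classifier = pipeline("text-classification", model="ViktorDo/SciBERT-WIKI_Epiphyte_Finetuned")
print(classifier("This orchid grows on the branches of rainforest trees without rooting in soil."))
```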
|
huggingtweets/wayneradiotv | huggingtweets | 2022-10-29T17:30:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-29T17:30:00Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511060623072927747/xvz5xYEj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wayneradiotv</div>
<div style="text-align: center; font-size: 14px;">@wayneradiotv</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wayneradiotv.
| Data | wayneradiotv |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 1142 |
| Short tweets | 365 |
| Tweets kept | 1720 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nfxw79q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wayneradiotv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dhlzg3t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dhlzg3t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wayneradiotv')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/socpens | huggingtweets | 2022-10-29T17:04:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-29T17:03:12Z | ---
language: en
thumbnail: http://www.huggingtweets.com/socpens/1667063063525/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404907635934216205/unH2FvUy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">scorpy</div>
<div style="text-align: center; font-size: 14px;">@socpens</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from scorpy.
| Data | scorpy |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 758 |
| Short tweets | 423 |
| Tweets kept | 2055 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xewzfqo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @socpens's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1u64kl11) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1u64kl11/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/socpens')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ViktorDo/SciBERT-WIKI_Growth_Form_Finetuned | ViktorDo | 2022-10-29T16:06:48Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T14:41:58Z | ---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-WIKI_Growth_Form_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-WIKI_Growth_Form_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.259 | 1.0 | 2320 | 0.2713 |
| 0.195 | 2.0 | 4640 | 0.2513 |
| 0.149 | 3.0 | 6960 | 0.2853 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
wskhanh/bert-finetuned-squad | wskhanh | 2022-10-29T15:05:55Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-28T13:24:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
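No usage example is given above, so here is a minimal extractive question-answering sketch; the question and context are illustrative inputs only.
```python
from transformers import pipeline

# Sketch: extractive QA with the fine-tuned checkpoint (illustrative inputs).
qa = pipeline("question-answering", model="wskhanh/bert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], result["score"])
```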
|
huggingtweets/donvesh | huggingtweets | 2022-10-29T11:48:30Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-29T11:47:07Z | ---
language: en
thumbnail: http://www.huggingtweets.com/donvesh/1667044106194/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1396435744416178186/awVZj7eG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DONVESH Ω</div>
<div style="text-align: center; font-size: 14px;">@donvesh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DONVESH Ω.
| Data | DONVESH Ω |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 0 |
| Short tweets | 917 |
| Tweets kept | 2330 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/78vg6mnn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donvesh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cueqqyt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cueqqyt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/donvesh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
VEG3/TLDR-Vegan-Studies | VEG3 | 2022-10-29T11:36:42Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"autotrain",
"summarization",
"en",
"dataset:vegancreativecompass/autotrain-data-scitldr-for-vegan-studies",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-10-29T10:48:26Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "Positivity towards meat consumption remains strong, despite evidence of negative environmental and ethical outcomes. Although awareness of these repercussions is rising, there is still public resistance to removing meat from our diets. One potential method to alleviate these effects is to produce in vitro meat: meat grown in a laboratory that does not carry the same environmental or ethical concerns. However, there is limited research examining public attitudes towards in vitro meat, thus we know little about the capacity for it be accepted by consumers. This study aimed to examine perceptions of in vitro meat and identify potential barriers that might prevent engagement. Through conducting an online survey with US participants, we identified that although most respondents were willing to try in vitro meat, only one third were definitely or probably willing to eat in vitro meat regularly or as a replacement for farmed meat. Men were more receptive to it than women, as were politically liberal respondents compared with conservative ones. Vegetarians and vegans were more likely to perceive benefits compared to farmed meat, but they were less likely to want to try it than meat eaters. The main concerns were an anticipated high price, limited taste and appeal and a concern that the product was unnatural. It is concluded that people in the USA are likely to try in vitro meat, but few believed that it would replace farmed meat in their diet."
datasets:
- vegancreativecompass/autotrain-data-scitldr-for-vegan-studies
co2_eq_emissions:
emissions: 57.779835625872906
---
# About This Model
This model has been trained to take abstracts of scientific studies about veganism & animal rights and turn them into single-sentence takeaways that vegan businesses and animal activists can apply to their work. The dataset was curated by scraping TLDRs and abstracts from Semantic Scholar, with vegan activists and marketing professionals from VEG3 reviewing the usefulness of a random sample of the dataset to ensure its relevance to vegan businesses and animal activists.
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1923365100
- CO2 Emissions (in grams): 57.7798
## Validation Metrics
- Loss: 0.711
- Rouge1: 44.317
- Rouge2: 30.335
- RougeL: 41.369
- RougeLsum: 41.198
- Gen Len: 17.855
## Usage
You can use cURL to access this model:
```
curl https://api-inference.huggingface.co/models/VEG3/TLDR-Vegan-Studies \
-X POST \
-d '{"inputs":"ABSTRACT"}' \
-H "Authorization: Bearer YOURAPIKEY"
```
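The same endpoint can also be called from Python. This is a minimal sketch using the `requests` library against the Inference API URL shown in the cURL example above; replace `YOURAPIKEY` with your own token, and note that the example abstract is only an illustrative input.
```python
import requests

# Same Inference API endpoint as the cURL example above
API_URL = "https://api-inference.huggingface.co/models/VEG3/TLDR-Vegan-Studies"
headers = {"Authorization": "Bearer YOURAPIKEY"}  # your Hugging Face API token

# Illustrative input: pass the abstract of a study as the "inputs" field
abstract = "Positivity towards meat consumption remains strong, despite evidence of negative environmental and ethical outcomes."
response = requests.post(API_URL, headers=headers, json={"inputs": abstract})
print(response.json())
```
 |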
shuaifan/SentiWSP | shuaifan | 2022-10-29T11:02:55Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-19T09:53:50Z | # SentiWSP
## For paper: Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis
We propose **SentiWSP**, a novel **Senti**ment-aware pre-trained language model with combined **W**ord-level and **S**entence-level **P**re-training tasks.
The word level pre-training task detects replaced sentiment words, via a generator-discriminator framework, to enhance the PLM's knowledge about sentiment words.
The sentence level pre-training task further strengthens the discriminator via a contrastive learning framework, with similar sentences as negative samples, to encode sentiments in a sentence.
## Fine-tuning
You can also load our model from the Hugging Face Hub ([https://huggingface.co/shuaifan/SentiWSP](https://huggingface.co/shuaifan/SentiWSP)) and fine-tune it on sentiment analysis tasks:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("shuaifan/SentiWSP")
model = AutoModelForSequenceClassification.from_pretrained("shuaifan/SentiWSP")
```
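Continuing from the snippet above, here is a minimal inference sketch. The classification head of this checkpoint generally needs task-specific fine-tuning first, and the label-to-sentiment mapping shown in the comment is an assumption for illustration only.
```python
import torch

# Minimal inference sketch (continuing from the snippet above).
# Fine-tune the model on your sentiment dataset before relying on its outputs.
inputs = tokenizer("This film was an absolute delight to watch.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # e.g. 0 = negative, 1 = positive (assumed mapping)
```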
|
musika/musika-s3rl-happy-hardcore | musika | 2022-10-29T10:57:39Z | 0 | 4 | null | [
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
]
| null | 2022-10-29T10:57:16Z | ---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika_s3rl_happy_hardcore
## Model provided by: Broccaloo
Pretrained musika_s3rl_happy_hardcore model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained musika_s3rl_happy_hardcore model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
Tkelley1990/Ddd | Tkelley1990 | 2022-10-29T10:50:16Z | 0 | 0 | null | [
"doi:10.57967/hf/0071",
"region:us"
]
| null | 2022-10-29T10:48:40Z | My wife getting her vagina lick by another women |
bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-deprecated | bekirbakar | 2022-10-29T10:06:09Z | 36 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-06-01T09:50:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-tr-fine-tuning-deprecated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-tr-fine-tuning-deprecated
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
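A minimal transcription sketch is shown below. It assumes the repository ships the processor/tokenizer files required by the ASR pipeline and that the target language is Turkish (suggested by the `tr` suffix); the audio path is a placeholder.
```python
from transformers import pipeline

# Sketch: transcribe a (presumably Turkish) speech recording; the file path is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-deprecated",
)
print(asr("sample_audio.wav")["text"])
```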
|
Pablo94/racism-finetuned-detests-29-10-2022 | Pablo94 | 2022-10-29T08:53:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T08:34:05Z | ---
license: cc
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: racism-finetuned-detests-29-10-2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# racism-finetuned-detests-29-10-2022
This model is a fine-tuned version of [davidmasip/racism](https://huggingface.co/davidmasip/racism) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0150
- Accuracy: 0.8560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2659 | 1.0 | 153 | 0.3250 | 0.8429 |
| 0.1191 | 2.0 | 306 | 0.5344 | 0.8380 |
| 0.0074 | 3.0 | 459 | 0.8188 | 0.8396 |
| 0.0001 | 4.0 | 612 | 0.9264 | 0.8462 |
| 0.0001 | 5.0 | 765 | 0.9551 | 0.8462 |
| 0.0001 | 6.0 | 918 | 0.9771 | 0.8527 |
| 0.0001 | 7.0 | 1071 | 0.9937 | 0.8527 |
| 0.0001 | 8.0 | 1224 | 1.0054 | 0.8560 |
| 0.0 | 9.0 | 1377 | 1.0126 | 0.8560 |
| 0.0001 | 10.0 | 1530 | 1.0150 | 0.8560 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sirui/bert-base-chinese-finetuned-own | sirui | 2022-10-29T08:33:49Z | 156 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-29T07:59:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-chinese-finetuned-own
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-own
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the Myown Car_information dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 120 | 1.7141 |
| No log | 2.0 | 240 | 1.6677 |
| No log | 3.0 | 360 | 1.7976 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pepa/bigbird-roberta-large-snli | pepa | 2022-10-29T06:20:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"big_bird",
"text-classification",
"generated_from_trainer",
"dataset:snli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T06:18:46Z | ---
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: bigbird-roberta-large-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-roberta-large-snli
This model was trained from scratch on the snli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2437
- eval_p: 0.9216
- eval_r: 0.9214
- eval_f1: 0.9215
- eval_runtime: 22.8545
- eval_samples_per_second: 429.849
- eval_steps_per_second: 26.866
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
pepa/deberta-v3-large-snli | pepa | 2022-10-29T06:18:08Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:snli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T06:16:18Z | ---
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: deberta-v3-large-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-snli
This model was trained from scratch on the snli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2545
- eval_p: 0.9169
- eval_r: 0.9164
- eval_f1: 0.9166
- eval_runtime: 30.4321
- eval_samples_per_second: 322.817
- eval_steps_per_second: 20.176
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
pepa/roberta-large-snli | pepa | 2022-10-29T06:16:04Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:snli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-29T06:14:17Z | ---
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: roberta-large-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-snli
This model was trained from scratch on the snli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3151
- eval_p: 0.9017
- eval_r: 0.9010
- eval_f1: 0.9012
- eval_runtime: 23.1208
- eval_samples_per_second: 424.898
- eval_steps_per_second: 26.556
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
google/maxim-s3-deblurring-reds | google | 2022-10-29T05:00:10Z | 0 | 6 | keras | [
"keras",
"tf-keras",
"vision",
"maxim",
"image-to-image",
"en",
"dataset:reds",
"arxiv:2201.02973",
"license:apache-2.0",
"region:us"
]
| image-to-image | 2022-10-18T18:35:22Z | ---
license: apache-2.0
library_name: keras
language: en
tags:
- vision
- maxim
- image-to-image
datasets:
- reds
---
# MAXIM pre-trained on REDS for image deblurring
MAXIM model pre-trained for image deblurring. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 28.93 and an SSIM of 0.865.
## Intended uses & limitations
You can use the raw model for image deblurring tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/blob/main/images/Deblurring/input/109fromGOPR1096.MP4.png?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-deblurring-reds")
predictions = model.predict(tf.expand_dims(image, 0))
```
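To save or display the result, the raw `predictions` output needs a little post-processing. The sketch below is a convenience snippet built on assumptions: the model may return (possibly nested) lists of per-stage outputs, and pixel values are assumed to land roughly in the [0, 1] range; see the notebook linked below for the exact pipeline.
```python
# Post-processing sketch (assumptions noted above, not the official pipeline).
preds = predictions
while isinstance(preds, (list, tuple)):  # some MAXIM variants return per-stage outputs
    preds = preds[-1]                    # keep the final stage
restored = np.array(preds[0], dtype=np.float32)   # drop the batch dimension
restored = np.clip(restored, 0.0, 1.0)            # assumed output range [0, 1]
Image.fromarray((restored * 255.0).astype(np.uint8)).save("deblurred.png")
```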
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
``` |
sd-concepts-library/warhammer-40k-drawing-style | sd-concepts-library | 2022-10-29T03:55:44Z | 0 | 5 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-29T03:29:27Z | ---
license: mit
---
### Warhammer 40k Drawing style on Stable Diffusion
This is the `<warhammer40k-drawing-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










Here are images generated with this style:



 |
divamgupta/stable-diffusion-tensorflow | divamgupta | 2022-10-29T02:04:36Z | 0 | 6 | null | [
"region:us"
]
| null | 2022-09-17T04:06:46Z | Weights for the TensorFlow implementation of Stable Diffusion.
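A minimal generation sketch is shown below. It assumes the companion `stable_diffusion_tf` package from the author's GitHub implementation (divamgupta/stable-diffusion-tensorflow), which downloads these weights; the class and argument names follow that repository's README and should be treated as assumptions if your package version differs.
```python
# Sketch only: assumes `pip install git+https://github.com/divamgupta/stable-diffusion-tensorflow`
# plus TensorFlow and Pillow; argument names follow that repo's README.
from stable_diffusion_tf.stable_diffusion import StableDiffusion
from PIL import Image

generator = StableDiffusion(img_height=512, img_width=512, jit_compile=False)
images = generator.generate(
    "a photograph of an astronaut riding a horse",
    num_steps=50,
    unconditional_guidance_scale=7.5,
    temperature=1,
    batch_size=1,
)
Image.fromarray(images[0]).save("output.png")
```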
License : creativeml-openrail-m |
NeelNanda/SoLU_12L_v23_old | NeelNanda | 2022-10-29T01:21:18Z | 111 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-15T01:27:20Z | A GPT-2 Medium sized SoLU model trained on 11.7B tokens of the Pile (training crashed because of dodgy data loaders at 11B, and wasn't resumed, so this is shorter than the others). 12 layers, d_model=1536. |
huggingtweets/davidad | huggingtweets | 2022-10-29T00:38:44Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-29T00:35:53Z | ---
language: en
thumbnail: http://www.huggingtweets.com/davidad/1667003842158/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1580233178266091521/E1XjQ5xZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">davidad 🎇</div>
<div style="text-align: center; font-size: 14px;">@davidad</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from davidad 🎇.
| Data | davidad 🎇 |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 155 |
| Short tweets | 276 |
| Tweets kept | 2782 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fmrw5sa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidad's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/f4jmon3b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/f4jmon3b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/davidad')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jhakaran1/bert-essay-concat | jhakaran1 | 2022-10-29T00:00:25Z | 156 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-28T02:20:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-essay-concat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-essay-concat
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0735
- Accuracy: 0.6331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7024 | 1.0 | 3677 | 0.9159 | 0.6329 |
| 0.6413 | 2.0 | 7354 | 1.0267 | 0.6346 |
| 0.5793 | 3.0 | 11031 | 1.0735 | 0.6331 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
christyli/vit-base-beans | christyli | 2022-10-28T21:59:17Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-28T21:55:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3930
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0349 | 1.0 | 17 | 0.8167 | 0.9323 |
| 0.7502 | 2.0 | 34 | 0.6188 | 0.9699 |
| 0.5508 | 3.0 | 51 | 0.4856 | 0.9774 |
| 0.4956 | 4.0 | 68 | 0.4109 | 0.9774 |
| 0.4261 | 5.0 | 85 | 0.3930 | 0.9774 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu102
- Tokenizers 0.12.1
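As no usage snippet is provided, here is a minimal inference sketch; the image path is a placeholder and the class names depend on the (unspecified) training dataset.
```python
from transformers import pipeline

# Sketch: classify a leaf photo; "bean_leaf.jpg" is a placeholder path.
classifier = pipeline("image-classification", model="christyli/vit-base-beans")
print(classifier("bean_leaf.jpg"))
```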
|
sd-concepts-library/anime-background-style-v2 | sd-concepts-library | 2022-10-28T19:56:39Z | 0 | 24 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-28T19:45:11Z | ---
license: mit
---
### Anime Background style (v2) on Stable Diffusion
This is the `<anime-background-style-v2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:














Here are images generated with this style:



 |
hsuvaskakoty/bart_def_gen_40k | hsuvaskakoty | 2022-10-28T19:18:37Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-26T17:53:02Z | This is a fine-tuned BART model for definition generation. It is still at the prototype stage, fine-tuned with only 40k training instances of (definition, context) pairs for 3 epochs. The eval_loss is still around 2.30. The beam size is 4.
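A minimal usage sketch is shown below. The exact input format the model expects (how the term and its context should be combined in the prompt) is not documented here, so the prompt layout is an assumption; `num_beams=4` matches the beam size stated above.
```python
from transformers import pipeline

# Hypothetical prompt format: pass the context containing the term to be defined.
definer = pipeline("text2text-generation", model="hsuvaskakoty/bart_def_gen_40k")
context = "A transformer is a neural network architecture that relies on self-attention."
output = definer(context, num_beams=4, max_length=64)
print(output[0]["generated_text"])
```
 |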
ViktorDo/SciBERT-POWO_Lifecycle_Finetuned | ViktorDo | 2022-10-28T19:12:38Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-28T18:06:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Lifecycle_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-POWO_Lifecycle_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0899 | 1.0 | 1704 | 0.0795 |
| 0.0845 | 2.0 | 3408 | 0.0836 |
| 0.0684 | 3.0 | 5112 | 0.0812 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
leslyarun/grammatical-error-correction-quantized | leslyarun | 2022-10-28T17:55:05Z | 14 | 1 | transformers | [
"transformers",
"onnx",
"t5",
"text2text-generation",
"grammar",
"en",
"dataset:leslyarun/c4_200m_gec_train100k_test25k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-28T13:10:29Z | ---
language: en
tags:
- grammar
- text2text-generation
datasets:
- leslyarun/c4_200m_gec_train100k_test25k
---
# Get grammatical corrections for your English text, trained on a subset of the C4-200M dataset (ONNX quantized model)
## Use the code below to run the model
``` python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from optimum.pipelines import pipeline
tokenizer = AutoTokenizer.from_pretrained("leslyarun/grammatical-error-correction-quantized")
model = ORTModelForSeq2SeqLM.from_pretrained("leslyarun/grammatical-error-correction-quantized",
encoder_file_name="encoder_model_quantized.onnx",
decoder_file_name="decoder_model_quantized.onnx",
decoder_with_past_file_name="decoder_with_past_model_quantized.onnx")
text2text_generator = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
sentence = "He are looking at sky."  # example input: any English sentence to be corrected
output = text2text_generator("grammar: " + sentence)
print(output[0]["generated_text"])
```
|
ybelkada/switch-base-8-xsum | ybelkada | 2022-10-28T17:54:45Z | 12 | 3 | transformers | [
"transformers",
"pytorch",
"switch_transformers",
"text2text-generation",
"en",
"dataset:c4",
"dataset:xsum",
"arxiv:2101.03961",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-28T13:29:07Z | ---
language:
- en
tags:
- text2text-generation
widget:
- text: "summarize: Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital. Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well. Therefore, Peter stayed with her at the hospital for 3 days without leaving."
example_title: "Summarization"
datasets:
- c4
- xsum
license: apache-2.0
---
# Model Card for Switch Transformers Base - 8 experts

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the feed-forward layers replaced by sparse MLP layers containing "expert" MLPs, only one of which is activated per token. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract :
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
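To make the routing idea described above concrete, below is a deliberately simplified PyTorch sketch of top-1 ("switch") routing. It is illustrative only and does not reproduce the actual `SwitchTransformers` implementation, which adds expert capacity limits, load-balancing losses and other details.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFeedForward(nn.Module):
    """Illustrative top-1 routing over expert MLPs -- not the real implementation."""
    def __init__(self, d_model=768, d_ff=2048, num_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                          # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)  # routing probabilities per token
        gate, expert_idx = probs.max(dim=-1)       # each token picks its top-1 expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out * gate.unsqueeze(-1)            # scale by the router probability

tokens = torch.randn(10, 768)
print(SwitchFeedForward()(tokens).shape)           # torch.Size([10, 768])
```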
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers)
# Usage
Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing)
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto", torch_dtype=torch.float16)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersConditionalGeneration.from_pretrained("google/switch-base-8", device_map="auto", load_in_8bit=True)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Ethical considerations and risks
More information needed.
## Known Limitations
More information needed.
## Sensitive Use:
> SwitchTransformers should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a Masked Language Modeling task on the Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained SwitchTransformers and are not fine-tuned. It is normal if they perform well on zero-shot tasks.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).
## Results
For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
doi = {10.48550/ARXIV.2101.03961},
url = {https://arxiv.org/abs/2101.03961},
author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
ivanzidov/setfit-occupation | ivanzidov | 2022-10-28T17:48:19Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-28T11:39:19Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ivanzidov/setfit-occupation
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ivanzidov/setfit-occupation')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ivanzidov/setfit-occupation')
model = AutoModel.from_pretrained('ivanzidov/setfit-occupation')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ivanzidov/setfit-occupation)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 125000 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 125000,
"warmup_steps": 12500,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/vacantbyron | huggingtweets | 2022-10-28T17:17:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-28T17:16:32Z | ---
language: en
thumbnail: http://www.huggingtweets.com/vacantbyron/1666977471179/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510573556157095938/U0_Wyszj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Booyahncé</div>
<div style="text-align: center; font-size: 14px;">@vacantbyron</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Booyahncé.
| Data | Booyahncé |
| --- | --- |
| Tweets downloaded | 640 |
| Retweets | 358 |
| Short tweets | 53 |
| Tweets kept | 229 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ldzye8kh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vacantbyron's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yw5vo7g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yw5vo7g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vacantbyron')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
osanseviero/llamas-alpacas-camellos-platzi | osanseviero | 2022-10-28T16:09:09Z | 66 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-28T16:08:57Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: llamas-alpacas-camellos-platzi
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.641791045665741
---
# llamas-alpacas-camellos-platzi
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
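To try the classifier locally, here is a minimal inference sketch using the standard `transformers` image-classification pipeline (the image path below is only a placeholder):
```python
from transformers import pipeline

# ViT-based classifier trained with HuggingPics
classifier = pipeline("image-classification",
                      model="osanseviero/llamas-alpacas-camellos-platzi")

# Pass a local file path or an image URL (placeholder path shown here)
predictions = classifier("ejemplo_alpaca.jpg")
print(predictions)
```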
## Example Images
#### alpaca

#### camello

#### llama
 |
tlttl/tluo_xml_roberta_base_amazon_review_sentiment | tlttl | 2022-10-28T15:51:48Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-28T07:26:12Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tluo_xml_roberta_base_amazon_review_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tluo_xml_roberta_base_amazon_review_sentiment
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9552
- Accuracy: 0.6003
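A minimal inference sketch (not part of the auto-generated card; the label mapping is not documented here, so predictions may come back as generic `LABEL_0`–`LABEL_4` identifiers):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="tlttl/tluo_xml_roberta_base_amazon_review_sentiment")

# XLM-RoBERTa is multilingual, so non-English reviews can be scored as well
print(classifier("The product arrived late and the packaging was damaged."))
```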
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5664 | 0.33 | 5000 | 1.3816 | 0.5688 |
| 0.9494 | 0.67 | 10000 | 0.9702 | 0.5852 |
| 0.9613 | 1.0 | 15000 | 0.9545 | 0.5917 |
| 0.8611 | 1.33 | 20000 | 0.9689 | 0.5953 |
| 0.8636 | 1.67 | 25000 | 0.9556 | 0.5943 |
| 0.8582 | 2.0 | 30000 | 0.9552 | 0.6003 |
| 0.7555 | 2.33 | 35000 | 1.0001 | 0.5928 |
| 0.7374 | 2.67 | 40000 | 1.0037 | 0.594 |
| 0.733 | 3.0 | 45000 | 0.9976 | 0.5983 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/ike_eveland | huggingtweets | 2022-10-28T15:32:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-28T15:28:47Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ike_eveland/1666971105525/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471628101323038729/JoncxUuW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ike Eveland🖋️NIJISANJI EN</div>
<div style="text-align: center; font-size: 14px;">@ike_eveland</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ike Eveland🖋️NIJISANJI EN.
| Data | Ike Eveland🖋️NIJISANJI EN |
| --- | --- |
| Tweets downloaded | 3228 |
| Retweets | 1734 |
| Short tweets | 417 |
| Tweets kept | 1077 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3b3693t3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ike_eveland's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mraqvjt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mraqvjt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ike_eveland')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ajankelo/pklot_small_model | ajankelo | 2022-10-28T14:32:23Z | 0 | 0 | null | [
"PyTorch",
"vfnet",
"icevision",
"en",
"license:mit",
"region:us"
]
| null | 2022-10-27T21:11:41Z | ---
language: en
license: mit
tags:
- PyTorch
- vfnet
- icevision
---
# Small PKLot
This model is trained on a subset of the PKLot dataset, first introduced in [this paper](https://www.inf.ufpr.br/lesoliveira/download/ESWA2015.pdf). The subset comprises 50 fully annotated images for training.
## Citation for original dataset
Almeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015.
|
gokul-g-menon/xls-r_fine_tuned | gokul-g-menon | 2022-10-28T13:01:13Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-26T16:47:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r_fine_tuned
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
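A minimal transcription sketch (an assumption-heavy example: it presumes the processor/tokenizer was uploaded with the checkpoint, and the audio file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="gokul-g-menon/xls-r_fine_tuned")

# Placeholder audio file; 16 kHz mono WAV is the usual input for wav2vec2 models
print(asr("sample.wav")["text"])
```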
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Rocketknight1/temp_upload_test | Rocketknight1 | 2022-10-28T12:29:16Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-28T12:28:55Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/temp_upload_test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp_upload_test
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6858
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.6858 | 0 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
sergiocannata/dit-base-finetuned-brs | sergiocannata | 2022-10-28T10:24:35Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-26T13:46:45Z | ---
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: dit-base-finetuned-brs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8823529411764706
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-finetuned-brs
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8748
- Accuracy: 0.8824
- F1: 0.8571
- Precision (ppv): 0.8571
- Recall (sensitivity): 0.8571
- Specificity: 0.9
- Npv: 0.9
- Auc: 0.8786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (ppv) | Recall (sensitivity) | Specificity | Npv | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------------:|:--------------------:|:-----------:|:------:|:------:|
| 0.6624 | 6.25 | 100 | 0.5548 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.5201 | 12.49 | 200 | 0.4617 | 0.8824 | 0.8571 | 0.8571 | 0.8571 | 0.9 | 0.9 | 0.8786 |
| 0.5172 | 18.74 | 300 | 0.4249 | 0.8235 | 0.8000 | 0.75 | 0.8571 | 0.8 | 0.8889 | 0.8286 |
| 0.4605 | 24.98 | 400 | 0.3172 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.4894 | 31.25 | 500 | 0.4466 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.3694 | 37.49 | 600 | 0.5077 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.6172 | 43.74 | 700 | 0.5722 | 0.7647 | 0.7143 | 0.7143 | 0.7143 | 0.8 | 0.8 | 0.7571 |
| 0.3671 | 49.98 | 800 | 0.7006 | 0.7647 | 0.6667 | 0.8 | 0.5714 | 0.9 | 0.75 | 0.7357 |
| 0.4109 | 56.25 | 900 | 0.4410 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.3198 | 62.49 | 1000 | 0.7226 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.4283 | 68.74 | 1100 | 0.8089 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.3273 | 74.98 | 1200 | 0.9059 | 0.7647 | 0.6667 | 0.8 | 0.5714 | 0.9 | 0.75 | 0.7357 |
| 0.3237 | 81.25 | 1300 | 0.8520 | 0.8235 | 0.7692 | 0.8333 | 0.7143 | 0.9 | 0.8182 | 0.8071 |
| 0.2014 | 87.49 | 1400 | 0.9183 | 0.7647 | 0.6667 | 0.8 | 0.5714 | 0.9 | 0.75 | 0.7357 |
| 0.3204 | 93.74 | 1500 | 0.6769 | 0.8824 | 0.8571 | 0.8571 | 0.8571 | 0.9 | 0.9 | 0.8786 |
| 0.1786 | 99.98 | 1600 | 0.8748 | 0.8824 | 0.8571 | 0.8571 | 0.8571 | 0.9 | 0.9 | 0.8786 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
caskcsg/cotmae_base_uncased | caskcsg | 2022-10-28T08:55:17Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"arxiv:2208.07670",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-28T08:02:10Z | ---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- transformers
---
# CoT-MAE base uncased
CoT-MAE is a transformers-based Mask Auto-Encoder pretraining architecture designed for Dense Passage Retrieval.
**CoT-MAE base uncased** is a general pre-trained language model trained on the unsupervised MS-Marco corpus.
Details can be found in our paper and code.
Paper: [ConTextual Mask Auto-Encoder for Dense Passage Retrieval](https://arxiv.org/abs/2208.07670).
Code: [caskcsg/ir/cotmae](https://github.com/caskcsg/ir/tree/main/cotmae)
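A minimal loading sketch (not from the original authors): the tags indicate a standard BERT checkpoint, so it can be loaded with `AutoModel`; the `[CLS]` pooling below is a common convention and not necessarily the exact pooling used in the paper.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("caskcsg/cotmae_base_uncased")
model = AutoModel.from_pretrained("caskcsg/cotmae_base_uncased")

passage = "Dense passage retrieval encodes queries and passages as vectors."
inputs = tokenizer(passage, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] vector as a passage embedding (assumed convention)
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)
```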
## Citations
If you find our work useful, please cite our paper.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2208.07670,
doi = {10.48550/ARXIV.2208.07670},
url = {https://arxiv.org/abs/2208.07670},
author = {Wu, Xing and Ma, Guangyuan and Lin, Meng and Lin, Zijia and Wang, Zhongyuan and Hu, Songlin},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {ConTextual Mask Auto-Encoder for Dense Passage Retrieval},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
caskcsg/cotmae_base_msmarco_reranker | caskcsg | 2022-10-28T08:20:41Z | 101 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"feature-extraction",
"sentence-similarity",
"arxiv:2208.07670",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-28T07:56:12Z | ---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- transformers
---
# CoT-MAE MS-Marco Passage Reranker
CoT-MAE is a transformers-based Mask Auto-Encoder pretraining architecture designed for Dense Passage Retrieval.
**CoT-MAE MS-Marco Passage Reranker** is a reranker trained on MS-Marco hard negatives mined with the CoT-MAE retriever, using the [Tevatron](https://github.com/texttron/tevatron) toolkit.
Details can be found in our paper and code.
Paper: [ConTextual Mask Auto-Encoder for Dense Passage Retrieval](https://arxiv.org/abs/2208.07670).
Code: [caskcsg/ir/cotmae](https://github.com/caskcsg/ir/tree/main/cotmae)
## Scores
### MS-Marco Passage full-ranking + top-200 rerank
We first retrieve with the **CoT-MAE MS-Marco Passage Retriever** (named cotmae_base_msmarco_retriever), then use this reranker to re-score the top-200 retrieval results. Performance is as follows.
| MRR@10 | recall@1 | recall@50 | recall@200 | Queries Ranked |
|---------|----------|-----------|------------|----------------|
| 0.43884 | 0.304871 | 0.903582 | 0.956734 | 6980 |
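A minimal scoring sketch (not from the original authors; it assumes the checkpoint was exported with a sequence-classification head, as the `text-classification` tag suggests — the exact input format used during Tevatron training is not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "caskcsg/cotmae_base_msmarco_reranker"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

query = "what is dense passage retrieval"
passage = "Dense passage retrieval encodes queries and passages as vectors."
inputs = tokenizer(query, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Depending on how the head was exported, logits is a single relevance
# score or per-class scores; a higher relevance logit means a better match.
print(logits)
```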
## Citations
If you find our work useful, please cite our paper.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2208.07670,
doi = {10.48550/ARXIV.2208.07670},
url = {https://arxiv.org/abs/2208.07670},
author = {Wu, Xing and Ma, Guangyuan and Lin, Meng and Lin, Zijia and Wang, Zhongyuan and Hu, Songlin},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {ConTextual Mask Auto-Encoder for Dense Passage Retrieval},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
XaviXva/distilbert-base-uncased-finetuned-paws | XaviXva | 2022-10-28T08:14:21Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:pawsx",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-26T09:59:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pawsx
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-paws
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pawsx
type: pawsx
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.8355
- name: F1
type: f1
value: 0.8361579553422098
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-paws
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3850
- Accuracy: 0.8355
- F1: 0.8362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6715 | 1.0 | 772 | 0.5982 | 0.6785 | 0.6799 |
| 0.4278 | 2.0 | 1544 | 0.3850 | 0.8355 | 0.8362 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
roa7n/DNABert_K6_G_quad | roa7n | 2022-10-28T07:57:55Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-27T10:29:54Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DNABert_K6_G_quad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNABert_K6_G_quad
This model is a fine-tuned version of [armheb/DNA_bert_6](https://huggingface.co/armheb/DNA_bert_6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2424
- Accuracy: 0.9737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.0927 | 1.0 | 9375 | 0.0818 | 0.9719 |
| 0.0681 | 2.0 | 18750 | 0.0714 | 0.9756 |
| 0.0607 | 3.0 | 28125 | 0.0863 | 0.9734 |
| 0.055 | 4.0 | 37500 | 0.0787 | 0.9757 |
| 0.0496 | 5.0 | 46875 | 0.0882 | 0.9758 |
| 0.0445 | 6.0 | 56250 | 0.0968 | 0.9752 |
| 0.0391 | 7.0 | 65625 | 0.1024 | 0.9755 |
| 0.0345 | 8.0 | 75000 | 0.1108 | 0.9739 |
| 0.0304 | 9.0 | 84375 | 0.1235 | 0.9745 |
| 0.0261 | 10.0 | 93750 | 0.1348 | 0.9730 |
| 0.023 | 11.0 | 103125 | 0.1427 | 0.9733 |
| 0.0197 | 12.0 | 112500 | 0.1462 | 0.9738 |
| 0.0182 | 13.0 | 121875 | 0.1570 | 0.9730 |
| 0.0145 | 14.0 | 131250 | 0.1757 | 0.9729 |
| 0.0122 | 15.0 | 140625 | 0.1911 | 0.9735 |
| 0.0108 | 16.0 | 150000 | 0.1977 | 0.9736 |
| 0.01 | 17.0 | 159375 | 0.1993 | 0.9732 |
| 0.0083 | 18.0 | 168750 | 0.2172 | 0.9736 |
| 0.0074 | 19.0 | 178125 | 0.2242 | 0.9740 |
| 0.0059 | 20.0 | 187500 | 0.2245 | 0.9732 |
| 0.0058 | 21.0 | 196875 | 0.2306 | 0.9733 |
| 0.0043 | 22.0 | 206250 | 0.2414 | 0.9737 |
| 0.0044 | 23.0 | 215625 | 0.2394 | 0.9735 |
| 0.0039 | 24.0 | 225000 | 0.2420 | 0.9736 |
| 0.0032 | 25.0 | 234375 | 0.2424 | 0.9737 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
tlttl/test-results-concat | tlttl | 2022-10-28T05:24:50Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-28T01:35:45Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-results-concat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-results-concat
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9408
- Accuracy: 0.6012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0408 | 0.33 | 5000 | 0.9773 | 0.5697 |
| 0.9442 | 0.67 | 10000 | 0.9701 | 0.5853 |
| 0.9579 | 1.0 | 15000 | 0.9502 | 0.5895 |
| 0.8867 | 1.33 | 20000 | 0.9467 | 0.5897 |
| 0.8819 | 1.67 | 25000 | 0.9371 | 0.5893 |
| 0.8748 | 2.0 | 30000 | 0.9408 | 0.6012 |
| 0.7759 | 2.33 | 35000 | 0.9734 | 0.5968 |
| 0.7599 | 2.67 | 40000 | 0.9722 | 0.5948 |
| 0.7626 | 3.0 | 45000 | 0.9654 | 0.5975 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bpatwa-shi/bert-finetuned-ner | bpatwa-shi | 2022-10-28T05:22:16Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-28T03:37:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9333113238692637
- name: Recall
type: recall
value: 0.9515314708852238
- name: F1
type: f1
value: 0.9423333333333334
- name: Accuracy
type: accuracy
value: 0.9870636368988049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0587
- Precision: 0.9333
- Recall: 0.9515
- F1: 0.9423
- Accuracy: 0.9871
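For quick inference, a sketch using the standard token-classification pipeline (not part of the auto-generated card):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="bpatwa-shi/bert-finetuned-ner",
               aggregation_strategy="simple")

print(ner("Hugging Face Inc. is based in New York City."))
```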
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.086 | 1.0 | 1756 | 0.0634 | 0.9186 | 0.9364 | 0.9274 | 0.9829 |
| 0.0372 | 2.0 | 3512 | 0.0598 | 0.9328 | 0.9478 | 0.9402 | 0.9860 |
| 0.0217 | 3.0 | 5268 | 0.0587 | 0.9333 | 0.9515 | 0.9423 | 0.9871 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Jak0ff/may | Jak0ff | 2022-10-28T05:06:14Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2022-10-28T05:06:14Z | ---
license: cc-by-nc-sa-4.0
---
|
huggingtweets/shinononetu | huggingtweets | 2022-10-28T04:43:17Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-28T04:42:41Z | ---
language: en
thumbnail: http://www.huggingtweets.com/shinononetu/1666932192965/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381323487499980806/i2qeW2Qi_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Netu</div>
<div style="text-align: center; font-size: 14px;">@shinononetu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Netu.
| Data | Netu |
| --- | --- |
| Tweets downloaded | 1912 |
| Retweets | 627 |
| Short tweets | 453 |
| Tweets kept | 832 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38lbhqc9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shinononetu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tj5n1bk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tj5n1bk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/shinononetu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody_LeftGR_130epochs | Alex-VisTas | 2022-10-28T04:39:21Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-27T13:44:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-woody_LeftGR_130epochs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.904707233065442
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-woody_LeftGR_130epochs
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3377
- Accuracy: 0.9047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 130
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6614 | 1.0 | 61 | 0.6404 | 0.6521 |
| 0.5982 | 2.0 | 122 | 0.5548 | 0.7107 |
| 0.579 | 3.0 | 183 | 0.5390 | 0.7141 |
| 0.5621 | 4.0 | 244 | 0.4920 | 0.7623 |
| 0.5567 | 5.0 | 305 | 0.5375 | 0.7313 |
| 0.5271 | 6.0 | 366 | 0.5542 | 0.7405 |
| 0.5312 | 7.0 | 427 | 0.4573 | 0.7876 |
| 0.5477 | 8.0 | 488 | 0.4540 | 0.7784 |
| 0.5554 | 9.0 | 549 | 0.4932 | 0.7635 |
| 0.5247 | 10.0 | 610 | 0.4407 | 0.7968 |
| 0.5239 | 11.0 | 671 | 0.4479 | 0.7842 |
| 0.5294 | 12.0 | 732 | 0.4509 | 0.7910 |
| 0.531 | 13.0 | 793 | 0.4419 | 0.7933 |
| 0.5493 | 14.0 | 854 | 0.4646 | 0.7784 |
| 0.4934 | 15.0 | 915 | 0.4310 | 0.7968 |
| 0.4965 | 16.0 | 976 | 0.4449 | 0.7876 |
| 0.4946 | 17.0 | 1037 | 0.4342 | 0.8129 |
| 0.4716 | 18.0 | 1098 | 0.4129 | 0.8140 |
| 0.4679 | 19.0 | 1159 | 0.4290 | 0.8002 |
| 0.4799 | 20.0 | 1220 | 0.4356 | 0.7842 |
| 0.4744 | 21.0 | 1281 | 0.4042 | 0.8094 |
| 0.4512 | 22.0 | 1342 | 0.3953 | 0.8117 |
| 0.4633 | 23.0 | 1403 | 0.4157 | 0.7956 |
| 0.4528 | 24.0 | 1464 | 0.3920 | 0.8094 |
| 0.4427 | 25.0 | 1525 | 0.3930 | 0.8220 |
| 0.4238 | 26.0 | 1586 | 0.3891 | 0.8140 |
| 0.4257 | 27.0 | 1647 | 0.3700 | 0.8255 |
| 0.4102 | 28.0 | 1708 | 0.4122 | 0.7968 |
| 0.4505 | 29.0 | 1769 | 0.4210 | 0.7945 |
| 0.3973 | 30.0 | 1830 | 0.3923 | 0.8197 |
| 0.3824 | 31.0 | 1891 | 0.3908 | 0.8473 |
| 0.3887 | 32.0 | 1952 | 0.3897 | 0.8312 |
| 0.3723 | 33.0 | 2013 | 0.3747 | 0.8381 |
| 0.3608 | 34.0 | 2074 | 0.3706 | 0.8301 |
| 0.3718 | 35.0 | 2135 | 0.3937 | 0.8255 |
| 0.3692 | 36.0 | 2196 | 0.3984 | 0.8037 |
| 0.3533 | 37.0 | 2257 | 0.3792 | 0.8335 |
| 0.3625 | 38.0 | 2318 | 0.4070 | 0.8163 |
| 0.3633 | 39.0 | 2379 | 0.4130 | 0.8232 |
| 0.3602 | 40.0 | 2440 | 0.3996 | 0.8186 |
| 0.3557 | 41.0 | 2501 | 0.3756 | 0.8335 |
| 0.3373 | 42.0 | 2562 | 0.3914 | 0.8220 |
| 0.3102 | 43.0 | 2623 | 0.4165 | 0.8507 |
| 0.3135 | 44.0 | 2684 | 0.3852 | 0.8278 |
| 0.3286 | 45.0 | 2745 | 0.4164 | 0.8450 |
| 0.316 | 46.0 | 2806 | 0.3498 | 0.8496 |
| 0.2802 | 47.0 | 2867 | 0.3887 | 0.8462 |
| 0.3184 | 48.0 | 2928 | 0.3829 | 0.8576 |
| 0.2785 | 49.0 | 2989 | 0.3627 | 0.8485 |
| 0.2988 | 50.0 | 3050 | 0.3679 | 0.8370 |
| 0.267 | 51.0 | 3111 | 0.3528 | 0.8645 |
| 0.2907 | 52.0 | 3172 | 0.3538 | 0.8519 |
| 0.2857 | 53.0 | 3233 | 0.3593 | 0.8530 |
| 0.2651 | 54.0 | 3294 | 0.3732 | 0.8439 |
| 0.2447 | 55.0 | 3355 | 0.3441 | 0.8542 |
| 0.2542 | 56.0 | 3416 | 0.3897 | 0.8576 |
| 0.2634 | 57.0 | 3477 | 0.4082 | 0.8657 |
| 0.2505 | 58.0 | 3538 | 0.3416 | 0.8657 |
| 0.2555 | 59.0 | 3599 | 0.3725 | 0.8576 |
| 0.2466 | 60.0 | 3660 | 0.3496 | 0.8680 |
| 0.2585 | 61.0 | 3721 | 0.3214 | 0.8783 |
| 0.235 | 62.0 | 3782 | 0.3584 | 0.8737 |
| 0.215 | 63.0 | 3843 | 0.3467 | 0.8657 |
| 0.236 | 64.0 | 3904 | 0.3471 | 0.8829 |
| 0.2211 | 65.0 | 3965 | 0.3318 | 0.8863 |
| 0.1989 | 66.0 | 4026 | 0.3645 | 0.8852 |
| 0.2133 | 67.0 | 4087 | 0.3456 | 0.8898 |
| 0.2169 | 68.0 | 4148 | 0.3287 | 0.8852 |
| 0.223 | 69.0 | 4209 | 0.3182 | 0.8921 |
| 0.2379 | 70.0 | 4270 | 0.3260 | 0.8840 |
| 0.2149 | 71.0 | 4331 | 0.3230 | 0.8886 |
| 0.2007 | 72.0 | 4392 | 0.3926 | 0.8760 |
| 0.2091 | 73.0 | 4453 | 0.4133 | 0.8783 |
| 0.2229 | 74.0 | 4514 | 0.3867 | 0.8772 |
| 0.1903 | 75.0 | 4575 | 0.3594 | 0.8840 |
| 0.2124 | 76.0 | 4636 | 0.3388 | 0.8875 |
| 0.1999 | 77.0 | 4697 | 0.3305 | 0.8875 |
| 0.2053 | 78.0 | 4758 | 0.4670 | 0.8840 |
| 0.1958 | 79.0 | 4819 | 0.3468 | 0.8909 |
| 0.1839 | 80.0 | 4880 | 0.3902 | 0.8886 |
| 0.1715 | 81.0 | 4941 | 0.3830 | 0.8875 |
| 0.1803 | 82.0 | 5002 | 0.3134 | 0.8967 |
| 0.1803 | 83.0 | 5063 | 0.3935 | 0.8909 |
| 0.1865 | 84.0 | 5124 | 0.3882 | 0.8863 |
| 0.1884 | 85.0 | 5185 | 0.3485 | 0.8990 |
| 0.1663 | 86.0 | 5246 | 0.3667 | 0.8944 |
| 0.1665 | 87.0 | 5307 | 0.3545 | 0.8932 |
| 0.1556 | 88.0 | 5368 | 0.3882 | 0.8944 |
| 0.18 | 89.0 | 5429 | 0.3751 | 0.8898 |
| 0.1974 | 90.0 | 5490 | 0.3979 | 0.8863 |
| 0.1622 | 91.0 | 5551 | 0.3623 | 0.8967 |
| 0.1657 | 92.0 | 5612 | 0.3855 | 0.8978 |
| 0.1672 | 93.0 | 5673 | 0.3722 | 0.8944 |
| 0.1807 | 94.0 | 5734 | 0.3994 | 0.8932 |
| 0.1419 | 95.0 | 5795 | 0.4017 | 0.8863 |
| 0.178 | 96.0 | 5856 | 0.4168 | 0.8886 |
| 0.1402 | 97.0 | 5917 | 0.3727 | 0.8944 |
| 0.1427 | 98.0 | 5978 | 0.3919 | 0.8967 |
| 0.1318 | 99.0 | 6039 | 0.3843 | 0.8955 |
| 0.1417 | 100.0 | 6100 | 0.4017 | 0.8898 |
| 0.1536 | 101.0 | 6161 | 0.3613 | 0.8955 |
| 0.1631 | 102.0 | 6222 | 0.3377 | 0.9047 |
| 0.1459 | 103.0 | 6283 | 0.3724 | 0.8967 |
| 0.1499 | 104.0 | 6344 | 0.3934 | 0.8955 |
| 0.1572 | 105.0 | 6405 | 0.3368 | 0.8967 |
| 0.1308 | 106.0 | 6466 | 0.3782 | 0.8990 |
| 0.1535 | 107.0 | 6527 | 0.3306 | 0.9024 |
| 0.125 | 108.0 | 6588 | 0.4076 | 0.8898 |
| 0.1339 | 109.0 | 6649 | 0.3628 | 0.8990 |
| 0.148 | 110.0 | 6710 | 0.3672 | 0.9013 |
| 0.1725 | 111.0 | 6771 | 0.4006 | 0.8909 |
| 0.1326 | 112.0 | 6832 | 0.4117 | 0.8921 |
| 0.1438 | 113.0 | 6893 | 0.3927 | 0.8978 |
| 0.1205 | 114.0 | 6954 | 0.3612 | 0.8990 |
| 0.1531 | 115.0 | 7015 | 0.3594 | 0.8932 |
| 0.1473 | 116.0 | 7076 | 0.4490 | 0.8875 |
| 0.1388 | 117.0 | 7137 | 0.3952 | 0.8921 |
| 0.136 | 118.0 | 7198 | 0.4098 | 0.8921 |
| 0.1579 | 119.0 | 7259 | 0.3595 | 0.9013 |
| 0.1359 | 120.0 | 7320 | 0.3970 | 0.8944 |
| 0.1314 | 121.0 | 7381 | 0.4092 | 0.8932 |
| 0.1337 | 122.0 | 7442 | 0.4192 | 0.8909 |
| 0.1538 | 123.0 | 7503 | 0.4154 | 0.8898 |
| 0.119 | 124.0 | 7564 | 0.4120 | 0.8909 |
| 0.1353 | 125.0 | 7625 | 0.4060 | 0.8921 |
| 0.1489 | 126.0 | 7686 | 0.4162 | 0.8909 |
| 0.1554 | 127.0 | 7747 | 0.4148 | 0.8944 |
| 0.1558 | 128.0 | 7808 | 0.4169 | 0.8944 |
| 0.1268 | 129.0 | 7869 | 0.4110 | 0.8955 |
| 0.1236 | 130.0 | 7930 | 0.4197 | 0.8944 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
agungbesti/house | agungbesti | 2022-10-28T02:59:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-10-28T02:53:02Z | ---
title: Protas
emoji: 🏃
colorFrom: yellow
colorTo: pink
sdk: gradio
app_file: app.py
pinned: false
license: apache-2.0
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list. |
huggingtweets/missalykatt | huggingtweets | 2022-10-28T02:37:20Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-28T02:34:18Z | ---
language: en
thumbnail: http://www.huggingtweets.com/missalykatt/1666924619450/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1556386443752222720/Fzb-hZ4Q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MissAlyKatt 🏳️🌈♀️</div>
<div style="text-align: center; font-size: 14px;">@missalykatt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MissAlyKatt 🏳️🌈♀️.
| Data | MissAlyKatt 🏳️🌈♀️ |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 361 |
| Short tweets | 757 |
| Tweets kept | 2099 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yaoalt1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @missalykatt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2uetdofk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2uetdofk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/missalykatt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
helloway/simple | helloway | 2022-10-28T02:00:19Z | 0 | 0 | null | [
"audio-classification",
"license:apache-2.0",
"region:us"
]
| audio-classification | 2022-10-28T01:51:37Z | ---
license: apache-2.0
tags:
- audio-classification
---
|
Sunny5353/distilbert-base-uncased-finetuned-imdb | Sunny5353 | 2022-10-28T01:40:18Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-28T01:29:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6627
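A quick way to poke at the model is the fill-mask pipeline (a sketch, not part of the auto-generated card; `[MASK]` is the mask token for uncased DistilBERT):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask",
                     model="Sunny5353/distilbert-base-uncased-finetuned-imdb")

for pred in fill_mask("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```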
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.76 | 1.0 | 157 | 0.6640 |
| 0.688 | 2.0 | 314 | 0.6581 |
| 0.6768 | 3.0 | 471 | 0.6604 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Kolgrima/Luna | Kolgrima | 2022-10-28T01:39:20Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2022-10-27T23:48:49Z | ---
license: openrail
---
## Model of Evanna Lynch as Luna Lovegood
If you've ever tried to create an image of Luna Lovegood from the movies, you'll have noticed Stable Diffusion is not good at this! That's where this model comes in.
This has been trained on 38 images of Evanna Lynch as Luna Lovegood.
## Usage
Simply use the keyword "**Luna**" anywhere in your prompt.
### Output Examples
Each image has embedded data that can be read from the PNG info tab in Stable diffusion Web UI.









 |
skang/distilbert-base-uncased-finetuned-imdb | skang | 2022-10-28T01:38:56Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-28T01:30:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.76 | 1.0 | 157 | 0.6640 |
| 0.688 | 2.0 | 314 | 0.6581 |
| 0.6768 | 3.0 | 471 | 0.6604 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
dcae10/distilbert-base-uncased-finetuned-imdb | dcae10 | 2022-10-28T01:38:21Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-28T01:29:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.76 | 1.0 | 157 | 0.6640 |
| 0.688 | 2.0 | 314 | 0.6581 |
| 0.6768 | 3.0 | 471 | 0.6604 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/revmaxxing | huggingtweets | 2022-10-28T01:23:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-27T23:49:45Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1578729528695963649/mmiLKGp1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rev 🇷🇺 🌾 🛞</div>
<div style="text-align: center; font-size: 14px;">@revmaxxing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rev 🇷🇺 🌾 🛞.
| Data | Rev 🇷🇺 🌾 🛞 |
| --- | --- |
| Tweets downloaded | 3097 |
| Retweets | 241 |
| Short tweets | 416 |
| Tweets kept | 2440 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nfmh3no/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @revmaxxing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zust2rmi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zust2rmi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/revmaxxing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TingChenChang/t5-end2end-questions-generation | TingChenChang | 2022-10-28T00:36:02Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-27T14:37:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [TingChenChang/t5-end2end-questions-generation](https://huggingface.co/TingChenChang/t5-end2end-questions-generation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5291
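A minimal generation sketch (not part of the auto-generated card; the `generate questions:` prefix and `<sep>`-separated outputs are the usual convention for end-to-end question generation and are an assumption here, since the expected input format is not documented):
```python
from transformers import pipeline

qg = pipeline("text2text-generation",
              model="TingChenChang/t5-end2end-questions-generation")

context = ("generate questions: The Amazon rainforest covers much of the "
           "Amazon basin of South America and hosts an enormous diversity "
           "of plant and animal species.")

output = qg(context, max_length=128)
print(output[0]["generated_text"])  # questions, typically separated by <sep>
```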
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5711 | 0.4 | 100 | 1.6119 |
| 1.5353 | 0.8 | 200 | 1.6052 |
| 1.502 | 1.2 | 300 | 1.6082 |
| 1.4525 | 1.6 | 400 | 1.5918 |
| 1.4463 | 2.0 | 500 | 1.5847 |
| 1.3885 | 2.4 | 600 | 1.6236 |
| 1.4029 | 2.8 | 700 | 1.5962 |
| 1.3947 | 3.2 | 800 | 1.5932 |
| 1.3685 | 3.6 | 900 | 1.5898 |
| 1.3926 | 4.0 | 1000 | 1.5624 |
| 1.4666 | 4.4 | 1100 | 1.5535 |
| 1.4573 | 4.8 | 1200 | 1.5483 |
| 1.4342 | 5.2 | 1300 | 1.5449 |
| 1.4281 | 5.6 | 1400 | 1.5347 |
| 1.4031 | 6.0 | 1500 | 1.5456 |
| 1.375 | 6.4 | 1600 | 1.5375 |
| 1.3867 | 6.8 | 1700 | 1.5393 |
| 1.3763 | 7.2 | 1800 | 1.5401 |
| 1.357 | 7.6 | 1900 | 1.5361 |
| 1.3568 | 8.0 | 2000 | 1.5295 |
| 1.3503 | 8.4 | 2100 | 1.5377 |
| 1.3335 | 8.8 | 2200 | 1.5353 |
| 1.3416 | 9.2 | 2300 | 1.5288 |
| 1.3179 | 9.6 | 2400 | 1.5324 |
| 1.3276 | 10.0 | 2500 | 1.5291 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.12.1
|
caffsean/bert-base-cased-deep-ritmo | caffsean | 2022-10-28T00:17:00Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-27T03:19:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-deep-ritmo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-deep-ritmo
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5837
## Model description
More information needed
## Intended uses & limitations
More information needed
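Until the card is filled in, a minimal fill-mask sketch (the example sentence is arbitrary):

```python
from transformers import pipeline

# The model was fine-tuned with a masked-language-modelling objective, so the
# fill-mask pipeline is the most direct way to probe it.
fill = pipeline("fill-mask", model="caffsean/bert-base-cased-deep-ritmo")

for prediction in fill("The rhythm of the [MASK] carried through the night."):
    print(prediction["token_str"], round(prediction["score"], 3))
```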
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0463 | 1.0 | 1875 | 3.7428 |
| 3.3393 | 2.0 | 3750 | 3.0259 |
| 2.7435 | 3.0 | 5625 | 2.5837 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
allenai/scirepeval_adapters_qry | allenai | 2022-10-28T00:06:24Z | 12 | 1 | adapter-transformers | [
"adapter-transformers",
"adapterhub:scirepeval/adhoc_search",
"bert",
"dataset:allenai/scirepeval",
"region:us"
]
| null | 2022-10-28T00:06:13Z | ---
tags:
- adapterhub:scirepeval/adhoc_search
- adapter-transformers
- bert
datasets:
- allenai/scirepeval
---
# Adapter `allenai/scirepeval_adapters_qry` for malteos/scincl
An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/adhoc_search](https://adapterhub.ml/explore/scirepeval/adhoc_search/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("malteos/scincl")
adapter_name = model.load_adapter("allenai/scirepeval_adapters_qry", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
OpenMatch/condenser-large | OpenMatch | 2022-10-28T00:04:23Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-27T23:44:05Z | ---
license: mit
---
This model has been pretrained on BookCorpus and English Wikipedia following the approach described in the paper **Condenser: a Pre-training Architecture for Dense Retrieval**. The model can be used to reproduce the experimental results within the GitHub repository https://github.com/OpenMatch/COCO-DR.
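A minimal encoding sketch, treating the checkpoint as a plain BERT encoder and taking the [CLS] vector as the sequence representation; that usage pattern is an assumption based on how Condenser checkpoints are commonly consumed for dense retrieval:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/condenser-large")
model = AutoModel.from_pretrained("OpenMatch/condenser-large")

inputs = tokenizer("what is dense retrieval?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] token as the text embedding
print(cls_embedding.shape)
```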
The model uses BERT-large as its backbone and has 335M parameters. |
allenai/scirepeval_adapters_clf | allenai | 2022-10-28T00:03:35Z | 14 | 0 | adapter-transformers | [
"adapter-transformers",
"adapterhub:scirepeval/classification",
"bert",
"dataset:allenai/scirepeval",
"region:us"
]
| null | 2022-10-28T00:03:26Z | ---
tags:
- adapterhub:scirepeval/classification
- adapter-transformers
- bert
datasets:
- allenai/scirepeval
---
# Adapter `allenai/scirepeval_adapters_clf` for malteos/scincl
An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/classification](https://adapterhub.ml/explore/scirepeval/classification/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("malteos/scincl")
adapter_name = model.load_adapter("allenai/scirepeval_adapters_clf", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
rajistics/setfit-model | rajistics | 2022-10-27T23:47:04Z | 2 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-27T23:46:48Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rajistics/setfit-model
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rajistics/setfit-model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rajistics/setfit-model')
model = AutoModel.from_pretrained('rajistics/setfit-model')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rajistics/setfit-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/sadieyay | huggingtweets | 2022-10-27T23:42:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-27T23:21:37Z | ---
language: en
thumbnail: http://www.huggingtweets.com/sadieyay/1666914122057/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509399260441292800/yttWeCzW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sadie</div>
<div style="text-align: center; font-size: 14px;">@sadieyay</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sadie.
| Data | sadie |
| --- | --- |
| Tweets downloaded | 636 |
| Retweets | 38 |
| Short tweets | 97 |
| Tweets kept | 501 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2reqej16/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sadieyay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/usyd3rqz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/usyd3rqz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sadieyay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
andrewzhang505/lunar_lander_example | andrewzhang505 | 2022-10-27T22:35:12Z | 5 | 0 | sample-factory | [
"sample-factory",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-27T22:29:42Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- metrics:
- type: mean_reward
value: 93.18 +/- 76.95
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
---
An **APPO** model trained on the **LunarLanderContinuous-v2** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
wavymulder/zelda-diffusion-HN | wavymulder | 2022-10-27T21:32:27Z | 0 | 18 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-25T01:06:42Z | ---
license: creativeml-openrail-m
---
**Zelda Diffusion - Hypernet**
[*DOWNLOAD LINK*](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/zeldaBOTW.pt) - This is a hypernet trained on screenshots of Princess Zelda from BOTW

Here's a random batch of 9 images to show the hypernet uncherrypicked. The prompt is "anime princess zelda volumetric lighting" and the negative prompt is "cel render 3d animation"

and [a link to more](https://i.imgur.com/NixQGid.jpg)
---
Tips:
You'll want to adjust the hypernetwork strength depending on what style you're trying to put Zelda into. I usually keep it at 80% strength and go from there.
This hypernetwork helps make Zelda look more like the BOTW Zelda. You still have to prompt for what you want. Extra weight might sometimes need to be applied to get her to wear costumes. You may also have luck putting her name closer to the end of the prompt than you normally would.
Since the hypernetwork is trained on screenshots from the videogame, it imparts a heavy Cel Shading effect [(Example here)](https://huggingface.co/wavymulder/zelda-diffusion-HN/resolve/main/00108-920950.png). You can minimize this by negative prompting "cel". I believe every example posted here uses this.
The hypernet can be used either with very simple prompting, as shown above, or with a prompt of your favourite artists.

You can put this hypernet on top of different models to create some really cool Zeldas, such as this one made with [Nitrosocke](https://huggingface.co/nitrosocke)'s [Modern Disney Model](https://huggingface.co/nitrosocke/modern-disney-diffusion).

|
Aadarsh/bert-finetuned-ner | Aadarsh | 2022-10-27T21:31:02Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-26T22:08:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1429
- Precision: 0.4954
- Recall: 0.6136
- F1: 0.5482
- Accuracy: 0.9642
## Model description
More information needed
## Intended uses & limitations
More information needed
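Pending proper documentation, a minimal inference sketch; note that the label set depends on the undocumented fine-tuning dataset:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline("token-classification",
               model="Aadarsh/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```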
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 141 | 0.2894 | 0.4649 | 0.3258 | 0.3831 | 0.9219 |
| No log | 2.0 | 282 | 0.1767 | 0.4706 | 0.4545 | 0.4624 | 0.9487 |
| No log | 3.0 | 423 | 0.1429 | 0.4954 | 0.6136 | 0.5482 | 0.9642 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
marceloprates/opus-mt-en-ro-finetuned-en-to-ro | marceloprates | 2022-10-27T21:22:15Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-27T21:06:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4457
- Bleu: 0.0
- Gen Len: 8.0045
## Model description
More information needed
## Intended uses & limitations
More information needed
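A minimal translation sketch; given the BLEU of 0.0 reported above, treat outputs as illustrative only:

```python
from transformers import pipeline

# English-to-Romanian translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="marceloprates/opus-mt-en-ro-finetuned-en-to-ro")
print(translator("The weather is nice today."))
```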
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 2.5302 | 1.0 | 1863 | 2.4457 | 0.0 | 8.0045 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ViktorDo/SciBERT-POWO_Epiphyte_Finetuned | ViktorDo | 2022-10-27T21:10:45Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-27T19:53:27Z | ---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-POWO_Epiphyte_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0898
## Model description
More information needed
## Intended uses & limitations
More information needed
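A minimal classification sketch; the label names and the exact task (presumably flagging the epiphyte growth form in plant descriptions) are assumptions, since the card does not document them:

```python
from transformers import pipeline

# Labels will appear as LABEL_0 / LABEL_1 unless the model config maps them to names.
classifier = pipeline("text-classification", model="ViktorDo/SciBERT-POWO_Epiphyte_Finetuned")
print(classifier("An epiphytic orchid growing on the trunks of trees in montane forest."))
```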
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0909 | 1.0 | 2063 | 0.0860 |
| 0.0763 | 2.0 | 4126 | 0.1000 |
| 0.0627 | 3.0 | 6189 | 0.0898 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Phantasion/phaninc | Phantasion | 2022-10-27T21:03:33Z | 0 | 1 | null | [
"region:us"
]
| null | 2022-10-27T20:18:49Z | 
Phaninc is a model based on my cyberpunk tumblr blog phantasyinc. One thing that has frustrated me with AI art is the generic quality of prompting for cyberpunk imagery, so I went through my blog and curated a dataset for 25 new keywords to get the results I desire. I have been heavily inspired by the work of nousr on robodiffusion, whose model gave me a lot of results I love.
I have utilised the new FAST dreambooth method and run it for 20000 steps on 684 images (around 800 steps per concept). At the time of writing the model is still training, so I am using the training time to summarise my intent with each keyword. I expect there to be problems and some of my experiments not to pan out so well, but I thought I would share.
*Post training update: the entire model is contaminated, most prompts are gonna churn out cyberpunk work, but the keywords are still good against one another and work as desired, and the base model has had some interesting lessons taught to it.*
**phanborg**
This set was the first to be tested; it is a combination of portraits of cyborgs much like phancyborg and phandroid. The difference between the three is that phanborg uses a combination of images with the face covered and uncovered by machinery, while phancyborg uses only uncovered cyborgs and phandroid only covered cyborgs. The images used in all three are entirely different so that I can play with a diversity of trained features across my keywords.
**phanbrutal**
Images I consider a combination of cyberpunk and brutalism.
**phanbw**
This is one of my more experimental keywords, utilising monochrome cyberpunk images I find quite striking in black and white. However, apart from sticking to a cyberpunk theme, there is no consistent subject matter, so it may just end up being a generic monochrome keyword.
**phancircle**
Another experimental keyword: it utilises a selection of architectural, textural and 3D design images with circles and spheres as a recurring motif. My hope is that this keyword will help provide a cyberpunk texture to other prompts with a circular motif.
**phancity**
Bleak futuristic cityscapes, but like phanbw this experiment may fail due to overly varied subject matter.
**phanconcrete**
Concrete: images of architecture with mostly concrete finishes. It might be overkill alongside phanbrutal above, but I like that there will still be nuanced differences to play with.
**phanconsole**
A command centre needs buttons to beep and switches to boop; this keyword is all about screens and buttons.
**phancorridor**
Images of spaceship corridors and facilities to provide a more futuristic interior design.
**phancyborg**
phancyborg is an image selection of cyborgs with some or all of a human face uncovered.
**phandraw**
A selection focused on drawn cyberpunk artwork with bright neon colors and defined linework.
**phandroid**
This is where I pay most homage to nousr's robodiffusion, using only cyborgs with their faces concealed or just plain humanoid robots.
**phandustrial**
Futuristic industrial imagery of pipes, wires and messes of cables.
**phanfashion**
Trying to get that urbanwear hoodie look, but with some variations.
**phanfem**
A series of cyberpunk women.
**phanglitch**
Glitch art I had reblogged on the blog with a cyberpunk feel. Quite colorful.
**phangrunge**
Dilapidated dens for the scum of the city. Hopefully it will add a good dose of urban decay to your prompt.
**phanlogo**
Sleek graphic design, typography and logos.
**phanmachine**
Built with unclear subject matter, phanmachine focuses on the details of futuristic shiny machinery in hopes of it coming out as a style or texture that can be applied in prompts.
**phanmecha**
The three cyborg keywords are sleek and humanoid; phanmecha focuses more on creating unique robot body types.
**phanmilitary**
Future soldiers, man and machine. Likely to attach a gun to your prompt's character.
**phanneon**
Bright neon lights taking over the scene. This feature is what annoyed me with a lot of cyberpunk prompts in AI models, so overall I have it pretty isolated to this keyword; use it if you want those futuristic glowies.
**phanrooms**
Totally separate from the rest of the theming, phanrooms is trained on backrooms and liminal-space imagery, which, like cyberpunk, is of high visual interest to me and something the base model can sometimes struggle with.
**phansterile**
This is like cyberpunk cleancore: lots of white, very clean, clinical theming.
**phantex**
I don't know why latex outfits are cyberpunk, but they just are; these images were selected for the accessorising on top of just the latex outfits.
**phanture**
Abstract textures that were cyberpunk enough for me to put on my blog.
|
motmono/ppo-LunarLander-v2 | motmono | 2022-10-27T20:39:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-27T20:39:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.74 +/- 15.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; adjust the filename if yours differs.
checkpoint = load_from_hub(repo_id="motmono/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|