modelId (stringlengths 4-112) | sha (stringlengths 40) | lastModified (stringlengths 24) | tags (sequence) | pipeline_tag (stringclasses, 29 values) | private (bool, 1 class) | author (stringlengths 2-38, ⌀) | config (null) | id (stringlengths 4-112) | downloads (float64 0-36.8M, ⌀) | likes (float64 0-712, ⌀) | library_name (stringclasses, 17 values) | __index_level_0__ (int64 0-38.5k) | readme (stringlengths 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cottonlove/dummy-model | c8eb5a94874fb4c000bab7f6ace0009b1e73f7d0 | 2022-07-20T05:31:33.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cottonlove | null | cottonlove/dummy-model | 2 | null | transformers | 27,600 | Entry not found |
glory20h/jbspeechrec_scz | cbffc4b354a74f0c0739c1843c0017c4eddfc61b | 2022-07-20T08:18:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | glory20h | null | glory20h/jbspeechrec_scz | 2 | null | transformers | 27,601 | Entry not found |
WYHu/cve2cpe_bert | 8b8a981de0909aaee60ae2f897af8eda214495fc | 2022-07-20T09:20:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | WYHu | null | WYHu/cve2cpe_bert | 2 | null | transformers | 27,602 | Entry not found |
chiendvhust/bert-finetuned-squad | e25ec1eaa198a1132786ae62e0f4e5304314a997 | 2022-07-20T13:41:21.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | chiendvhust | null | chiendvhust/bert-finetuned-squad | 2 | null | transformers | 27,603 | Entry not found |
gemasphi/laprador_mmarco | 4eb10c42c89d61bd9b96d94c7a7a44a1eea8e32c | 2022-07-20T11:02:19.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | gemasphi | null | gemasphi/laprador_mmarco | 2 | null | sentence-transformers | 27,604 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador_mmarco
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador_mmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador_mmarco')
model = AutoModel.from_pretrained('gemasphi/laprador_mmarco')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador_mmarco)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6653 with parameters:
```
{'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method (a sketch of the corresponding training call follows this listing):
```
{
"epochs": 10,
"evaluation_steps": 10000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
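Taken together, these settings correspond roughly to the sentence-transformers training call sketched below. This is a reconstruction for illustration only: the training pairs and the starting checkpoint are not given in the card, so `train_examples` and the loaded model name are placeholders.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder: the starting checkpoint used for this run is not stated in the card.
model = SentenceTransformer('gemasphi/laprador_mmarco')

# Placeholder: the actual mMARCO training pairs are not published with this card.
train_examples = [InputExample(texts=["a query", "a relevant passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=75)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    evaluation_steps=10000,
    scheduler='WarmupLinear',
    optimizer_params={'lr': 2e-05, 'eps': 1e-06, 'correct_bias': False},
    weight_decay=0.01,
    max_grad_norm=1,
)
```

MultipleNegativesRankingLoss uses the other passages in a batch as in-batch negatives, which is why the relatively large batch size of 75 matters for this loss.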
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
shila/distilbert-base-uncased-finetuned-squad | 5b3c15c2dc1bbbd1b03a2f6cd290c84d875e3190 | 2022-07-21T09:44:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2_loading_script",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | shila | null | shila/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 27,605 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2_loading_script
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2_loading_script dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9348
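The card does not include inference code; as a minimal sketch, an extractive question-answering checkpoint like this can usually be queried through the `transformers` pipeline (assuming this repository contains both tokenizer and weights; the question and context below are illustrative placeholders):

```python
from transformers import pipeline

# Minimal sketch; the question/context strings are illustrative placeholders.
qa = pipeline("question-answering", model="shila/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased on a SQuAD v2 dataset.",
)
print(result["answer"], result["score"])
```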
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | 5.4661 |
| No log | 2.0 | 30 | 5.0915 |
| No log | 3.0 | 45 | 4.9348 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/kchonyc | 7c9ead179943724cdcacfc3253d0d09b16739b2d | 2022-07-20T18:49:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/kchonyc | 2 | null | transformers | 27,606 | ---
language: en
thumbnail: http://www.huggingtweets.com/kchonyc/1658342940411/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1485997480089108483/yi4s4d5F_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kyunghyun Cho</div>
<div style="text-align: center; font-size: 14px;">@kchonyc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kyunghyun Cho.
| Data | Kyunghyun Cho |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 774 |
| Short tweets | 298 |
| Tweets kept | 2164 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cu6z57w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kchonyc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m6pgno8m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m6pgno8m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kchonyc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
duchung17/vivos-base-cmv | b928d97361763647537609a4a5ad62f0e16646a1 | 2022-07-21T07:16:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | duchung17 | null | duchung17/vivos-base-cmv | 2 | null | transformers | 27,607 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vivos-base-cmv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivos-base-cmv
This model is a fine-tuned version of [duchung17/wav2vec2-base-cmv-featured](https://huggingface.co/duchung17/wav2vec2-base-cmv-featured) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5293
- Wer: 0.3322
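The card does not include inference code; a minimal sketch with the `transformers` ASR pipeline, assuming the checkpoint and its processor load from this repository id and the input is 16 kHz mono audio (the file path is a placeholder):

```python
from transformers import pipeline

# Minimal sketch; "speech.wav" is a placeholder path to a 16 kHz mono recording.
asr = pipeline("automatic-speech-recognition", model="duchung17/vivos-base-cmv")
print(asr("speech.wav")["text"])
```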
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.8653 | 1.25 | 500 | 0.4774 | 0.4997 |
| 0.5745 | 2.49 | 1000 | 0.4670 | 0.4687 |
| 0.4888 | 3.74 | 1500 | 0.4393 | 0.4375 |
| 0.4309 | 4.99 | 2000 | 0.4268 | 0.4179 |
| 0.379 | 6.23 | 2500 | 0.4294 | 0.4074 |
| 0.3491 | 7.48 | 3000 | 0.4398 | 0.3942 |
| 0.3191 | 8.73 | 3500 | 0.4467 | 0.3858 |
| 0.3001 | 9.98 | 4000 | 0.4249 | 0.3701 |
| 0.2716 | 11.22 | 4500 | 0.4533 | 0.3726 |
| 0.2624 | 12.47 | 5000 | 0.4465 | 0.3713 |
| 0.2383 | 13.72 | 5500 | 0.4536 | 0.3666 |
| 0.2223 | 14.96 | 6000 | 0.4484 | 0.3585 |
| 0.2036 | 16.21 | 6500 | 0.4728 | 0.3617 |
| 0.1937 | 17.46 | 7000 | 0.4786 | 0.3585 |
| 0.1834 | 18.7 | 7500 | 0.4724 | 0.3494 |
| 0.1726 | 19.95 | 8000 | 0.4831 | 0.3462 |
| 0.1649 | 21.2 | 8500 | 0.4896 | 0.3412 |
| 0.153 | 22.44 | 9000 | 0.4899 | 0.3416 |
| 0.1454 | 23.69 | 9500 | 0.4917 | 0.3366 |
| 0.1377 | 24.94 | 10000 | 0.5095 | 0.3392 |
| 0.1312 | 26.18 | 10500 | 0.5265 | 0.3354 |
| 0.1268 | 27.43 | 11000 | 0.5322 | 0.3307 |
| 0.1212 | 28.68 | 11500 | 0.5407 | 0.3346 |
| 0.1187 | 29.93 | 12000 | 0.5293 | 0.3322 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
keepitreal/bert-finetuned-squad | d1e4eb5ef2d5dc55704ef29c4125c257148664d6 | 2022-07-21T05:37:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | keepitreal | null | keepitreal/bert-finetuned-squad | 2 | null | transformers | 27,608 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
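As a rough illustration, these settings correspond to `TrainingArguments` along the lines of the sketch below; `output_dir` and any argument not listed above are assumptions:

```python
from transformers import TrainingArguments

# Sketch only: output_dir is an assumption; unlisted arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # mixed_precision_training: Native AMP
)
```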
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/evetixx | 18668a831f06421f8ead898e6200cc12b40254f0 | 2022-07-21T05:36:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/evetixx | 2 | null | transformers | 27,609 | ---
language: en
thumbnail: http://www.huggingtweets.com/evetixx/1658381755785/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1480525219177500675/wKTMg3gl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">eve</div>
<div style="text-align: center; font-size: 14px;">@evetixx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from eve.
| Data | eve |
| --- | --- |
| Tweets downloaded | 185 |
| Retweets | 25 |
| Short tweets | 55 |
| Tweets kept | 105 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2o14y995/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @evetixx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2r3der0q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2r3der0q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/evetixx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jh1/distilbert-base-uncased-finetuned-chunk | 04850e0f22b88d49d34fb77972a93f304ac96d05 | 2022-07-21T07:45:52.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | jh1 | null | jh1/distilbert-base-uncased-finetuned-chunk | 2 | null | transformers | 27,610 | Entry not found |
pannaga/wav2vec2-base-timit-demo-google-colab-testing | bedbc066ed735e56096a4755235f9ae3ade47410 | 2022-07-27T04:18:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pannaga | null | pannaga/wav2vec2-base-timit-demo-google-colab-testing | 2 | null | transformers | 27,611 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab-testing
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3080
- Wer: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6292 | 20.83 | 500 | 3.5570 | 0.9994 |
| 2.8237 | 41.67 | 1000 | 3.3080 | 0.9994 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
cjvt/legacy-t5-sl-small | 5063a1cbd9021159db629a5f3224f7cadd4e22d9 | 2022-07-21T11:14:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"sl",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | cjvt | null | cjvt/legacy-t5-sl-small | 2 | null | transformers | 27,612 | ---
language:
- sl
license: cc-by-sa-4.0
---
# [legacy] t5-sl-small
This is the first version of the t5-sl-small model, which has since been replaced by an updated model (cjvt/t5-sl-small). The two models share the same architecture, but the legacy version was trained for roughly one-sixth as long, i.e. it saw about six times less data during training.
This version remains here due to reproducibility reasons.
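No usage example is provided; a minimal loading sketch with the `transformers` Auto classes, assuming the checkpoint resolves under this repository id (the Slovenian input sentence is illustrative, and since this is a pretrained rather than task-fine-tuned model, raw generation output is only indicative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch; the Slovenian input sentence is an illustrative placeholder.
tokenizer = AutoTokenizer.from_pretrained("cjvt/legacy-t5-sl-small")
model = AutoModelForSeq2SeqLM.from_pretrained("cjvt/legacy-t5-sl-small")

inputs = tokenizer("Danes je lep dan.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```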
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
|
huggingtweets/lpachter | 47710f687afb1ad5511c362e029460cabf8d459f | 2022-07-21T12:11:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lpachter | 2 | null | transformers | 27,613 | ---
language: en
thumbnail: http://www.huggingtweets.com/lpachter/1658405511004/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1257000705761525760/R7Pphmei_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lior Pachter</div>
<div style="text-align: center; font-size: 14px;">@lpachter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lior Pachter.
| Data | Lior Pachter |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 1213 |
| Short tweets | 245 |
| Tweets kept | 1774 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rt1wriv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lpachter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23sx643q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23sx643q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lpachter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
myvision/scibert-uncased-synthetic-50k | a32337266ddc985a219775efbcef7c23e66525fd | 2022-07-21T15:04:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | myvision | null | myvision/scibert-uncased-synthetic-50k | 2 | null | transformers | 27,614 | Entry not found |
Ammonsh/wav2vec2-common_voice-tr-demo | b985ebfcdf69c2ddce59fc750e67b3dc1370cf95 | 2022-07-22T00:38:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Ammonsh | null | Ammonsh/wav2vec2-common_voice-tr-demo | 2 | null | transformers | 27,615 | Entry not found |
danhsf/m2m100_418M-finetuned-kde4-en-to-pt_BR | 1b54a8512c8b2928b7cc7a99f70cb6d8b439d83a | 2022-07-22T12:47:59.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | translation | false | danhsf | null | danhsf/m2m100_418M-finetuned-kde4-en-to-pt_BR | 2 | null | transformers | 27,616 | ---
license: mit
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-kde4-en-to-pt_BR
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-pt_BR
metrics:
- name: Bleu
type: bleu
value: 58.31959113813223
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-kde4-en-to-pt_BR
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5150
- Bleu: 58.3196
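The card omits inference code; a minimal sketch for translating English into Brazilian Portuguese with this checkpoint, assuming the standard M2M100 classes apply and using M2M100's generic `pt` language code (the example sentence is illustrative):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Minimal sketch; the example sentence is an illustrative placeholder.
tokenizer = M2M100Tokenizer.from_pretrained("danhsf/m2m100_418M-finetuned-kde4-en-to-pt_BR")
model = M2M100ForConditionalGeneration.from_pretrained("danhsf/m2m100_418M-finetuned-kde4-en-to-pt_BR")

tokenizer.src_lang = "en"
encoded = tokenizer("Open the file menu and select Save As.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("pt"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```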
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Lvxue/distilled_test_0.99_delete_metric | 00a290c08865ff4145294a0bae8e9ccc853e1b85 | 2022-07-22T03:23:25.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lvxue | null | Lvxue/distilled_test_0.99_delete_metric | 2 | null | transformers | 27,617 | Entry not found |
Lvxue/distilled_test_0.9_delete_metric | 56656a3fbb7c01c1f5e2487c76041e3343d04aec | 2022-07-22T04:10:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lvxue | null | Lvxue/distilled_test_0.9_delete_metric | 2 | null | transformers | 27,618 | Entry not found |
RupE/xlm-roberta-base-finetuned-panx-de | eaddd1311b23675408c3faa91335796cd5339100 | 2022-07-22T05:15:36.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | RupE | null | RupE/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 27,619 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8503293209175562
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1354
- F1: 0.8503
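The card omits inference code; a minimal sketch for tagging German text with the `transformers` pipeline, assuming the checkpoint and tokenizer load from this repository id (the example sentence is illustrative):

```python
from transformers import pipeline

# Minimal sketch; the German sentence is an illustrative placeholder.
ner = pipeline(
    "token-classification",
    model="RupE/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte im Juli Berlin."))
```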
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 132 | 0.1757 | 0.8055 |
| No log | 2.0 | 264 | 0.1372 | 0.8424 |
| No log | 3.0 | 396 | 0.1354 | 0.8503 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Lvxue/distilled_test_0.5_delete_metric | 1253ae074acaf8e41b5236bdea13393d6bf3956e | 2022-07-22T05:43:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lvxue | null | Lvxue/distilled_test_0.5_delete_metric | 2 | null | transformers | 27,620 | Entry not found |
RupE/xlm-roberta-base-finetuned-panx-de-fr | 71be54fa202bb51cb9c5ea1fa63b75b89a976c29 | 2022-07-22T05:37:41.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | RupE | null | RupE/xlm-roberta-base-finetuned-panx-de-fr | 2 | null | transformers | 27,621 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1632
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.1842 | 0.8256 |
| No log | 2.0 | 358 | 0.1720 | 0.8395 |
| No log | 3.0 | 537 | 0.1632 | 0.8505 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RupE/xlm-roberta-base-finetuned-panx-fr | 0c25e8c02a010fae54920fcf43b3e7296c3d7943 | 2022-07-22T05:43:37.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | RupE | null | RupE/xlm-roberta-base-finetuned-panx-fr | 2 | null | transformers | 27,622 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8151120026746907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2880
- F1: 0.8151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 48 | 0.3642 | 0.7463 |
| No log | 2.0 | 96 | 0.3007 | 0.7975 |
| No log | 3.0 | 144 | 0.2880 | 0.8151 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RupE/xlm-roberta-base-finetuned-panx-it | 6b38ace314b4240ead194fe61441a3347c4e4805 | 2022-07-22T05:47:13.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | RupE | null | RupE/xlm-roberta-base-finetuned-panx-it | 2 | null | transformers | 27,623 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: train
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.7434973989595838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3355
- F1: 0.7435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 18 | 0.6871 | 0.4648 |
| No log | 2.0 | 36 | 0.3901 | 0.6932 |
| No log | 3.0 | 54 | 0.3355 | 0.7435 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RupE/xlm-roberta-base-finetuned-panx-en | 2eeddb62403321020a26dbc5fe0564854add2fee | 2022-07-22T05:50:25.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | RupE | null | RupE/xlm-roberta-base-finetuned-panx-en | 2 | null | transformers | 27,624 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: train
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.5541666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6380
- F1: 0.5542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 13 | 1.0388 | 0.1801 |
| No log | 2.0 | 26 | 0.7545 | 0.5053 |
| No log | 3.0 | 39 | 0.6380 | 0.5542 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RupE/xlm-roberta-base-finetuned-panx-all | 89642c3e2cceeeee9b90aa266da7d958315abca9 | 2022-07-22T06:04:25.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | RupE | null | RupE/xlm-roberta-base-finetuned-panx-all | 2 | null | transformers | 27,625 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- F1: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 209 | 0.1990 | 0.8088 |
| No log | 2.0 | 418 | 0.1748 | 0.8426 |
| No log | 3.0 | 627 | 0.1748 | 0.8467 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper1_mesum5 | ac48cf5fe57fbdcaa5be68b209923dbd331d361b | 2022-07-22T11:23:22.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper1_mesum5 | 2 | null | transformers | 27,626 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper1_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper1_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6401
- Accuracy: 0.8278
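The card omits inference code; a minimal sketch with the image-classification pipeline, assuming the fine-tuned ViT weights and preprocessor load from this repository id (the image path is a placeholder):

```python
from transformers import pipeline

# Minimal sketch; "specimen.jpg" is a placeholder path to an input image.
classifier = pipeline("image-classification", model="sudo-s/exper1_mesum5")
print(classifier("specimen.jpg"))
```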
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9352 | 0.23 | 100 | 3.8550 | 0.1959 |
| 3.1536 | 0.47 | 200 | 3.1755 | 0.2888 |
| 2.6937 | 0.7 | 300 | 2.6332 | 0.4272 |
| 2.3748 | 0.93 | 400 | 2.2833 | 0.4970 |
| 1.5575 | 1.16 | 500 | 1.8712 | 0.5888 |
| 1.4063 | 1.4 | 600 | 1.6048 | 0.6314 |
| 1.1841 | 1.63 | 700 | 1.4109 | 0.6621 |
| 1.0857 | 1.86 | 800 | 1.1832 | 0.7112 |
| 0.582 | 2.09 | 900 | 1.0371 | 0.7479 |
| 0.5971 | 2.33 | 1000 | 0.9839 | 0.7462 |
| 0.4617 | 2.56 | 1100 | 0.9233 | 0.7657 |
| 0.4621 | 2.79 | 1200 | 0.8417 | 0.7828 |
| 0.2128 | 3.02 | 1300 | 0.7644 | 0.7970 |
| 0.1883 | 3.26 | 1400 | 0.7001 | 0.8183 |
| 0.1501 | 3.49 | 1500 | 0.6826 | 0.8201 |
| 0.1626 | 3.72 | 1600 | 0.6568 | 0.8254 |
| 0.1053 | 3.95 | 1700 | 0.6401 | 0.8278 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper2_mesum5 | aded307255831b7b742183195fbfe0cd57bef09f | 2022-07-22T11:39:11.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper2_mesum5 | 2 | null | transformers | 27,627 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper2_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper2_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4589
- Accuracy: 0.1308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.4265 | 0.23 | 100 | 4.3676 | 0.0296 |
| 4.1144 | 0.47 | 200 | 4.1606 | 0.0544 |
| 4.0912 | 0.7 | 300 | 4.1071 | 0.0509 |
| 4.0361 | 0.93 | 400 | 4.0625 | 0.0669 |
| 4.0257 | 1.16 | 500 | 3.9682 | 0.0822 |
| 3.8846 | 1.4 | 600 | 3.9311 | 0.0834 |
| 3.9504 | 1.63 | 700 | 3.9255 | 0.0698 |
| 3.9884 | 1.86 | 800 | 3.9404 | 0.0722 |
| 3.7191 | 2.09 | 900 | 3.8262 | 0.0935 |
| 3.7952 | 2.33 | 1000 | 3.8236 | 0.0734 |
| 3.8085 | 2.56 | 1100 | 3.7694 | 0.0964 |
| 3.7535 | 2.79 | 1200 | 3.6757 | 0.1059 |
| 3.4218 | 3.02 | 1300 | 3.6474 | 0.1095 |
| 3.5172 | 3.26 | 1400 | 3.5621 | 0.1166 |
| 3.5173 | 3.49 | 1500 | 3.5579 | 0.1207 |
| 3.4346 | 3.72 | 1600 | 3.4817 | 0.1249 |
| 3.3995 | 3.95 | 1700 | 3.4589 | 0.1308 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper3_mesum5 | cb127fb5725018772a63e6bbba1e295e1bf923c4 | 2022-07-22T12:10:49.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper3_mesum5 | 2 | null | transformers | 27,628 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper3_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper3_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6366
- Accuracy: 0.8367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.895 | 0.23 | 100 | 3.8276 | 0.1935 |
| 3.1174 | 0.47 | 200 | 3.1217 | 0.3107 |
| 2.6 | 0.7 | 300 | 2.5399 | 0.4207 |
| 2.256 | 0.93 | 400 | 2.1767 | 0.5160 |
| 1.5441 | 1.16 | 500 | 1.8086 | 0.5852 |
| 1.3834 | 1.4 | 600 | 1.5565 | 0.6325 |
| 1.1995 | 1.63 | 700 | 1.3339 | 0.6763 |
| 1.0845 | 1.86 | 800 | 1.3299 | 0.6533 |
| 0.6472 | 2.09 | 900 | 1.0679 | 0.7219 |
| 0.5948 | 2.33 | 1000 | 1.0286 | 0.7124 |
| 0.5565 | 2.56 | 1100 | 0.9595 | 0.7284 |
| 0.4879 | 2.79 | 1200 | 0.8915 | 0.7420 |
| 0.2816 | 3.02 | 1300 | 0.8159 | 0.7763 |
| 0.2412 | 3.26 | 1400 | 0.7766 | 0.7911 |
| 0.2015 | 3.49 | 1500 | 0.7850 | 0.7828 |
| 0.274 | 3.72 | 1600 | 0.7361 | 0.7935 |
| 0.1244 | 3.95 | 1700 | 0.7299 | 0.7911 |
| 0.0794 | 4.19 | 1800 | 0.7441 | 0.7846 |
| 0.0915 | 4.42 | 1900 | 0.7614 | 0.7941 |
| 0.0817 | 4.65 | 2000 | 0.7310 | 0.8012 |
| 0.0561 | 4.88 | 2100 | 0.7222 | 0.8065 |
| 0.0165 | 5.12 | 2200 | 0.7515 | 0.8059 |
| 0.0168 | 5.35 | 2300 | 0.6687 | 0.8213 |
| 0.0212 | 5.58 | 2400 | 0.6671 | 0.8249 |
| 0.0389 | 5.81 | 2500 | 0.6893 | 0.8278 |
| 0.0087 | 6.05 | 2600 | 0.6839 | 0.8260 |
| 0.0087 | 6.28 | 2700 | 0.6412 | 0.8320 |
| 0.0077 | 6.51 | 2800 | 0.6366 | 0.8367 |
| 0.0065 | 6.74 | 2900 | 0.6697 | 0.8272 |
| 0.0061 | 6.98 | 3000 | 0.6510 | 0.8349 |
| 0.0185 | 7.21 | 3100 | 0.6452 | 0.8367 |
| 0.0059 | 7.44 | 3200 | 0.6426 | 0.8379 |
| 0.0062 | 7.67 | 3300 | 0.6398 | 0.8379 |
| 0.0315 | 7.91 | 3400 | 0.6397 | 0.8385 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper4_mesum5 | 3413581cafd958b6808cf5b755a79bd9b69bb0fb | 2022-07-22T12:10:07.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper4_mesum5 | 2 | null | transformers | 27,629 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper4_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper4_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4389
- Accuracy: 0.1331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.3793 | 0.23 | 100 | 3.4527 | 0.1308 |
| 3.2492 | 0.47 | 200 | 3.4501 | 0.1331 |
| 3.3847 | 0.7 | 300 | 3.4500 | 0.1272 |
| 3.3739 | 0.93 | 400 | 3.4504 | 0.1320 |
| 3.4181 | 1.16 | 500 | 3.4452 | 0.1320 |
| 3.214 | 1.4 | 600 | 3.4503 | 0.1320 |
| 3.282 | 1.63 | 700 | 3.4444 | 0.1325 |
| 3.5308 | 1.86 | 800 | 3.4473 | 0.1337 |
| 3.2251 | 2.09 | 900 | 3.4415 | 0.1361 |
| 3.4385 | 2.33 | 1000 | 3.4408 | 0.1343 |
| 3.3702 | 2.56 | 1100 | 3.4406 | 0.1325 |
| 3.366 | 2.79 | 1200 | 3.4411 | 0.1355 |
| 3.2022 | 3.02 | 1300 | 3.4403 | 0.1308 |
| 3.2768 | 3.26 | 1400 | 3.4394 | 0.1320 |
| 3.3444 | 3.49 | 1500 | 3.4394 | 0.1314 |
| 3.2981 | 3.72 | 1600 | 3.4391 | 0.1331 |
| 3.3349 | 3.95 | 1700 | 3.4389 | 0.1331 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper6_mesum5 | 527a31ecd9abac8bc8a0a6fdf39f17275ea1bb47 | 2022-07-22T13:30:23.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper6_mesum5 | 2 | null | transformers | 27,630 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper6_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper6_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8241
- Accuracy: 0.8036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9276 | 0.23 | 100 | 3.8550 | 0.2089 |
| 3.0853 | 0.47 | 200 | 3.1106 | 0.3414 |
| 2.604 | 0.7 | 300 | 2.5732 | 0.4379 |
| 2.3183 | 0.93 | 400 | 2.2308 | 0.4882 |
| 1.5326 | 1.16 | 500 | 1.7903 | 0.5828 |
| 1.3367 | 1.4 | 600 | 1.5524 | 0.6349 |
| 1.1544 | 1.63 | 700 | 1.3167 | 0.6645 |
| 1.0788 | 1.86 | 800 | 1.3423 | 0.6385 |
| 0.6762 | 2.09 | 900 | 1.0780 | 0.7124 |
| 0.6483 | 2.33 | 1000 | 1.0090 | 0.7284 |
| 0.6321 | 2.56 | 1100 | 1.0861 | 0.7024 |
| 0.5558 | 2.79 | 1200 | 0.9933 | 0.7183 |
| 0.342 | 3.02 | 1300 | 0.8871 | 0.7462 |
| 0.2964 | 3.26 | 1400 | 0.9330 | 0.7408 |
| 0.1959 | 3.49 | 1500 | 0.9367 | 0.7343 |
| 0.368 | 3.72 | 1600 | 0.8472 | 0.7550 |
| 0.1821 | 3.95 | 1700 | 0.8937 | 0.7568 |
| 0.1851 | 4.19 | 1800 | 0.9546 | 0.7485 |
| 0.1648 | 4.42 | 1900 | 0.9790 | 0.7355 |
| 0.172 | 4.65 | 2000 | 0.8947 | 0.7627 |
| 0.0928 | 4.88 | 2100 | 1.0093 | 0.7462 |
| 0.0699 | 5.12 | 2200 | 0.8374 | 0.7639 |
| 0.0988 | 5.35 | 2300 | 0.9189 | 0.7645 |
| 0.0822 | 5.58 | 2400 | 0.9512 | 0.7580 |
| 0.1223 | 5.81 | 2500 | 1.0809 | 0.7349 |
| 0.0509 | 6.05 | 2600 | 0.9297 | 0.7769 |
| 0.0511 | 6.28 | 2700 | 0.8981 | 0.7822 |
| 0.0596 | 6.51 | 2800 | 0.9468 | 0.7704 |
| 0.0494 | 6.74 | 2900 | 0.9045 | 0.7870 |
| 0.0643 | 6.98 | 3000 | 1.1559 | 0.7391 |
| 0.0158 | 7.21 | 3100 | 0.8450 | 0.7899 |
| 0.0129 | 7.44 | 3200 | 0.8241 | 0.8036 |
| 0.0441 | 7.67 | 3300 | 0.9679 | 0.7751 |
| 0.0697 | 7.91 | 3400 | 1.0387 | 0.7751 |
| 0.0084 | 8.14 | 3500 | 0.9441 | 0.7947 |
| 0.0182 | 8.37 | 3600 | 0.8967 | 0.7994 |
| 0.0042 | 8.6 | 3700 | 0.8750 | 0.8041 |
| 0.0028 | 8.84 | 3800 | 0.9349 | 0.8041 |
| 0.0053 | 9.07 | 3900 | 0.9403 | 0.7982 |
| 0.0266 | 9.3 | 4000 | 0.9966 | 0.7959 |
| 0.0022 | 9.53 | 4100 | 0.9472 | 0.8018 |
| 0.0018 | 9.77 | 4200 | 0.8717 | 0.8136 |
| 0.0018 | 10.0 | 4300 | 0.8964 | 0.8083 |
| 0.0046 | 10.23 | 4400 | 0.8623 | 0.8160 |
| 0.0037 | 10.47 | 4500 | 0.8762 | 0.8172 |
| 0.0013 | 10.7 | 4600 | 0.9028 | 0.8142 |
| 0.0013 | 10.93 | 4700 | 0.9084 | 0.8178 |
| 0.0013 | 11.16 | 4800 | 0.8733 | 0.8213 |
| 0.001 | 11.4 | 4900 | 0.8823 | 0.8207 |
| 0.0009 | 11.63 | 5000 | 0.8769 | 0.8213 |
| 0.0282 | 11.86 | 5100 | 0.8791 | 0.8219 |
| 0.001 | 12.09 | 5200 | 0.8673 | 0.8249 |
| 0.0016 | 12.33 | 5300 | 0.8633 | 0.8225 |
| 0.0008 | 12.56 | 5400 | 0.8766 | 0.8195 |
| 0.0008 | 12.79 | 5500 | 0.8743 | 0.8225 |
| 0.0008 | 13.02 | 5600 | 0.8752 | 0.8231 |
| 0.0008 | 13.26 | 5700 | 0.8676 | 0.8237 |
| 0.0007 | 13.49 | 5800 | 0.8677 | 0.8237 |
| 0.0008 | 13.72 | 5900 | 0.8703 | 0.8237 |
| 0.0007 | 13.95 | 6000 | 0.8725 | 0.8237 |
| 0.0006 | 14.19 | 6100 | 0.8741 | 0.8231 |
| 0.0006 | 14.42 | 6200 | 0.8758 | 0.8237 |
| 0.0008 | 14.65 | 6300 | 0.8746 | 0.8243 |
| 0.0007 | 14.88 | 6400 | 0.8759 | 0.8243 |
| 0.0007 | 15.12 | 6500 | 0.8803 | 0.8231 |
| 0.0007 | 15.35 | 6600 | 0.8808 | 0.8237 |
| 0.0007 | 15.58 | 6700 | 0.8798 | 0.8243 |
| 0.0007 | 15.81 | 6800 | 0.8805 | 0.8243 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper7_mesum5 | fe55181b981a9b7eb7fb2319036686f572311c54 | 2022-07-22T14:31:45.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper7_mesum5 | 2 | null | transformers | 27,631 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper7_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper7_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5889
- Accuracy: 0.8538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2072 | 0.23 | 100 | 4.1532 | 0.1923 |
| 3.5433 | 0.47 | 200 | 3.5680 | 0.2888 |
| 3.1388 | 0.7 | 300 | 3.1202 | 0.3911 |
| 2.7924 | 0.93 | 400 | 2.7434 | 0.4787 |
| 2.1269 | 1.16 | 500 | 2.3262 | 0.5781 |
| 1.8589 | 1.4 | 600 | 1.9754 | 0.6272 |
| 1.7155 | 1.63 | 700 | 1.7627 | 0.6840 |
| 1.4689 | 1.86 | 800 | 1.5937 | 0.6994 |
| 1.0149 | 2.09 | 900 | 1.3168 | 0.7497 |
| 0.8148 | 2.33 | 1000 | 1.1630 | 0.7615 |
| 0.7159 | 2.56 | 1100 | 1.0869 | 0.7675 |
| 0.7257 | 2.79 | 1200 | 0.9607 | 0.7893 |
| 0.4171 | 3.02 | 1300 | 0.8835 | 0.7935 |
| 0.2969 | 3.26 | 1400 | 0.8259 | 0.8130 |
| 0.2405 | 3.49 | 1500 | 0.7711 | 0.8142 |
| 0.2948 | 3.72 | 1600 | 0.7629 | 0.8112 |
| 0.1765 | 3.95 | 1700 | 0.7117 | 0.8124 |
| 0.1603 | 4.19 | 1800 | 0.6946 | 0.8237 |
| 0.0955 | 4.42 | 1900 | 0.6597 | 0.8349 |
| 0.0769 | 4.65 | 2000 | 0.6531 | 0.8266 |
| 0.0816 | 4.88 | 2100 | 0.6335 | 0.8337 |
| 0.0315 | 5.12 | 2200 | 0.6087 | 0.8402 |
| 0.0368 | 5.35 | 2300 | 0.6026 | 0.8444 |
| 0.0377 | 5.58 | 2400 | 0.6450 | 0.8278 |
| 0.0603 | 5.81 | 2500 | 0.6564 | 0.8343 |
| 0.0205 | 6.05 | 2600 | 0.6119 | 0.8467 |
| 0.019 | 6.28 | 2700 | 0.6070 | 0.8479 |
| 0.0249 | 6.51 | 2800 | 0.6002 | 0.8538 |
| 0.0145 | 6.74 | 2900 | 0.6012 | 0.8497 |
| 0.0134 | 6.98 | 3000 | 0.5991 | 0.8521 |
| 0.0271 | 7.21 | 3100 | 0.5972 | 0.8503 |
| 0.0128 | 7.44 | 3200 | 0.5911 | 0.8521 |
| 0.0123 | 7.67 | 3300 | 0.5889 | 0.8538 |
| 0.0278 | 7.91 | 3400 | 0.6135 | 0.8491 |
| 0.0106 | 8.14 | 3500 | 0.5934 | 0.8533 |
| 0.0109 | 8.37 | 3600 | 0.5929 | 0.8533 |
| 0.0095 | 8.6 | 3700 | 0.5953 | 0.8550 |
| 0.009 | 8.84 | 3800 | 0.5933 | 0.8574 |
| 0.009 | 9.07 | 3900 | 0.5948 | 0.8550 |
| 0.0089 | 9.3 | 4000 | 0.5953 | 0.8556 |
| 0.0086 | 9.53 | 4100 | 0.5956 | 0.8544 |
| 0.0085 | 9.77 | 4200 | 0.5955 | 0.8556 |
| 0.0087 | 10.0 | 4300 | 0.5954 | 0.8538 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-triplet | 8760f1e7fa63fd9921adfcb26d36cce7b87b5e9b | 2022-07-24T15:52:49.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-triplet | 2 | null | transformers | 27,632 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-b-triplet | 38448eeea7fb5f70005213a67f231f8ae59eb4a3 | 2022-07-24T17:01:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-b-triplet | 2 | null | transformers | 27,633 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-c-triplet | 26557d20e4204a84f6454610944d6d521bf4775d | 2022-07-24T18:24:44.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-c-triplet | 2 | null | transformers | 27,634 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-d-triplet | 84b69e8c9ea6922a9fe34d2b554643d6bfbf4aa8 | 2022-07-24T19:37:52.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-d-triplet | 2 | null | transformers | 27,635 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-e-triplet | 3b231cae6c60cd47794fb316bcbb9ad52c56f46a | 2022-07-24T21:02:47.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-e-triplet | 2 | null | transformers | 27,636 | Entry not found |
huggingtweets/deepleffen-tsm_leffen | 1ef65227da30da9acf06c1bc01f3844274a02b2d | 2022-07-22T17:50:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/deepleffen-tsm_leffen | 2 | null | transformers | 27,637 | ---
language: en
thumbnail: http://www.huggingtweets.com/deepleffen-tsm_leffen/1658512231427/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547974425718300675/wvQuPBGR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & TSM FTX Leffen</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-tsm_leffen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Deep Leffen Bot & TSM FTX Leffen.
| Data | Deep Leffen Bot | TSM FTX Leffen |
| --- | --- | --- |
| Tweets downloaded | 591 | 3249 |
| Retweets | 14 | 291 |
| Short tweets | 27 | 283 |
| Tweets kept | 550 | 2675 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lq4lpvp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-tsm_leffen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1v9tktg9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1v9tktg9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-tsm_leffen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tsrivatsav/wav2vec2-large-xls-r-300m-en-libri-more-steps | 0626af327e3f0fa3c4c747ff62706966181a582a | 2022-07-24T17:57:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tsrivatsav | null | tsrivatsav/wav2vec2-large-xls-r-300m-en-libri-more-steps | 2 | null | transformers | 27,638 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: wav2vec2-large-xls-r-300m-en-libri-more-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-en-libri-more-steps
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7624
- Wer: 0.8772
- Cer: 0.3762
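A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline (the model id comes from this repository; the audio file is a hypothetical 16 kHz recording):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tsrivatsav/wav2vec2-large-xls-r-300m-en-libri-more-steps",
)

# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```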
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.94 | 33 | 2.9987 | 1.0 | 1.0 |
| No log | 3.88 | 66 | 2.8951 | 1.0 | 1.0 |
| No log | 5.82 | 99 | 2.8732 | 1.0 | 1.0 |
| 3.781 | 7.76 | 132 | 2.6057 | 1.0 | 1.0 |
| 3.781 | 9.71 | 165 | 1.9015 | 1.0154 | 0.5616 |
| 3.781 | 11.65 | 198 | 1.5226 | 0.9263 | 0.4462 |
| 2.2258 | 13.59 | 231 | 1.5116 | 0.8913 | 0.3967 |
| 2.2258 | 15.53 | 264 | 1.5634 | 0.8922 | 0.3842 |
| 2.2258 | 17.47 | 297 | 1.7016 | 0.8876 | 0.3796 |
| 0.7946 | 19.41 | 330 | 1.7624 | 0.8772 | 0.3762 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cpu
- Datasets 1.18.3
- Tokenizers 0.12.1
|
techsword/wav2vec-large-xlsr-53-frisian-fame | eb4ea70c34983127e36970e06d1b5a3b546c0e6e | 2022-07-23T20:23:55.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | techsword | null | techsword/wav2vec-large-xlsr-53-frisian-fame | 2 | null | transformers | 27,639 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-a-triplet | db433ae22f4d34387fd693b64191f1a8f1d62ac8 | 2022-07-24T15:34:31.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-a-triplet | 2 | null | transformers | 27,640 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-b-triplet | aadb0b35f47502eceaee464e02aefdccc6b84bf3 | 2022-07-24T16:41:49.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-b-triplet | 2 | null | transformers | 27,641 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-c-triplet | a2a95f91728d4703c4f50140be9b2e4fa2fb3f72 | 2022-07-24T18:05:07.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-c-triplet | 2 | null | transformers | 27,642 | Entry not found |
relbert/relbert-roberta-large-semeval2012-average-prompt-d-triplet | d8b81d0c55c46382b34f4eca8b1119b97177cabd | 2022-07-24T19:15:24.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-average-prompt-d-triplet | 2 | null | transformers | 27,643 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-a-triplet | 94eec0a698dd3601218682f4e8035e3cba341e06 | 2022-07-24T15:13:46.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-a-triplet | 2 | null | transformers | 27,644 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-b-triplet | e4c1cca41e721c1348a3ca3f1e3885a19f533bfa | 2022-07-24T16:22:36.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-b-triplet | 2 | null | transformers | 27,645 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-c-triplet | e5f73184d758fe4b9037bc6b1c42c113359a4618 | 2022-07-24T17:43:51.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-c-triplet | 2 | null | transformers | 27,646 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-d-triplet | c9cad930450da6339ee92d23edda20b7ad6cb5a5 | 2022-07-24T18:53:37.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-d-triplet | 2 | null | transformers | 27,647 | Entry not found |
relbert/relbert-roberta-large-semeval2012-mask-prompt-e-triplet | c7d6e2a87c020b8d434eed99da616db944dff628 | 2022-07-24T20:16:17.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-semeval2012-mask-prompt-e-triplet | 2 | null | transformers | 27,648 | Entry not found |
affahrizain/distilbert-base-uncased-finetuned-emotion | 7fa547bca29e264b809cc75bcf14dcfdaa1876d7 | 2022-07-24T16:10:36.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | affahrizain | null | affahrizain/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 27,649 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.936
- name: F1
type: f1
value: 0.936054890104025
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.936
- F1: 0.9361
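A minimal inference sketch with the text-classification pipeline (the model id comes from this repository; the example sentence is made up):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="affahrizain/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label with its score.
print(classifier("I can't believe how wonderful today turned out!"))
```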
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4279 | 1.0 | 2000 | 0.2058 | 0.9345 | 0.9347 |
| 0.1603 | 2.0 | 4000 | 0.1858 | 0.936 | 0.9361 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tsrivatsav/wav2vec2-large-xls-r-300m-en-libri-even-more-steps | 317a68952bdc9788cedcc7b99620525a2691169c | 2022-07-25T14:09:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tsrivatsav | null | tsrivatsav/wav2vec2-large-xls-r-300m-en-libri-even-more-steps | 2 | null | transformers | 27,650 | Entry not found |
jslowik/xlm-roberta-base-finetuned-panx-de | c920449b94d66bef0ae3f162305cbb2b514a4e0a | 2022-07-25T11:04:21.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | jslowik | null | jslowik/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 27,651 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8641580540170158
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1634
- F1: 0.8642
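A minimal tagging sketch with the token-classification pipeline (the model id comes from this repository; the German example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jslowik/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Angela Merkel besuchte gestern das Rathaus in München."))
```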
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2624 | 1.0 | 1573 | 0.1790 | 0.8286 |
| 0.1395 | 2.0 | 3146 | 0.1491 | 0.8463 |
| 0.0815 | 3.0 | 4719 | 0.1634 | 0.8642 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s350 | a36e80fb818f68fd81dc40df9bf36ffd8732145a | 2022-07-25T13:06:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s350 | 2 | null | transformers | 27,652 | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s350
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
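A minimal transcription sketch with the HuggingSound tool mentioned above (file paths are placeholders; the calls below follow the library's documented interface, so treat the exact API as an assumption):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s350"
)

# Placeholder paths; recordings should be sampled at 16 kHz.
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```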
|
SummerChiam/rust_image_classification_10 | 0e93f85379b09c7b3aed402c188faf6ef35de348 | 2022-07-26T14:07:46.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/rust_image_classification_10 | 2 | null | transformers | 27,653 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_4
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9417721629142761
---
# rust_image_classification_4
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust
 |
swtx/ernie-2.0-base-chinese | 5b6eb368877a0f180d95744b56f88f5b8ceef992 | 2022-07-26T15:02:37.000Z | [
"pytorch",
"transformers",
"license:apache-2.0"
] | null | false | swtx | null | swtx/ernie-2.0-base-chinese | 2 | null | transformers | 27,654 | ---
license: apache-2.0
---
|
fourthbrain-demo/demo | 6569d541bbdb445e25bf0a19e9dd31c5472dbf5a | 2022-07-26T16:08:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | fourthbrain-demo | null | fourthbrain-demo/demo | 2 | null | transformers | 27,655 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zeaksi/bert-finetuned-ner | 4dc7e3a07aef96fc0affc3bae1bbe4410282aad7 | 2022-07-27T08:03:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | zeaksi | null | zeaksi/bert-finetuned-ner | 2 | null | transformers | 27,656 | Entry not found |
wooihen/xlm-roberta-base-finetuned-panx-de-fr | 3e4304aa752b1639ff32a040dc7b4337e0d3a3da | 2022-07-27T07:25:02.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | wooihen | null | wooihen/xlm-roberta-base-finetuned-panx-de-fr | 2 | null | transformers | 27,657 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AlphaNinja27/wav2vec2-large-xls-r-300m-panjabi-colab | 5e68968a77f5aa3316006c08ef5b911787c0cc03 | 2022-07-27T12:14:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AlphaNinja27 | null | AlphaNinja27/wav2vec2-large-xls-r-300m-panjabi-colab | 2 | null | transformers | 27,658 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-panjabi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-panjabi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/jordo4today-paddedpossum-wrenfing | c0f55e573aa23f696a434163e2ba974da3b5f39d | 2022-07-27T10:16:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jordo4today-paddedpossum-wrenfing | 2 | null | transformers | 27,659 | ---
language: en
thumbnail: http://www.huggingtweets.com/jordo4today-paddedpossum-wrenfing/1658916978297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1538409928943083526/gilLk6Ju_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381760254799716353/bNTnf-3w_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1546006810754260992/Dk6vMJU3_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mr. Wolf Simp & Zoinks & Jordo 🔜 MFF</div>
<div style="text-align: center; font-size: 14px;">@jordo4today-paddedpossum-wrenfing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mr. Wolf Simp & Zoinks & Jordo 🔜 MFF.
| Data | Mr. Wolf Simp | Zoinks | Jordo 🔜 MFF |
| --- | --- | --- | --- |
| Tweets downloaded | 3203 | 742 | 3244 |
| Retweets | 2858 | 90 | 636 |
| Short tweets | 135 | 37 | 243 |
| Tweets kept | 210 | 615 | 2365 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e01we01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jordo4today-paddedpossum-wrenfing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wh0na3g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wh0na3g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jordo4today-paddedpossum-wrenfing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sudo-s/modeversion28_7 | 1e5a4dea7fe054ad14d6bbc92a2cfe5b15148e5a | 2022-07-27T18:22:15.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers"
] | image-classification | false | sudo-s | null | sudo-s/modeversion28_7 | 2 | null | transformers | 27,660 | Entry not found |
curtsmith/distilbert-base-uncased-finetuned-cola | 4a581106a28cf4b035fb3b19bc4da1307da7f5ce | 2022-07-27T18:41:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | curtsmith | null | curtsmith/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 27,661 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5363967157085073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8123
- Matthews Correlation: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 |
| 0.3466 | 2.0 | 1070 | 0.5048 | 0.4832 |
| 0.2335 | 3.0 | 1605 | 0.5641 | 0.5173 |
| 0.1812 | 4.0 | 2140 | 0.7638 | 0.5200 |
| 0.1334 | 5.0 | 2675 | 0.8123 | 0.5364 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
victorcosta/bert-finetuned-ner | dea3bb35b6febdf36b6f0d22c1dd91f9622d05e3 | 2022-07-27T22:04:51.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | victorcosta | null | victorcosta/bert-finetuned-ner | 2 | null | transformers | 27,662 | Entry not found |
mughalk4/mBERT-Turkish-Mono | a6edba043b414be1cf8ffb98575bac3ed9a4c989 | 2022-07-28T08:28:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mughalk4 | null | mughalk4/mBERT-Turkish-Mono | 2 | null | transformers | 27,663 | Entry not found |
jinghan/bert-base-uncased-finetuned-wnli | 1ba537eda30dd620d30884159e563971ca773314 | 2022-07-28T13:04:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jinghan | null | jinghan/bert-base-uncased-finetuned-wnli | 2 | null | transformers | 27,664 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: train
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6917
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 0.6925 | 0.5493 |
| No log | 2.0 | 20 | 0.6917 | 0.5634 |
| No log | 3.0 | 30 | 0.6971 | 0.3239 |
| No log | 4.0 | 40 | 0.6999 | 0.2958 |
| No log | 5.0 | 50 | 0.6998 | 0.2676 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jperezv/distilbert-base-uncased-finetuned-imdb | 8d41fa77d3a616a096b4bc16cec96a81ad1c095b | 2022-07-28T17:14:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jperezv | null | jperezv/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 27,665 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
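A minimal fill-mask sketch (the model id comes from this repository; the example sentence is made up):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jperezv/distilbert-base-uncased-finetuned-imdb",
)

# DistilBERT uses the [MASK] token.
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```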
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
Vlasta/DNADebertaSentencepiece30k | 8628a56ffbe56e240b310d2c9a098a2d17dee2d4 | 2022-07-30T10:12:11.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/DNADebertaSentencepiece30k | 2 | null | transformers | 27,666 | Entry not found |
commanderstrife/ADE-Bio_ClinicalBERT-NER | 1b2419d0dc87b9d7c3c458c2e7a8d4eb128703cc | 2022-07-29T01:39:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | commanderstrife | null | commanderstrife/ADE-Bio_ClinicalBERT-NER | 2 | null | transformers | 27,667 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ADE-Bio_ClinicalBERT-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ADE-Bio_ClinicalBERT-NER
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1926
- Precision: 0.7830
- Recall: 0.8811
- F1: 0.8291
- Accuracy: 0.9437
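A sketch that loads the tokenizer and model explicitly before wrapping them in a pipeline (the model id comes from this repository; the example sentence is made up):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "commanderstrife/ADE-Bio_ClinicalBERT-NER"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("The patient developed a severe rash after starting amoxicillin."))
```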
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2389 | 1.0 | 201 | 0.2100 | 0.7155 | 0.8292 | 0.7681 | 0.9263 |
| 0.0648 | 2.0 | 402 | 0.1849 | 0.7716 | 0.8711 | 0.8183 | 0.9392 |
| 0.2825 | 3.0 | 603 | 0.1856 | 0.7834 | 0.8788 | 0.8284 | 0.9422 |
| 0.199 | 4.0 | 804 | 0.1875 | 0.7796 | 0.8781 | 0.8259 | 0.9430 |
| 0.0404 | 5.0 | 1005 | 0.1926 | 0.7830 | 0.8811 | 0.8291 | 0.9437 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
chintagunta85/test_ner3 | 76be0e08ff7b02e80444ba7380f4bca10ef54cfa | 2022-07-29T04:40:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:pv_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | chintagunta85 | null | chintagunta85/test_ner3 | 2 | null | transformers | 27,668 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pv_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test_ner3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: pv_dataset
type: pv_dataset
config: PVDatasetCorpus
split: train
args: PVDatasetCorpus
metrics:
- name: Precision
type: precision
value: 0.6698151950718686
- name: Recall
type: recall
value: 0.6499117663801446
- name: F1
type: f1
value: 0.6597133941985438
- name: Accuracy
type: accuracy
value: 0.9606609586670052
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_ner3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pv_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2983
- Precision: 0.6698
- Recall: 0.6499
- F1: 0.6597
- Accuracy: 0.9607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1106 | 1.0 | 1813 | 0.1128 | 0.6050 | 0.5949 | 0.5999 | 0.9565 |
| 0.0705 | 2.0 | 3626 | 0.1190 | 0.6279 | 0.6122 | 0.6200 | 0.9585 |
| 0.0433 | 3.0 | 5439 | 0.1458 | 0.6342 | 0.5983 | 0.6157 | 0.9574 |
| 0.0301 | 4.0 | 7252 | 0.1453 | 0.6305 | 0.6818 | 0.6552 | 0.9594 |
| 0.0196 | 5.0 | 9065 | 0.1672 | 0.6358 | 0.6871 | 0.6605 | 0.9594 |
| 0.0133 | 6.0 | 10878 | 0.1931 | 0.6427 | 0.6138 | 0.6279 | 0.9587 |
| 0.0104 | 7.0 | 12691 | 0.1948 | 0.6657 | 0.6511 | 0.6583 | 0.9607 |
| 0.0081 | 8.0 | 14504 | 0.2243 | 0.6341 | 0.6574 | 0.6455 | 0.9586 |
| 0.0054 | 9.0 | 16317 | 0.2432 | 0.6547 | 0.6318 | 0.6431 | 0.9588 |
| 0.0041 | 10.0 | 18130 | 0.2422 | 0.6717 | 0.6397 | 0.6553 | 0.9605 |
| 0.0041 | 11.0 | 19943 | 0.2415 | 0.6571 | 0.6420 | 0.6495 | 0.9601 |
| 0.0027 | 12.0 | 21756 | 0.2567 | 0.6560 | 0.6590 | 0.6575 | 0.9601 |
| 0.0023 | 13.0 | 23569 | 0.2609 | 0.6640 | 0.6495 | 0.6566 | 0.9606 |
| 0.002 | 14.0 | 25382 | 0.2710 | 0.6542 | 0.6670 | 0.6606 | 0.9598 |
| 0.0012 | 15.0 | 27195 | 0.2766 | 0.6692 | 0.6539 | 0.6615 | 0.9610 |
| 0.001 | 16.0 | 29008 | 0.2938 | 0.6692 | 0.6415 | 0.6551 | 0.9603 |
| 0.0007 | 17.0 | 30821 | 0.2969 | 0.6654 | 0.6490 | 0.6571 | 0.9604 |
| 0.0007 | 18.0 | 32634 | 0.3035 | 0.6628 | 0.6456 | 0.6541 | 0.9601 |
| 0.0007 | 19.0 | 34447 | 0.2947 | 0.6730 | 0.6489 | 0.6607 | 0.9609 |
| 0.0004 | 20.0 | 36260 | 0.2983 | 0.6698 | 0.6499 | 0.6597 | 0.9607 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
relbert/relbert-roberta-large-conceptnet-hc-average-prompt-b-nce | 49140c1465ac0213c4ff2cfe37f1bd63aa128e4d | 2022-07-29T03:35:47.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-conceptnet-hc-average-prompt-b-nce | 2 | null | transformers | 27,669 | Entry not found |
keithanpai/swin-tiny-patch4-window7-224-finetuned-eurosat | 872ce39c1f33596c2da003ccc9e9f37b88124d0a | 2022-07-29T22:22:54.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | keithanpai | null | keithanpai/swin-tiny-patch4-window7-224-finetuned-eurosat | 2 | null | transformers | 27,670 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8083832335329342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5765
- Accuracy: 0.8084
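A sketch that calls the feature extractor and classification head directly instead of using the pipeline (the model id comes from this repository; the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_id = "keithanpai/swin-tiny-patch4-window7-224-finetuned-eurosat"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```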
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.731 | 0.99 | 70 | 0.7428 | 0.7405 |
| 0.6044 | 1.99 | 140 | 0.6433 | 0.7735 |
| 0.5525 | 2.99 | 210 | 0.5765 | 0.8084 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BramVanroy/xlm-roberta-base-hebban-reviews5 | fb89e59f1e0f9001fa1ba2d4ea3e6893e131099c | 2022-07-29T09:56:23.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"nl",
"dataset:BramVanroy/hebban-reviews",
"transformers",
"sentiment-analysis",
"dutch",
"text",
"license:mit",
"model-index"
] | text-classification | false | BramVanroy | null | BramVanroy/xlm-roberta-base-hebban-reviews5 | 2 | null | transformers | 27,671 | ---
datasets:
- BramVanroy/hebban-reviews
language:
- nl
license: mit
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: xlm-roberta-base-hebban-reviews5
results:
- dataset:
config: filtered_rating
name: BramVanroy/hebban-reviews - filtered_rating - 2.0.0
revision: 2.0.0
split: test
type: BramVanroy/hebban-reviews
metrics:
- name: Test accuracy
type: accuracy
value: 0.4125246548323471
- name: Test f1
type: f1
value: 0.25056861304587985
- name: Test precision
type: precision
value: 0.3248910707548293
- name: Test qwk
type: qwk
value: 0.11537886275015763
- name: Test recall
type: recall
value: 0.4125246548323471
task:
name: sentiment analysis
type: text-classification
tags:
- sentiment-analysis
- dutch
- text
widget:
- text: Wauw, wat een leuk boek! Ik heb me er er goed mee vermaakt.
- text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee
te trekken in het verhaal maar mij overtuigt het alleszins niet.
- text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het
wordt een beetje saai vanaf het vijfde hoofdstuk
---
# xlm-roberta-base-hebban-reviews5
*This model should not be used*; it appears to have converged poorly. It may be updated in the future.
# Dataset
- dataset_name: BramVanroy/hebban-reviews
- dataset_config: filtered_rating
- dataset_revision: 2.0.0
- labelcolumn: review_rating0
- textcolumn: review_text_without_quotes
# Training
- optim: adamw_hf
- learning_rate: 5e-05
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- gradient_accumulation_steps: 1
- max_steps: 5001
- save_steps: 500
- metric_for_best_model: qwk
# Best checkpoint based on validation
- best_metric: 0.10318187882131191
- best_model_checkpoint: trained/hebban-reviews5/xlm-roberta-base/checkpoint-3000
# Test results of best checkpoint
- accuracy: 0.4125246548323471
- f1: 0.25056861304587985
- precision: 0.3248910707548293
- qwk: 0.11537886275015763
- recall: 0.4125246548323471
## Confusion matrix

## Normalized confusion matrix

# Environment
- cuda_capabilities: 8.0; 8.0
- cuda_device_count: 2
- cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
- finetuner_commit: 8159b4c1d5e66b36f68dd263299927ffb8670ebd
- platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
- python_version: 3.9.5
- toch_version: 1.10.0
- transformers_version: 4.21.0
|
AkmalAshirmatov/first_try | 278c8a060c42b5c0ead8d3174d5430600b1d73e7 | 2022-07-29T09:14:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice_7_0",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AkmalAshirmatov | null | AkmalAshirmatov/first_try | 2 | null | transformers | 27,672 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_7_0
model-index:
- name: first_try
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_try
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_7_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
psroy/wav2vec2-base-timit-demo-colab | 70d74f272f1e04850bb7a2f4c034fc8c528c147e | 2022-07-30T07:08:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | psroy | null | psroy/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 27,673 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4772
- Wer: 0.2821
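A lower-level sketch using the processor and CTC model directly (the model id comes from this repository; the waveform is a placeholder for one second of 16 kHz audio):

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "psroy/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder waveform: replace with a real 1-D float array sampled at 16 kHz.
speech = [0.0] * 16000
inputs = processor(speech, sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```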
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6949 | 0.87 | 500 | 2.4599 | 0.9999 |
| 0.9858 | 1.73 | 1000 | 0.5249 | 0.4674 |
| 0.4645 | 2.6 | 1500 | 0.4604 | 0.3900 |
| 0.3273 | 3.46 | 2000 | 0.3939 | 0.3612 |
| 0.2474 | 4.33 | 2500 | 0.4150 | 0.3560 |
| 0.2191 | 5.19 | 3000 | 0.3855 | 0.3344 |
| 0.1662 | 6.06 | 3500 | 0.3779 | 0.3258 |
| 0.1669 | 6.92 | 4000 | 0.4841 | 0.3286 |
| 0.151 | 7.79 | 4500 | 0.4182 | 0.3219 |
| 0.1175 | 8.65 | 5000 | 0.4194 | 0.3107 |
| 0.1103 | 9.52 | 5500 | 0.4256 | 0.3129 |
| 0.1 | 10.38 | 6000 | 0.4352 | 0.3089 |
| 0.0949 | 11.25 | 6500 | 0.4649 | 0.3160 |
| 0.0899 | 12.11 | 7000 | 0.4472 | 0.3065 |
| 0.0787 | 12.98 | 7500 | 0.4763 | 0.3128 |
| 0.0742 | 13.84 | 8000 | 0.4321 | 0.3034 |
| 0.067 | 14.71 | 8500 | 0.4562 | 0.3076 |
| 0.063 | 15.57 | 9000 | 0.4541 | 0.3102 |
| 0.0624 | 16.44 | 9500 | 0.5113 | 0.3040 |
| 0.0519 | 17.3 | 10000 | 0.4925 | 0.3008 |
| 0.0525 | 18.17 | 10500 | 0.4710 | 0.2987 |
| 0.046 | 19.03 | 11000 | 0.4781 | 0.2977 |
| 0.0455 | 19.9 | 11500 | 0.4572 | 0.2969 |
| 0.0394 | 20.76 | 12000 | 0.5256 | 0.2966 |
| 0.0373 | 21.63 | 12500 | 0.4723 | 0.2921 |
| 0.0375 | 22.49 | 13000 | 0.4640 | 0.2847 |
| 0.0334 | 23.36 | 13500 | 0.4740 | 0.2917 |
| 0.0304 | 24.22 | 14000 | 0.4817 | 0.2874 |
| 0.0291 | 25.09 | 14500 | 0.4722 | 0.2896 |
| 0.0247 | 25.95 | 15000 | 0.4765 | 0.2870 |
| 0.0223 | 26.82 | 15500 | 0.4728 | 0.2821 |
| 0.0223 | 27.68 | 16000 | 0.4690 | 0.2834 |
| 0.0207 | 28.55 | 16500 | 0.4706 | 0.2825 |
| 0.0186 | 29.41 | 17000 | 0.4772 | 0.2821 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingtweets/onlythesexiest_ | a02a8ac65532ce03c92e0b10bbd02495803ed3cb | 2022-07-29T13:28:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/onlythesexiest_ | 2 | null | transformers | 27,674 | ---
language: en
thumbnail: http://www.huggingtweets.com/onlythesexiest_/1659101307927/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1399411396140535812/UwTllUci_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Only The Sexiest 18+</div>
<div style="text-align: center; font-size: 14px;">@onlythesexiest_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Only The Sexiest 18+.
| Data | Only The Sexiest 18+ |
| --- | --- |
| Tweets downloaded | 2986 |
| Retweets | 2785 |
| Short tweets | 36 |
| Tweets kept | 165 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3oqup13u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @onlythesexiest_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ajjfffk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ajjfffk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/onlythesexiest_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phjhk/hklegal-xlm-r-large | d9e584d0a8cbaab80e58f5b9b82456a6506d2d94 | 2022-07-29T14:51:34.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phjhk | null | phjhk/hklegal-xlm-r-large | 2 | null | transformers | 27,675 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in English.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII datasets, which contain legal documents drawn from this database.
# Uses
The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.
```python
>>> from transformers import pipeline,AutoTokenizer,AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-large")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-large")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```
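Since the checkpoint is also tagged for fill-mask, a masked-language-modelling sketch is shown below; the full hub id `phjhk/hklegal-xlm-r-large` and the example sentence are assumptions, not taken from the original card:
```python
>>> from transformers import pipeline
>>> # XLM-RoBERTa checkpoints use <mask> as the mask token
>>> fill_mask = pipeline("fill-mask", model="phjhk/hklegal-xlm-r-large")
>>> fill_mask("The tenant shall pay the <mask> on the first day of each month.")
```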
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
``` |
ibm/re2g-qry-encoder-nq | bef754be7b733d854c7462dfb2afa5f6eab039b0 | 2022-07-29T16:13:36.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | ibm | null | ibm/re2g-qry-encoder-nq | 2 | null | transformers | 27,676 | ---
license: apache-2.0
---
|
ibm/re2g-ctx-encoder-nq | d7fae8fbc2e6c5e14d4cb17bc13e90eb3c339c0c | 2022-07-29T16:16:10.000Z | [
"pytorch",
"dpr",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-ctx-encoder-nq | 2 | null | transformers | 27,677 | ---
license: apache-2.0
---
|
simecek/DNADebertaK7b | 43d547bed0f009481692c74a614623664c38fa83 | 2022-07-30T08:22:07.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADebertaK7b | 2 | null | transformers | 27,678 | Entry not found |
ibm/re2g-qry-encoder-trex | ba4eecb1d041d2c575247ae7858d30f7d68d1561 | 2022-07-29T18:12:19.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | ibm | null | ibm/re2g-qry-encoder-trex | 2 | null | transformers | 27,679 | ---
license: apache-2.0
---
|
ibm/re2g-ctx-encoder-trex | 7b660665cb43621cb6ec8c0c7ad3e2ba40b1a9b8 | 2022-07-29T18:17:37.000Z | [
"pytorch",
"dpr",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-ctx-encoder-trex | 2 | null | transformers | 27,680 | ---
license: apache-2.0
---
|
ibm/re2g-generation-triviaqa | 7c1866da07c91d82c98972078080dac61b49c9df | 2022-07-29T18:21:30.000Z | [
"pytorch",
"rag",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-generation-triviaqa | 2 | null | transformers | 27,681 | ---
license: apache-2.0
---
|
ibm/re2g-reranker-triviaqa | da788d4e156f28eca100544c0aa178a4af6fdbb3 | 2022-07-29T18:24:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | ibm | null | ibm/re2g-reranker-triviaqa | 2 | null | transformers | 27,682 | ---
license: apache-2.0
---
|
ibm/re2g-qry-encoder-triviaqa | 24d8410c02dba0967dc486850e07252c8a2a762b | 2022-07-29T18:26:20.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | ibm | null | ibm/re2g-qry-encoder-triviaqa | 2 | null | transformers | 27,683 | ---
license: apache-2.0
---
|
ibm/re2g-generation-wow | f7049eef62e49c8a4ce3b61220fa38b285fcec14 | 2022-07-29T20:22:56.000Z | [
"pytorch",
"rag",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-generation-wow | 2 | null | transformers | 27,684 | ---
license: apache-2.0
---
|
platzi/platzi-vit-model-omar-espejel | ce0b733302dcce351585e6110024b21608b68528 | 2022-07-29T19:32:18.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | platzi | null | platzi/platzi-vit-model-omar-espejel | 2 | null | transformers | 27,685 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-omar-espejel
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-omar-espejel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0091
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1372 | 3.85 | 500 | 0.0091 | 1.0 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
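The usage sections above are left as "More information needed"; a minimal inference sketch, assuming the checkpoint is published as `platzi/platzi-vit-model-omar-espejel` and that a local photo of a bean leaf is available, could look like this:
```python
from transformers import pipeline
from PIL import Image
# Hypothetical hub id and image path; the model predicts the beans dataset labels
classifier = pipeline("image-classification", model="platzi/platzi-vit-model-omar-espejel")
image = Image.open("bean_leaf.jpg").convert("RGB")
print(classifier(image)) # list of {"label": ..., "score": ...} dicts
```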
|
asparius/combined-distil | 1616fbdc956894ab868fd7d218cd92c00613cb4a | 2022-07-29T20:29:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | asparius | null | asparius/combined-distil | 2 | null | transformers | 27,686 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: combined-distil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combined-distil
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9342
- Accuracy: 0.8566
- F1: 0.8615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
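No usage example is provided; a short sketch, assuming the model is available on the hub as `asparius/combined-distil` and that it performs Turkish text classification (the label names depend on the unspecified training data), might be:
```python
from transformers import pipeline
# Hypothetical hub id; labels come from the unknown fine-tuning dataset
classifier = pipeline("text-classification", model="asparius/combined-distil")
print(classifier("Bu film gerçekten harikaydı.")) # e.g. [{"label": ..., "score": ...}]
```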
|
romainlhardy/finetuned-ner | 2837a109804f87d6ba4ff3675f61b4231c0b4044 | 2022-07-29T20:28:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | romainlhardy | null | romainlhardy/finetuned-ner | 2 | null | transformers | 27,687 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9048086359175662
- name: Recall
type: recall
value: 0.9309996634129922
- name: F1
type: f1
value: 0.9177173191771731
- name: Accuracy
type: accuracy
value: 0.9816918820274327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0712
- Precision: 0.9048
- Recall: 0.9310
- F1: 0.9177
- Accuracy: 0.9817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0849 | 1.0 | 1756 | 0.0712 | 0.9048 | 0.9310 | 0.9177 | 0.9817 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
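A usage sketch for the fine-tuned model, assuming it is published as `romainlhardy/finetuned-ner`; the aggregation strategy is an illustrative choice, not taken from the card:
```python
from transformers import pipeline
# aggregation_strategy="simple" groups word pieces into whole entity spans
ner = pipeline("ner", model="romainlhardy/finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```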
|
ibm/re2g-reranker-wow | daacf639c3bcab9bfd395d5cfb58c46efb542391 | 2022-07-29T20:25:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | ibm | null | ibm/re2g-reranker-wow | 2 | null | transformers | 27,688 | ---
license: apache-2.0
---
|
ibm/re2g-qry-encoder-wow | 46e3f2e1cec76405380499475a93eb31c66a5909 | 2022-07-29T20:27:32.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | ibm | null | ibm/re2g-qry-encoder-wow | 2 | null | transformers | 27,689 | ---
license: apache-2.0
---
|
ibm/re2g-ctx-encoder-wow | 3853c9fe09d3c3334ac40a6a4a6d92a8d0200721 | 2022-07-29T20:29:04.000Z | [
"pytorch",
"dpr",
"transformers",
"license:apache-2.0"
] | null | false | ibm | null | ibm/re2g-ctx-encoder-wow | 2 | null | transformers | 27,690 | ---
license: apache-2.0
---
|
huggingtweets/zk_faye | b1914ace7cbe6a8f26a789bb9243bfe0955509d7 | 2022-07-29T22:03:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zk_faye | 2 | null | transformers | 27,691 | ---
language: en
thumbnail: http://www.huggingtweets.com/zk_faye/1659132206531/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544789753639436289/_nNZ-fpO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">❤️ ANGEL FAYE ❤️</div>
<div style="text-align: center; font-size: 14px;">@zk_faye</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ❤️ ANGEL FAYE ❤️.
| Data | ❤️ ANGEL FAYE ❤️ |
| --- | --- |
| Tweets downloaded | 422 |
| Retweets | 152 |
| Short tweets | 119 |
| Tweets kept | 151 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w29di03/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zk_faye's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1klggdh2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1klggdh2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zk_faye')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
relbert/relbert-roberta-large-conceptnet-hc-average-prompt-c-nce | 3afa44e39f34be0d9508df66234fdf39008e3a81 | 2022-07-29T23:52:23.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | relbert | null | relbert/relbert-roberta-large-conceptnet-hc-average-prompt-c-nce | 2 | null | transformers | 27,692 | Entry not found |
huggingtweets/dags | b1ddb60d6dfbe3f5370e8cc94ce6d2014918d2cb | 2022-07-30T01:32:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dags | 2 | null | transformers | 27,693 | ---
language: en
thumbnail: http://www.huggingtweets.com/dags/1659144733206/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/722815128501026817/IMWCRzEn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DAGs</div>
<div style="text-align: center; font-size: 14px;">@dags</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DAGs.
| Data | DAGs |
| --- | --- |
| Tweets downloaded | 3003 |
| Retweets | 31 |
| Short tweets | 158 |
| Tweets kept | 2814 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qyk6uzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dags's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dags')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
123www/test_model | 6881f4988ef8eaa1d33d9cd3ea39b748a0654ddc | 2022-01-10T06:01:01.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | 123www | null | 123www/test_model | 1 | null | transformers | 27,694 | Entry not found |
13048909972/wav2vec2-large-xls-r-300m-tr-colab | 40c06d2135b1ce044d045f0ee54e88e24f81b7d6 | 2021-12-09T10:24:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 13048909972 | null | 13048909972/wav2vec2-large-xls-r-300m-tr-colab | 1 | null | transformers | 27,695 | Entry not found |
13048909972/wav2vec2-large-xlsr-53_common_voice_20211211085606 | 79e73cd04aaeec5f8c9b0aa5590926cdec954e0f | 2021-12-11T02:05:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 13048909972 | null | 13048909972/wav2vec2-large-xlsr-53_common_voice_20211211085606 | 1 | null | transformers | 27,696 | Entry not found |
275Gameplay/test | 010e28a139a4b30eee211c45df77c18c8fcf52ed | 2021-12-17T15:17:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 275Gameplay | null | 275Gameplay/test | 1 | null | transformers | 27,697 | Entry not found |
2early4coffee/DialoGPT-small-deadpool | 10864634bcddcd66acf8981037ad486ae34ad1f2 | 2021-10-28T17:14:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | 2early4coffee | null | 2early4coffee/DialoGPT-small-deadpool | 1 | null | transformers | 27,698 | ---
tags:
- conversational
---
# Deadpool DialoGPT Model |
3koozy/gpt2-HxH | c23b81bc97590e0963cae8ad29a6c92a378c1be4 | 2021-08-25T11:31:49.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | false | 3koozy | null | 3koozy/gpt2-HxH | 1 | null | transformers | 27,699 | This is a fine-tuned GPT-2 text generation model trained on a Hunter x Hunter TV anime series dataset.\
You can find a link to the dataset used here: https://www.kaggle.com/bkoozy/hunter-x-hunter-subtitles
You can find a Colab notebook for fine-tuning the GPT-2 model here: https://github.com/3koozy/fine-tune-gpt2-HxH/ |