modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
masakhane/m2m100_418M_en_twi_rel_ft | f734d447f6dd198485ebda1b996b534db3cddcb0 | 2022-05-12T12:35:37.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_twi_rel_ft | 0 | null | transformers | 37,500 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel_ft | dcca35a20b3059ba3fdf47cba53876ce3d94370c | 2022-05-12T12:35:40.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_twi_en_rel_ft | 0 | null | transformers | 37,501 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_ft | 33c070f3066efbc312546fa49d388e9d3dda0f39 | 2022-05-12T13:36:21.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_zul_rel_ft | 0 | null | transformers | 37,502 | ---
license: afl-3.0
---
|
huggingtweets/alice_lbl-lotrbookquotes-theprincess_lbl | 8340978cf1d47a551622ea71b8ce605909e03525 | 2022-05-14T03:52:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/alice_lbl-lotrbookquotes-theprincess_lbl | 0 | null | transformers | 37,503 | ---
language: en
thumbnail: http://www.huggingtweets.com/alice_lbl-lotrbookquotes-theprincess_lbl/1652500340141/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1424546909104926720/g4pTa5BS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1047569624693465089/0yKYd-Xl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1424540771579928579/8moTa864_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alice in Wonderland & Looking-Glass (line by line) & Lord of the Rings quotes & The Princess Bride (line by line)</div>
<div style="text-align: center; font-size: 14px;">@alice_lbl-lotrbookquotes-theprincess_lbl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alice in Wonderland & Looking-Glass (line by line) & Lord of the Rings quotes & The Princess Bride (line by line).
| Data | Alice in Wonderland & Looking-Glass (line by line) | Lord of the Rings quotes | The Princess Bride (line by line) |
| --- | --- | --- | --- |
| Tweets downloaded | 3078 | 3250 | 1769 |
| Retweets | 0 | 0 | 0 |
| Short tweets | 38 | 0 | 204 |
| Tweets kept | 3040 | 3250 | 1565 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1q83n6h6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alice_lbl-lotrbookquotes-theprincess_lbl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1614bya5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1614bya5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/alice_lbl-lotrbookquotes-theprincess_lbl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Splend1dchan/wav2vec2-large-lv60_t5lephone | 643d0ef8b085bf671ff7778c3f6d12aff574aac8 | 2022-05-14T06:37:01.000Z | [
"pytorch"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone | 0 | null | null | 37,504 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_byt5-small | 11ea07b3e0dfdeab5db70c230e8b66a8d32c1096 | 2022-05-18T07:46:23.000Z | [
"pytorch"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_byt5-small | 0 | null | null | 37,505 | Entry not found |
lilitket/20220511-135859 | e9b9a46cb0b8724e33b51b1417c683bea03a3855 | 2022-05-11T11:44:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220511-135859 | 0 | null | transformers | 37,506 | Entry not found |
jinjinjin/CULR | a1d308b48caf97bf9622f1b7b32aa1c9b28f4965 | 2022-05-26T15:47:35.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinjinjin | null | jinjinjin/CULR | 0 | null | transformers | 37,507 | Entry not found |
victor123/clip-roberta-finetuned | d616c6dc4c7ede87a40b55f2cc0d5b720c0205a8 | 2022-05-11T11:57:08.000Z | [
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"dataset:ydshieh/coco_dataset_script",
"transformers",
"generated_from_trainer",
"model-index"
] | feature-extraction | false | victor123 | null | victor123/clip-roberta-finetuned | 0 | null | transformers | 37,508 | ---
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on the ydshieh/coco_dataset_script 2017 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
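As a rough, non-authoritative sketch (the card does not include the actual training command, so the argument names and the output directory below are assumptions), the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows:
```python
from transformers import TrainingArguments

# Hedged sketch only: restates the hyperparameter list above in TrainingArguments
# form; it is not the command actually used to train clip-roberta-finetuned.
training_args = TrainingArguments(
    output_dir="clip-roberta-finetuned",   # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=256,       # assumed to be the listed train_batch_size
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # The listed optimizer (Adam, betas=(0.9, 0.999), epsilon=1e-08) matches the
    # Trainer's default AdamW settings, so no extra argument is needed for it.
)
```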
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
hxl/test-mlm-wwm2 | 4980b6ea7509a290f4aad9e761af237f2d0a44d1 | 2022-05-11T13:43:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hxl | null | hxl/test-mlm-wwm2 | 0 | null | transformers | 37,509 | Basic research policy |
subhasisj/hi-TAPT-MLM-MiniLM | 1d39623bff01610a5cfaf401a33d74d056f9fd1b | 2022-05-11T16:44:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | subhasisj | null | subhasisj/hi-TAPT-MLM-MiniLM | 0 | null | transformers | 37,510 | ---
tags:
- generated_from_trainer
model-index:
- name: hi-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hi-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
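No usage snippet is included in the card. As a hedged example (the repo is tagged `fill-mask` and BERT-based, so `[MASK]` is assumed to be the mask token, and the Hindi sentence is only an illustration), the model can be exercised with the fill-mask pipeline:
```python
from transformers import pipeline

# Hedged example, not taken from the original card; assumes a BERT-style [MASK] token.
fill = pipeline("fill-mask", model="subhasisj/hi-TAPT-MLM-MiniLM")
print(fill("नई दिल्ली भारत की [MASK] है।"))  # "New Delhi is the [MASK] of India."
```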
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
sileod/distractors_prediction | 9801ae7d4622e6fbacfb4875b86dc3931e40ddf0 | 2022-05-24T17:03:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | sileod | null | sileod/distractors_prediction | 0 | null | sentence-transformers | 37,511 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# sileod/distractors_prediction
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sileod/distractors_prediction')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sileod/distractors_prediction)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 773 with parameters:
```
{'batch_size': 96, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method (a hedged, runnable sketch of the corresponding call follows the parameter block):
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
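The parameters above can be mapped onto a `SentenceTransformer.fit()` call. The sketch below is a minimal illustration under stated assumptions: the training pairs and the base model (`roberta-base` stands in for the undocumented starting checkpoint) are placeholders, and the evaluator is omitted, since none of these are described in the card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder training pairs: the real data behind this model is not documented in the card.
train_examples = [
    InputExample(texts=["What is the capital of France?", "Paris"]),
    InputExample(texts=["Who wrote Hamlet?", "William Shakespeare"]),
]

model = SentenceTransformer("roberta-base")  # stand-in for the undocumented base checkpoint
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=96)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    evaluation_steps=500,       # only takes effect when an evaluator is supplied
    max_grad_norm=1,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    scheduler="WarmupLinear",
)
```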
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Asym(
(DOC-0): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed-earlystopping | f6f8f09dee59259489341292ff4bc7f975b3017b | 2022-05-11T23:46:14.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed-earlystopping | 0 | null | transformers | 37,512 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-pubmed-earlystopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-pubmed-earlystopping
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8596
- Rouge1: 53.4491
- Rouge2: 35.0041
- Rougel: 37.2742
- Rougelsum: 50.9867
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 0.31 | 125 | 1.3772 | 50.6084 | 30.8075 | 32.6113 | 47.883 | 142.0 |
| No log | 0.63 | 250 | 1.2423 | 52.1758 | 31.6326 | 32.9448 | 49.8089 | 141.6296 |
| No log | 0.94 | 375 | 1.1223 | 52.3494 | 32.3508 | 35.3638 | 49.6019 | 142.0 |
| 1.3557 | 1.26 | 500 | 1.1004 | 51.8935 | 32.8506 | 35.521 | 49.6249 | 142.0 |
| 1.3557 | 1.57 | 625 | 1.0600 | 50.8085 | 31.0397 | 34.2021 | 48.2264 | 141.5741 |
| 1.3557 | 1.88 | 750 | 0.9834 | 53.0701 | 34.0699 | 36.4029 | 51.043 | 142.0 |
| 1.3557 | 2.2 | 875 | 0.9554 | 53.4385 | 34.2976 | 36.8142 | 51.1262 | 141.9444 |
| 0.868 | 2.51 | 1000 | 0.9256 | 52.2123 | 32.7568 | 34.5883 | 49.8566 | 142.0 |
| 0.868 | 2.83 | 1125 | 0.8944 | 53.8062 | 34.6687 | 36.9645 | 51.5162 | 142.0 |
| 0.868 | 3.14 | 1250 | 0.9290 | 53.1356 | 34.1301 | 37.7713 | 50.762 | 141.9074 |
| 0.868 | 3.45 | 1375 | 0.9017 | 53.4455 | 35.0572 | 37.3033 | 50.9773 | 142.0 |
| 0.6252 | 3.77 | 1500 | 0.8519 | 53.9228 | 35.5575 | 38.9119 | 51.5202 | 142.0 |
| 0.6252 | 4.08 | 1625 | 0.8991 | 54.4223 | 36.3072 | 38.5771 | 51.9874 | 141.9074 |
| 0.6252 | 4.4 | 1750 | 0.8857 | 53.4105 | 35.348 | 37.5814 | 50.8842 | 142.0 |
| 0.6252 | 4.71 | 1875 | 0.8596 | 53.4491 | 35.0041 | 37.2742 | 50.9867 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
frmccann/CLSRIL-23 | a301712c7cbdef70c167fe85529e68fb1cfc615b | 2022-05-12T18:28:16.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | frmccann | null | frmccann/CLSRIL-23 | 0 | null | transformers | 37,513 | Entry not found |
ynhi/t5vi-finetuned-en-to-vi | 2a1711c0215118a5c9c70f6db1edc39f1984b076 | 2022-05-11T22:10:26.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:mt_eng_vietnamese",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ynhi | null | ynhi/t5vi-finetuned-en-to-vi | 0 | null | transformers | 37,514 | ---
tags:
- generated_from_trainer
datasets:
- mt_eng_vietnamese
metrics:
- bleu
model-index:
- name: t5vi-finetuned-en-to-vi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: mt_eng_vietnamese
type: mt_eng_vietnamese
args: iwslt2015-en-vi
metrics:
- name: Bleu
type: bleu
value: 13.5652
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5vi-finetuned-en-to-vi
This model is a fine-tuned version of [imthanhlv/t5vi](https://huggingface.co/imthanhlv/t5vi) on the mt_eng_vietnamese dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3823
- Bleu: 13.5652
- Gen Len: 17.3578
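The card does not include a usage example. As a hedged illustration (the task name follows this repo's `text2text-generation` tag, and whether the model expects a task prefix, as some T5 variants do, is not documented here), the model can be tried with a pipeline:
```python
from transformers import pipeline

# Hedged example, not taken from the original card.
translator = pipeline("text2text-generation", model="ynhi/t5vi-finetuned-en-to-vi")
print(translator("I love natural language processing.", max_length=64))
```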
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8037 | 1.0 | 6666 | 1.5935 | 11.0819 | 17.3467 |
| 1.6216 | 2.0 | 13332 | 1.4639 | 12.4698 | 17.3515 |
| 1.5104 | 3.0 | 19998 | 1.4139 | 13.2283 | 17.4058 |
| 1.4483 | 4.0 | 26664 | 1.3904 | 13.4698 | 17.3562 |
| 1.4097 | 5.0 | 33330 | 1.3823 | 13.5652 | 17.3578 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
mcurmei/single_label_N_max_long_training | 10c1f47ab3a88590148523ef384c018fbefd4fdc | 2022-05-11T18:10:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | mcurmei | null | mcurmei/single_label_N_max_long_training | 0 | null | transformers | 37,515 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: single_label_N_max_long_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# single_label_N_max_long_training
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8288
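The card stops at the loss figure. As a hedged usage example (not part of the original card), a DistilBERT model fine-tuned on SQuAD is normally queried through the question-answering pipeline:
```python
from transformers import pipeline

# Hedged example, not taken from the original card.
qa = pipeline("question-answering", model="mcurmei/single_label_N_max_long_training")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="single_label_N_max_long_training is a DistilBERT variant fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```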
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0568 | 1.0 | 674 | 1.9993 |
| 1.6024 | 2.0 | 1348 | 1.8497 |
| 1.0196 | 3.0 | 2022 | 1.9178 |
| 0.7622 | 4.0 | 2696 | 2.0412 |
| 0.6066 | 5.0 | 3370 | 2.2523 |
| 0.4136 | 6.0 | 4044 | 2.3845 |
| 0.3113 | 7.0 | 4718 | 2.5712 |
| 0.2777 | 8.0 | 5392 | 2.6790 |
| 0.208 | 9.0 | 6066 | 2.7464 |
| 0.1749 | 10.0 | 6740 | 2.8288 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
subhasisj/ar-finetuned-squad-qa-minilmv2-32 | 6816849c6acaa6dbd2ebc1d15019fe1c6186bcf0 | 2022-05-11T18:14:20.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/ar-finetuned-squad-qa-minilmv2-32 | 0 | null | transformers | 37,516 | Entry not found |
huxxx657/roberta-base-finetuned-deletion-squad-10 | dc23a5b43d2e9ae1bfc8ea79cd92eb7fc5b8eaf5 | 2022-05-11T20:03:05.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/roberta-base-finetuned-deletion-squad-10 | 0 | null | transformers | 37,517 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-deletion-squad-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-deletion-squad-10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0269 | 1.0 | 5533 | 1.0246 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huxxx657/roberta-base-finetuned-deletion-squad-15 | 34b52ee5c004ef39a5b6d6eac38e22dadfb5bce5 | 2022-05-11T21:15:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/roberta-base-finetuned-deletion-squad-15 | 0 | null | transformers | 37,518 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-deletion-squad-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-deletion-squad-15
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1127 | 1.0 | 5531 | 1.1057 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping | 712c24645113240d3ef9df3b8d69f5f5a230f0d3 | 2022-05-13T21:16:27.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping | 0 | null | transformers | 37,519 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8793
- Rouge1: 56.2055
- Rouge2: 41.9231
- Rougel: 45.0616
- Rougelsum: 54.6643
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 0.31 | 125 | 1.2057 | 50.9339 | 30.6777 | 32.6396 | 47.9592 | 141.3519 |
| No log | 0.63 | 250 | 1.0933 | 52.0728 | 31.2361 | 32.8214 | 48.9776 | 141.9815 |
| No log | 0.94 | 375 | 0.9685 | 51.6847 | 32.1578 | 34.1933 | 48.8808 | 141.5556 |
| 1.1594 | 1.26 | 500 | 0.9725 | 50.5131 | 30.6043 | 32.1861 | 47.4346 | 142.0 |
| 1.1594 | 1.57 | 625 | 0.9342 | 52.228 | 32.2073 | 33.797 | 49.2395 | 142.0 |
| 1.1594 | 1.88 | 750 | 0.8715 | 52.2 | 33.6602 | 36.1303 | 49.7138 | 141.6481 |
| 1.1594 | 2.2 | 875 | 0.8334 | 53.116 | 33.9871 | 35.9641 | 50.7658 | 141.8889 |
| 0.6845 | 2.51 | 1000 | 0.8241 | 52.2612 | 32.8025 | 35.27 | 49.5694 | 142.0 |
| 0.6845 | 2.83 | 1125 | 0.7986 | 54.1803 | 35.0019 | 37.4582 | 51.4577 | 142.0 |
| 0.6845 | 3.14 | 1250 | 0.8532 | 52.1328 | 32.6086 | 34.7455 | 49.6219 | 141.7037 |
| 0.6845 | 3.45 | 1375 | 0.8319 | 51.9614 | 32.8544 | 35.3269 | 49.3279 | 141.7593 |
| 0.4488 | 3.77 | 1500 | 0.8033 | 53.1404 | 34.6086 | 37.5482 | 50.7414 | 142.0 |
| 0.4488 | 4.08 | 1625 | 0.8322 | 53.1736 | 34.8662 | 37.7514 | 51.0601 | 142.0 |
| 0.4488 | 4.4 | 1750 | 0.7985 | 51.8251 | 32.9457 | 36.4164 | 49.55 | 142.0 |
| 0.4488 | 4.71 | 1875 | 0.8049 | 54.3423 | 36.6293 | 39.1316 | 52.2706 | 141.8148 |
| 0.3017 | 5.03 | 2000 | 0.8148 | 53.0698 | 35.2569 | 38.406 | 50.9346 | 141.7778 |
| 0.3017 | 5.34 | 2125 | 0.8153 | 53.4479 | 35.1525 | 37.8071 | 51.3731 | 141.0741 |
| 0.3017 | 5.65 | 2250 | 0.8009 | 52.5517 | 34.8287 | 37.999 | 50.2889 | 141.6111 |
| 0.3017 | 5.97 | 2375 | 0.7509 | 54.2725 | 37.4164 | 40.516 | 52.1379 | 142.0 |
| 0.2052 | 6.28 | 2500 | 0.8019 | 54.622 | 36.4776 | 39.9306 | 52.5069 | 142.0 |
| 0.2052 | 6.6 | 2625 | 0.8176 | 55.4796 | 38.4502 | 41.5523 | 53.5211 | 142.0 |
| 0.2052 | 6.91 | 2750 | 0.7956 | 55.4906 | 37.9064 | 40.845 | 53.107 | 141.9815 |
| 0.2052 | 7.22 | 2875 | 0.7966 | 54.5177 | 37.3399 | 40.7678 | 52.4241 | 142.0 |
| 0.1465 | 7.54 | 3000 | 0.8311 | 54.3473 | 37.0659 | 40.2507 | 52.372 | 142.0 |
| 0.1465 | 7.85 | 3125 | 0.8227 | 53.9245 | 36.4695 | 39.1205 | 51.9416 | 141.8889 |
| 0.1465 | 8.17 | 3250 | 0.7947 | 54.766 | 38.4275 | 41.2293 | 52.9075 | 142.0 |
| 0.1465 | 8.48 | 3375 | 0.7954 | 54.5305 | 37.6934 | 40.6804 | 52.5884 | 141.9444 |
| 0.115 | 8.79 | 3500 | 0.8433 | 54.7962 | 37.9373 | 41.3906 | 52.3778 | 142.0 |
| 0.115 | 9.11 | 3625 | 0.8416 | 56.59 | 41.2271 | 44.4207 | 54.7199 | 142.0 |
| 0.115 | 9.42 | 3750 | 0.8164 | 55.1903 | 39.0588 | 41.4908 | 53.4897 | 142.0 |
| 0.115 | 9.74 | 3875 | 0.8363 | 55.2894 | 39.3598 | 42.1138 | 53.831 | 141.8889 |
| 0.0912 | 10.05 | 4000 | 0.8850 | 55.7705 | 40.4924 | 43.1048 | 54.254 | 142.0 |
| 0.0912 | 10.36 | 4125 | 0.8268 | 56.1664 | 40.641 | 42.798 | 54.0001 | 141.9259 |
| 0.0912 | 10.68 | 4250 | 0.8564 | 55.4701 | 39.4949 | 42.2559 | 53.4486 | 141.8889 |
| 0.0912 | 10.99 | 4375 | 0.8557 | 56.0849 | 41.2861 | 45.8277 | 54.5999 | 141.6667 |
| 0.0707 | 11.31 | 4500 | 0.8432 | 54.9496 | 39.3006 | 42.0025 | 53.3854 | 142.0 |
| 0.0707 | 11.62 | 4625 | 0.8377 | 54.2438 | 37.6959 | 40.4637 | 52.3088 | 142.0 |
| 0.0707 | 11.93 | 4750 | 0.8794 | 55.9488 | 40.5401 | 43.7347 | 54.1282 | 142.0 |
| 0.0707 | 12.25 | 4875 | 0.8563 | 57.8762 | 43.366 | 46.6757 | 56.6985 | 142.0 |
| 0.0604 | 12.56 | 5000 | 0.8835 | 54.8926 | 39.3755 | 42.384 | 53.2687 | 141.6481 |
| 0.0604 | 12.88 | 5125 | 0.8570 | 55.6656 | 39.849 | 42.1455 | 54.352 | 142.0 |
| 0.0604 | 13.19 | 5250 | 0.8539 | 57.1549 | 41.901 | 45.153 | 55.213 | 142.0 |
| 0.0604 | 13.51 | 5375 | 0.8847 | 56.3279 | 40.9269 | 43.416 | 54.7242 | 142.0 |
| 0.051 | 13.82 | 5500 | 0.8795 | 56.8982 | 42.3333 | 45.2669 | 55.1034 | 142.0 |
| 0.051 | 14.13 | 5625 | 0.8751 | 55.3173 | 40.2853 | 43.2479 | 53.7236 | 142.0 |
| 0.051 | 14.45 | 5750 | 0.8799 | 56.1678 | 41.0862 | 43.8581 | 54.6316 | 142.0 |
| 0.051 | 14.76 | 5875 | 0.8678 | 57.3539 | 43.0473 | 44.8511 | 55.6474 | 142.0 |
| 0.0467 | 15.08 | 6000 | 0.8945 | 56.1939 | 41.985 | 45.0266 | 54.8139 | 142.0 |
| 0.0467 | 15.39 | 6125 | 0.9245 | 56.2071 | 41.5265 | 44.3228 | 54.5042 | 141.4074 |
| 0.0467 | 15.7 | 6250 | 0.8793 | 56.2055 | 41.9231 | 45.0616 | 54.6643 | 142.0 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-4_H-768_A-12_wiki103 | b50af847d5c05b2e9ad21b6afd3161e146c47458 | 2022-05-11T22:14:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-768_A-12_wiki103 | 0 | null | transformers | 37,520 | Entry not found |
huggingtweets/nft_redlist | d4f123f82007d5c083b9b6d5d05708a84bd0c9ab | 2022-05-12T00:43:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nft_redlist | 0 | null | transformers | 37,521 | ---
language: en
thumbnail: http://www.huggingtweets.com/nft_redlist/1652316177890/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487841586541215745/J1Y65sDN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TON Animals Red List</div>
<div style="text-align: center; font-size: 14px;">@nft_redlist</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TON Animals Red List.
| Data | TON Animals Red List |
| --- | --- |
| Tweets downloaded | 48 |
| Retweets | 1 |
| Short tweets | 1 |
| Tweets kept | 46 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38vs0taq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nft_redlist's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sshkc45) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sshkc45/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/nft_redlist')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
negfir/bert_uncased_L-12_H-512_A-8_wiki103 | 1bcecfff581611bbc9d0674300f905a1f0c1f1ce | 2022-05-12T01:26:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-512_A-8_wiki103 | 0 | null | transformers | 37,522 | Entry not found |
s1c5000/s1c_roberta_large_mrc | 972c2711e6cf796e5efd3653f839c71e43bc7965 | 2022-05-12T02:00:54.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | s1c5000 | null | s1c5000/s1c_roberta_large_mrc | 0 | null | transformers | 37,523 | ---
license: apache-2.0
---
|
negfir/bert_uncased_L-4_H-512_A-8_wiki103 | 4a2ba9f5ce1c50a700f408f9014d704eee7effc7 | 2022-05-12T03:19:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-512_A-8_wiki103 | 0 | null | transformers | 37,524 | Entry not found |
hxl/split_test_model | dc42f7b0a8219bdbfa57893cc75d4b05ad2b91e9 | 2022-05-12T03:35:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hxl | null | hxl/split_test_model | 0 | null | transformers | 37,525 | Entry not found |
zoha/wav2vec2-xlsr-persian | a05ebf85f6cd760096447ccd3760cc31d0b40474 | 2022-07-03T09:41:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | zoha | null | zoha/wav2vec2-xlsr-persian | 0 | null | transformers | 37,526 | Entry not found |
negfir/bert_uncased_L-4_H-256_A-4_wiki103 | d63e95fe8d23eb582ccb5473184182de943aebd2 | 2022-05-12T06:47:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-256_A-4_wiki103 | 0 | null | transformers | 37,527 | Entry not found |
yogeshchandrasekharuni/t5-small-finetuned-xsum | 806627cb27da1c29d4dfa02cbc4a6d1a2ba54e72 | 2022-05-12T07:34:14.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yogeshchandrasekharuni | null | yogeshchandrasekharuni/t5-small-finetuned-xsum | 0 | null | transformers | 37,528 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 16 | 2.3636 | 60.9559 | 47.1972 | 58.7384 | 59.5004 | 18.082 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-12_H-256_A-4_wiki103 | 5932bbbfc1c5ba4992eb46d5a92105093560c5fa | 2022-05-12T07:45:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-256_A-4_wiki103 | 0 | null | transformers | 37,529 | Entry not found |
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping | b56d23b4765817e8cce727efd371a265b4ee64f0 | 2022-05-12T14:00:24.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping | 0 | null | transformers | 37,530 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-earlystopping
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8347
- Rouge1: 53.9049
- Rouge2: 35.5953
- Rougel: 39.788
- Rougelsum: 51.4101
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 0.31 | 125 | 1.0240 | 52.5632 | 32.977 | 34.672 | 49.9905 | 142.0 |
| No log | 0.63 | 250 | 1.0056 | 52.5508 | 32.4826 | 34.6851 | 49.835 | 141.6852 |
| No log | 0.94 | 375 | 0.8609 | 53.0475 | 32.9384 | 35.3322 | 50.272 | 141.6481 |
| 0.8255 | 1.26 | 500 | 0.9022 | 52.2493 | 31.5622 | 33.389 | 49.6612 | 142.0 |
| 0.8255 | 1.57 | 625 | 0.8706 | 53.3568 | 33.2533 | 35.7531 | 50.4568 | 141.8889 |
| 0.8255 | 1.88 | 750 | 0.8186 | 52.7375 | 33.4439 | 37.1094 | 50.5323 | 142.0 |
| 0.8255 | 2.2 | 875 | 0.8041 | 53.4992 | 34.6929 | 37.9614 | 51.091 | 142.0 |
| 0.5295 | 2.51 | 1000 | 0.7907 | 52.6185 | 33.8053 | 37.1725 | 50.4881 | 142.0 |
| 0.5295 | 2.83 | 1125 | 0.7740 | 52.7107 | 33.1023 | 36.0865 | 50.0365 | 142.0 |
| 0.5295 | 3.14 | 1250 | 0.8200 | 52.5607 | 33.7948 | 37.2312 | 50.3345 | 142.0 |
| 0.5295 | 3.45 | 1375 | 0.8188 | 53.9233 | 34.446 | 36.7566 | 51.3135 | 142.0 |
| 0.351 | 3.77 | 1500 | 0.8071 | 53.9096 | 35.5977 | 38.6832 | 51.4986 | 142.0 |
| 0.351 | 4.08 | 1625 | 0.8347 | 53.9049 | 35.5953 | 39.788 | 51.4101 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-4_H-128_A-2_wiki103 | fda5c9a7c70d4145ba8190cd1e1939ca5027c7cd | 2022-05-12T09:36:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-128_A-2_wiki103 | 0 | null | transformers | 37,531 | Entry not found |
negfir/bert_uncased_L-12_H-128_A-2_wiki103 | 75852dd4bd7ba37d4133067fea547f8ecb212f1a | 2022-05-12T13:09:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-128_A-2_wiki103 | 0 | null | transformers | 37,532 | Entry not found |
subhasisj/hi-finetuned-squad-qa-minilmv2-32 | 511544040a14a8901b29ec17aa79f20f320182db | 2022-05-12T16:09:36.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/hi-finetuned-squad-qa-minilmv2-32 | 0 | null | transformers | 37,533 | Entry not found |
mybot/DialoGPT-medium-harrypotter | 1f19ef956f7325fed91b1f36fc0f75d95255a384 | 2022-05-12T17:14:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mybot | null | mybot/DialoGPT-medium-harrypotter | 0 | null | transformers | 37,534 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
vives/distilbert-base-uncased-finetuned-imdb-accelerate | 89e822051a6dfef5d94885877993a1e32ad493c8 | 2022-05-12T17:18:17.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vives | null | vives/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 37,535 | Entry not found |
deepparag/gpt-j-6B-longer-generation | 3a4e20591c2bb653e01005e889086270f80ff1f0 | 2022-05-12T17:33:59.000Z | [
"en",
"dataset:The Pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"pytorch",
"causal-lm",
"license:apache-2.0"
] | null | false | deepparag | null | deepparag/gpt-j-6B-longer-generation | 0 | null | null | 37,536 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
# This model is a clone of https://huggingface.co/EleutherAI/gpt-j-6B in which I have simply increased the max response size.
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
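For readers who prefer code to tables, the figures above can be restated with the `transformers` `GPTJConfig` class. This is an illustration only, not the `config.json` shipped with the repository:
```python
from transformers import GPTJConfig

# Restates the architecture table above; illustrative, not the repo's actual config file.
config = GPTJConfig(
    vocab_size=50400,   # embedding matrix size; the GPT-2 tokenizer uses only 50257 entries
    n_positions=2048,   # n_ctx
    n_embd=4096,        # d_model; split into 16 heads of dimension 256
    n_layer=28,
    n_head=16,
    rotary_dim=64,      # RoPE is applied to 64 dimensions of each head
    n_inner=16384,      # feedforward dimension d_ff
)
```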
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
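Using the `tokenizer` and `model` loaded above, a follow-up generation call might look like the snippet below. This is a hedged example rather than part of the original card: the exact generation settings this fork changes ("max response size") are not documented, so `max_new_tokens` is an illustrative value.
```python
# Hedged usage example (not from the original card); max_new_tokens is illustrative.
prompt = "In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, do_sample=True, temperature=0.9, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```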
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who have helped out one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend. |
subhasisj/de-TAPT-MLM-MiniLM | 75a00d571ed16f2591d76c4f12a1ceb26629f566 | 2022-05-12T20:03:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | subhasisj | null | subhasisj/de-TAPT-MLM-MiniLM | 0 | null | transformers | 37,537 | ---
tags:
- generated_from_trainer
model-index:
- name: de-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# de-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ruselkomp/tests-finetuned-squad-test-bert | d660daaec838da395e329eb00721ac5703ea719a | 2022-05-13T07:11:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/tests-finetuned-squad-test-bert | 0 | null | transformers | 37,538 | Entry not found |
huxxx657/distilbert-base-uncased-finetuned-jumbling-squad-15 | 06169fe73669feb8f801e4c37cdf9a1f3800e6d5 | 2022-05-13T01:01:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/distilbert-base-uncased-finetuned-jumbling-squad-15 | 0 | null | transformers | 37,539 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-jumbling-squad-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-jumbling-squad-15
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3629 | 1.0 | 5532 | 1.3345 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
tomhavy/t5-small-finetuned-spider | 737f58ee376b3681e9a136a78224e5433876fa60 | 2022-05-13T03:55:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | tomhavy | null | tomhavy/t5-small-finetuned-spider | 0 | null | transformers | 37,540 | ---
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-spider
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-spider
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1914
- Rouge2 Precision: 0.6349
- Rouge2 Recall: 0.3964
- Rouge2 Fmeasure: 0.4619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.2912 | 1.0 | 1120 | 0.2631 | 0.5653 | 0.3537 | 0.4118 |
| 0.2967 | 2.0 | 2240 | 0.2465 | 0.5758 | 0.363 | 0.4209 |
| 0.3106 | 3.0 | 3360 | 0.2372 | 0.5858 | 0.367 | 0.427 |
| 0.2993 | 4.0 | 4480 | 0.2340 | 0.5995 | 0.3791 | 0.4403 |
| 0.2702 | 5.0 | 5600 | 0.2204 | 0.6035 | 0.3786 | 0.4401 |
| 0.2624 | 6.0 | 6720 | 0.2159 | 0.6094 | 0.3807 | 0.4435 |
| 0.2463 | 7.0 | 7840 | 0.2121 | 0.6207 | 0.3911 | 0.4544 |
| 0.2427 | 8.0 | 8960 | 0.2053 | 0.6198 | 0.3886 | 0.452 |
| 0.2336 | 9.0 | 10080 | 0.2014 | 0.6217 | 0.3871 | 0.4518 |
| 0.2256 | 10.0 | 11200 | 0.1980 | 0.6298 | 0.394 | 0.4589 |
| 0.2212 | 11.0 | 12320 | 0.1960 | 0.6304 | 0.3936 | 0.4589 |
| 0.2141 | 12.0 | 13440 | 0.1962 | 0.63 | 0.3939 | 0.4586 |
| 0.2069 | 13.0 | 14560 | 0.1921 | 0.6328 | 0.3942 | 0.4594 |
| 0.2096 | 14.0 | 15680 | 0.1915 | 0.632 | 0.3953 | 0.46 |
| 0.2115 | 15.0 | 16800 | 0.1914 | 0.6349 | 0.3964 | 0.4619 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
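## How to use
A minimal generation sketch with the `text2text-generation` pipeline. The card does not state the expected input format, so the prompt below (a natural-language question plus a flattened schema, as commonly done for Spider-style text-to-SQL) is an assumption:
```python
from transformers import pipeline
text2sql = pipeline("text2text-generation", model="tomhavy/t5-small-finetuned-spider")
# Hypothetical prompt layout: question followed by a serialized table schema.
prompt = ("translate to SQL: How many singers are there? "
          "| schema: singer(singer_id, name, country, age)")
print(text2sql(prompt, max_length=128)[0]["generated_text"])
```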
|
hauver/autotrain-luyingqu-test-861227400 | 2b4ccb0517fee877d9b377b1c0c4f79f967cefb6 | 2022-05-13T04:56:25.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | hauver | null | hauver/autotrain-luyingqu-test-861227400 | 0 | null | transformers | 37,541 | Entry not found |
shenyi/gpt2-wikitext2 | b97f09491205fff54524e59920367c6b588615bb | 2022-05-13T07:21:52.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | shenyi | null | shenyi/gpt2-wikitext2 | 0 | null | transformers | 37,542 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 2.2.1
- Tokenizers 0.12.1
|
shenyi/bert-base-cased-wikitext2 | 3793e46c9cc26758b6a983d9864fbc4ff98a37a3 | 2022-05-13T07:53:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | shenyi | null | shenyi/bert-base-cased-wikitext2 | 0 | null | transformers | 37,543 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 391 | 7.2240 |
| 7.6715 | 2.0 | 782 | 7.0516 |
| 7.0737 | 3.0 | 1173 | 7.0823 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 2.2.1
- Tokenizers 0.12.1
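## How to use
A minimal sketch with the `fill-mask` pipeline; the masked sentence is illustrative only (BERT-cased checkpoints use the `[MASK]` token):
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="shenyi/bert-base-cased-wikitext2")
# Prints the top candidate tokens for the [MASK] position with their scores.
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```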
|
vocab-transformers/entity-distilbert-base-uncased | 2472d75cd9073c6c4594b849bbc4bf5591e51195 | 2022-05-13T11:34:43.000Z | [
"pytorch"
] | null | false | vocab-transformers | null | vocab-transformers/entity-distilbert-base-uncased | 0 | null | null | 37,544 | Entry not found |
manirai91/mbert-conll2003 | 988ac654e3035e0443e9f9db71a8d0216f6748dc | 2022-05-13T11:10:53.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | manirai91 | null | manirai91/mbert-conll2003 | 0 | null | transformers | 37,545 | Entry not found |
ruselkomp/tests-finetuned-squad-test-bert-2 | 7737ef9ad6b4ad0cd0c624c7570f8c15a10291f5 | 2022-05-13T19:44:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/tests-finetuned-squad-test-bert-2 | 0 | null | transformers | 37,546 | Entry not found |
subhasisj/vi-finetuned-squad-qa-minilmv2-8 | 47bd3c8759487680ddec9acdc0bc5011cd8b1cf2 | 2022-05-13T17:04:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/vi-finetuned-squad-qa-minilmv2-8 | 0 | null | transformers | 37,547 | ---
tags:
- generated_from_trainer
model-index:
- name: vi-finetuned-squad-qa-minilmv2-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-finetuned-squad-qa-minilmv2-8
This model is a fine-tuned version of [subhasisj/vi-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/vi-TAPT-MLM-MiniLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1669 | 1.0 | 1424 | 1.4979 |
| 1.2377 | 2.0 | 2848 | 1.3259 |
| 1.0536 | 3.0 | 4272 | 1.3133 |
| 0.9568 | 4.0 | 5696 | 1.3103 |
| 0.8859 | 5.0 | 7120 | 1.3335 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.11.0
|
subhasisj/en-TAPT-MLM-MiniLM | 4cb8c511637548c33b17ffe9e4c367f521084b65 | 2022-05-13T19:35:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | subhasisj | null | subhasisj/en-TAPT-MLM-MiniLM | 0 | null | transformers | 37,548 | ---
tags:
- generated_from_trainer
model-index:
- name: en-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
subhasisj/en-finetuned-squad-qa-minilmv2-32 | c14c66ff4910a6837866bd733a44386e17152cf7 | 2022-05-13T21:50:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/en-finetuned-squad-qa-minilmv2-32 | 0 | null | transformers | 37,549 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: en-finetuned-squad-qa-minilmv2-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-finetuned-squad-qa-minilmv2-32
This model is a fine-tuned version of [subhasisj/en-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/en-TAPT-MLM-MiniLM) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 350 | 2.1514 |
| 2.9587 | 2.0 | 700 | 1.4819 |
| 1.3873 | 3.0 | 1050 | 1.2724 |
| 1.3873 | 4.0 | 1400 | 1.2039 |
| 1.0438 | 5.0 | 1750 | 1.1955 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
pidanr/bert-finetuned-race | 2f9d4e11e6b5d9fb2cec7767ec9f84ee8bf04e93 | 2022-05-14T22:30:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | pidanr | null | pidanr/bert-finetuned-race | 0 | null | transformers | 37,550 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-finetuned-race
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-race
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3863
- Accuracy: 0.2982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3936 | 0.25 | 3100 | 1.3863 | 0.2418 |
| 1.3768 | 0.51 | 6200 | 1.3863 | 0.2483 |
| 1.3954 | 0.76 | 9300 | 1.3863 | 0.2982 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
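## How to use
Multiple-choice checkpoints are not served by a standard pipeline task, so a minimal manual sketch is shown below. The passage, question, and options are illustrative, and concatenating the question to the article is an assumption about the input format:
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer
model_name = "pidanr/bert-finetuned-race"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)
context = "The library opens at 9 a.m. and closes at 5 p.m. on weekdays."
question = "When does the library close on weekdays?"
options = ["At 9 a.m.", "At noon", "At 5 p.m.", "It never closes"]
# Pair the (article + question) text with every candidate option.
encoded = tokenizer([context + " " + question] * len(options), options,
                    padding=True, truncation=True, return_tensors="pt")
# AutoModelForMultipleChoice expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(options[int(logits.argmax(dim=-1))])
```
Note that the reported accuracy is close to chance level for a four-option task, so predictions from this checkpoint should be treated accordingly.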
|
miazhao/deberta_base_model_s3_ccnet_airbnb_dat_continue2 | 955836f96679de8732c0006dea680cf24126d1ec | 2022-05-18T18:55:31.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | miazhao | null | miazhao/deberta_base_model_s3_ccnet_airbnb_dat_continue2 | 0 | null | transformers | 37,551 | Entry not found |
ruselkomp/deepavlov-framebank-10size | 699e3e719934866f3866f3b1b27421610b5efcd9 | 2022-05-14T03:48:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/deepavlov-framebank-10size | 0 | null | transformers | 37,552 | ---
tags:
- generated_from_trainer
model-index:
- name: deepavlov-test-bert-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepavlov-test-bert-2
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0314 | 1.0 | 4523 | 1.0242 |
| 0.739 | 2.0 | 9046 | 1.0326 |
| 0.5207 | 3.0 | 13569 | 1.1607 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_bs256 | cf948787cdeb2177c85086aabfab990acb2eb95e | 2022-05-16T04:07:34.000Z | [
"pytorch"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_bs256 | 0 | null | null | 37,553 | Entry not found |
ruselkomp/sber-full-test | e051a3dad5de72fbd41ba8ff8ff2c45c6b9bd359 | 2022-05-14T21:47:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/sber-full-test | 0 | null | transformers | 37,554 | ---
tags:
- generated_from_trainer
model-index:
- name: sber-full-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sber-full-test
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0779 | 1.0 | 9046 | 1.3850 |
| 0.7429 | 2.0 | 18092 | 1.1795 |
| 0.446 | 3.0 | 27138 | 1.4148 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
huggingtweets/dnouri | 46c8c3f622d48c927da1971f8581c521b68abbfd | 2022-05-14T13:30:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dnouri | 0 | null | transformers | 37,555 | ---
language: en
thumbnail: http://www.huggingtweets.com/dnouri/1652535050986/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/479663838896214016/nZtbm6to_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Daniel Nouri</div>
<div style="text-align: center; font-size: 14px;">@dnouri</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Daniel Nouri.
| Data | Daniel Nouri |
| --- | --- |
| Tweets downloaded | 3224 |
| Retweets | 875 |
| Short tweets | 147 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d09140r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dnouri's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1sbu4o5b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1sbu4o5b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dnouri')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ruiqi-zhong/t5proposer_0514 | f11c36cf15f632aa0725b5f102c009aa02399048 | 2022-05-14T14:20:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ruiqi-zhong | null | ruiqi-zhong/t5proposer_0514 | 0 | null | transformers | 37,556 | Entry not found |
ruiqi-zhong/t5verifier_0514 | a6df4b9ee43f0263e5e044ed1a01be5478923fd4 | 2022-05-14T16:57:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ruiqi-zhong | null | ruiqi-zhong/t5verifier_0514 | 0 | null | transformers | 37,557 | Entry not found |
likebeats/distilbert-base-uncased-finetuned-squad | d7e6316146f5086141e6f0d00250e1f601be15a1 | 2022-05-15T01:20:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | likebeats | null | likebeats/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,558 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
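These settings map onto `transformers.TrainingArguments` roughly as sketched below (dataset preprocessing and the `Trainer` call are omitted; Adam betas/epsilon and the linear scheduler are the library defaults):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
)
```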
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2249 | 1.0 | 5533 | 1.1704 |
| 0.9542 | 2.0 | 11066 | 1.1215 |
| 0.7467 | 3.0 | 16599 | 1.1538 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
menglingbei/t5-small-finetuned-xsum | bf04b3211ff065ca0d65fda9e40a4ace40d87ab8 | 2022-05-15T02:03:19.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | menglingbei | null | menglingbei/t5-small-finetuned-xsum | 0 | null | transformers | 37,559 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
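## How to use
A minimal sketch with the `summarization` pipeline; the article text and length limits are illustrative only:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="menglingbei/t5-small-finetuned-xsum")
article = ("The local council approved a new cycling path along the river on Monday, "
           "saying construction should start next spring and take about six months.")
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```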
|
prodm93/gpt2-kbkw-abstract-model-v1 | a4558b724ded4b8d7e647f6120209f29103b1e92 | 2022-05-15T04:23:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prodm93 | null | prodm93/gpt2-kbkw-abstract-model-v1 | 0 | null | transformers | 37,560 | Entry not found |
prodm93/t5-kbkw-abstract-model-v1 | ce0c946abb57d4117f5802d4eae4c72156f3fbc6 | 2022-05-15T04:28:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | prodm93 | null | prodm93/t5-kbkw-abstract-model-v1 | 0 | null | transformers | 37,561 | Entry not found |
harikp20/hkp24 | bd6ae7b54bb7a6e354b96f704e547097501519b4 | 2022-05-15T11:34:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | harikp20 | null | harikp20/hkp24 | 0 | null | transformers | 37,562 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: hkp24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hkp24
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2249 | 1.0 | 5533 | 1.1675 |
| 0.961 | 2.0 | 11066 | 1.1376 |
| 0.7581 | 3.0 | 16599 | 1.1619 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
pujaburman30/autotrain-hi_ner_xlmr-869827677 | c6d32f007e56e2d0420cd6993b84d9a9a7ad9cc1 | 2022-05-15T09:00:47.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"unk",
"dataset:pujaburman30/autotrain-data-hi_ner_xlmr",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | pujaburman30 | null | pujaburman30/autotrain-hi_ner_xlmr-869827677 | 0 | null | transformers | 37,563 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pujaburman30/autotrain-data-hi_ner_xlmr
co2_eq_emissions: 4.365496441173981
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 869827677
- CO2 Emissions (in grams): 4.365496441173981
## Validation Metrics
- Loss: 0.894961416721344
- Accuracy: 0.7411180773249739
- Precision: 0.590625
- Recall: 0.5080645161290323
- F1: 0.546242774566474
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pujaburman30/autotrain-hi_ner_xlmr-869827677
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("pujaburman30/autotrain-hi_ner_xlmr-869827677", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pujaburman30/autotrain-hi_ner_xlmr-869827677", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
gkss/distilbert-base-uncased-finetuned-squad | 416d389d30dbaa060faa481f2bb09665c4205686 | 2022-05-15T18:11:06.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | gkss | null | gkss/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,564 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
prodm93/T5Dynamic_text_model_v1 | 8bb12f65924aaa6e9a2d5a20603d8a28e133e540 | 2022-05-15T22:10:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | prodm93 | null | prodm93/T5Dynamic_text_model_v1 | 0 | null | transformers | 37,565 | Entry not found |
stevemobs/quales-iberlef-squad_2 | df30ecab042de43fcf56374e7c0bb846aa22aac5 | 2022-05-16T01:51:09.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/quales-iberlef-squad_2 | 0 | null | transformers | 37,566 | ---
tags:
- generated_from_trainer
model-index:
- name: quales-iberlef-squad_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# quales-iberlef-squad_2
This model is a fine-tuned version of [jamarju/roberta-large-bne-squad-2.0-es](https://huggingface.co/jamarju/roberta-large-bne-squad-2.0-es) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
prodm93/T5Dynamic_title_model_v1 | 16f607228585063d3823cc26ec477918d64827ce | 2022-05-15T22:10:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | prodm93 | null | prodm93/T5Dynamic_title_model_v1 | 0 | null | transformers | 37,567 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_mt5-small | b38c84bb3b7dca78cc7d44212177b786dbc03ea7 | 2022-05-24T02:45:22.000Z | [
"pytorch"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_mt5-small | 0 | null | null | 37,568 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_bs64 | a19d4e0ee8e4a63114b461d46e830695723b5bc2 | 2022-05-20T02:08:07.000Z | [
"pytorch"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_bs64 | 0 | null | null | 37,569 | Entry not found |
nandezgarcia/roberta-base-bne-sqac-finetuned-recores | d34bc840c31f0072bc407ae0764e46d5f4843c7c | 2022-05-16T08:07:43.000Z | [
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | nandezgarcia | null | nandezgarcia/roberta-base-bne-sqac-finetuned-recores | 0 | null | transformers | 37,570 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-sqac-finetuned-recores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-sqac-finetuned-recores
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne-sqac](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4624
- Accuracy: 0.3691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5643 | 1.0 | 1047 | 1.5474 | 0.3526 |
| 0.8147 | 2.0 | 2094 | 2.6498 | 0.3719 |
| 0.1618 | 3.0 | 3141 | 3.1061 | 0.3719 |
| 0.0135 | 4.0 | 4188 | 3.4624 | 0.3691 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jsunster/layoutlmv2-finetuned-cord | 5d8e7ccec2a5ccb7c492ee36af7739c6b0c1882e | 2022-05-16T09:35:27.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | jsunster | null | jsunster/layoutlmv2-finetuned-cord | 0 | null | transformers | 37,571 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.10.0+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
hasanalay/wav2vec2-large-xls-r-300m-turkish-colab-2 | 4fd60192b0d473ba412546245d9469b8b1497d8f | 2022-05-16T14:46:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | hasanalay | null | hasanalay/wav2vec2-large-xls-r-300m-turkish-colab-2 | 0 | null | transformers | 37,572 | Entry not found |
mriggs/wikisource_epoch2 | 03160ffb4a70ef824ec10ca364db526d5af30fcf | 2022-05-16T13:05:45.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/wikisource_epoch2 | 0 | null | transformers | 37,573 | Entry not found |
subhasisj/ar-kd-XLM-minilmv2-32 | 3de48bb4b60413a0b49110ca40c81571e58fa308 | 2022-05-16T16:50:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/ar-kd-XLM-minilmv2-32 | 0 | null | transformers | 37,574 | ---
tags:
- generated_from_trainer
model-index:
- name: ar-kd-XLM-minilmv2-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar-kd-XLM-minilmv2-32
This model is a fine-tuned version of [subhasisj/ar-TAPT-MLM-MiniLM](https://huggingface.co/subhasisj/ar-TAPT-MLM-MiniLM) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
subhasisj/de-kd-XLM-minilmv2-4 | f74f95b9806ad5bfc94b8b6767d350d2d9470562 | 2022-05-16T18:25:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/de-kd-XLM-minilmv2-4 | 0 | null | transformers | 37,575 | Entry not found |
knurm/xlm-roberta-base-finetuned-est | 594e24c1a70265e60866f4d59c1ac205b5bfe611 | 2022-05-23T20:34:34.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | knurm | null | knurm/xlm-roberta-base-finetuned-est | 0 | null | transformers | 37,576 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-est
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-est
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 4.2865 |
| No log | 2.0 | 104 | 4.0711 |
| No log | 3.0 | 156 | 3.9351 |
| No log | 4.0 | 208 | 3.8885 |
| No log | 5.0 | 260 | 3.8077 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
peggyhuang/gpt2-canard | 2ce2bbb4466a375fe8982d456d69b60081174b1b | 2022-05-16T19:41:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | peggyhuang | null | peggyhuang/gpt2-canard | 0 | null | transformers | 37,577 | Entry not found |
negfir/bert_uncased_L-2_H-768_A-12_wiki103 | 0c97b767c00f7fc53b5f0f99c0e4bcbbd11c051d | 2022-05-16T21:14:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-2_H-768_A-12_wiki103 | 0 | null | transformers | 37,578 | Entry not found |
negfir/bert_uncased_L-2_H-512_A-8_wiki103 | 9a7ae337b6a94c245c3ae34debff5a527e4de5bc | 2022-05-17T01:37:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-2_H-512_A-8_wiki103 | 0 | null | transformers | 37,579 | Entry not found |
subhasisj/en-kd-XLM-minilmv2-4 | 8f7af302f6cde31bb8949181d8653a52a70a3a99 | 2022-05-17T17:53:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/en-kd-XLM-minilmv2-4 | 0 | null | transformers | 37,580 | Entry not found |
khalidalt/sentence_T5_tasky_classification | 5ce8671cefb0cc0526edf5beb220b22084909bee | 2022-05-17T13:11:35.000Z | [
"pytorch"
] | null | false | khalidalt | null | khalidalt/sentence_T5_tasky_classification | 0 | null | null | 37,581 | Entry not found |
bmichele/poetry-generation-firstline-mbart-ws-en-sorted | a5e3ee6e64010c3eb360cc7acc98e7161107d05d | 2022-05-17T13:28:22.000Z | [
"pytorch"
] | null | false | bmichele | null | bmichele/poetry-generation-firstline-mbart-ws-en-sorted | 0 | null | null | 37,582 | TODO: This is still a demo model, the file does not match with the model card!!!
# poetry-generation-firstline-mbart-ws-en-sorted
* `firstline`: generates the first poem line from keywords
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `ws`: trained on Wikisource data
* `en`: English language
* `sorted`: the order of input keywords matter when generating candidates |
ykilcher/gpt-4chan | 1eb96cbe347e27ffc89ce5ccb5c4a720b9569406 | 2022-06-14T21:14:10.000Z | [
"pytorch",
"gptj",
"text-generation",
"en",
"arxiv:2109.07958",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | ykilcher | null | ykilcher/gpt-4chan | 0 | 26 | transformers | 37,583 | |
subhasisj/hi-kd-XLM-minilmv2-32 | dd0ee352fcd5c071fc423dc795559e1f1ab1e7b3 | 2022-05-17T18:20:21.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/hi-kd-XLM-minilmv2-32 | 0 | null | transformers | 37,584 | Entry not found |
zoha/wav2vec2-base-timit-demo-google-colab | 4eef48a4118dacaa80a38a9794eeedddbd6f4c22 | 2022-06-13T06:01:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zoha | null | zoha/wav2vec2-base-timit-demo-google-colab | 0 | null | transformers | 37,585 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5035
- Wer: 0.3346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1411 | 1.0 | 500 | 0.6675 | 0.6001 |
| 0.5668 | 2.01 | 1000 | 0.4699 | 0.4973 |
| 0.3773 | 3.01 | 1500 | 0.4475 | 0.4403 |
| 0.2696 | 4.02 | 2000 | 0.4162 | 0.4166 |
| 0.2165 | 5.02 | 2500 | 0.3809 | 0.4011 |
| 0.1849 | 6.02 | 3000 | 0.4120 | 0.3849 |
| 0.1542 | 7.03 | 3500 | 0.4436 | 0.3770 |
| 0.1385 | 8.03 | 4000 | 0.3977 | 0.3732 |
| 0.1224 | 9.04 | 4500 | 0.4530 | 0.3672 |
| 0.1233 | 10.04 | 5000 | 0.3949 | 0.3596 |
| 0.1078 | 11.04 | 5500 | 0.4616 | 0.3539 |
| 0.097 | 12.05 | 6000 | 0.4354 | 0.3697 |
| 0.0821 | 13.05 | 6500 | 0.4370 | 0.3643 |
| 0.0724 | 14.06 | 7000 | 0.4729 | 0.3587 |
| 0.0678 | 15.06 | 7500 | 0.5822 | 0.3742 |
| 0.0632 | 16.06 | 8000 | 0.4460 | 0.3513 |
| 0.0627 | 17.07 | 8500 | 0.5531 | 0.3537 |
| 0.0574 | 18.07 | 9000 | 0.5262 | 0.3575 |
| 0.0515 | 19.08 | 9500 | 0.4794 | 0.3488 |
| 0.0475 | 20.08 | 10000 | 0.4941 | 0.3458 |
| 0.0463 | 21.08 | 10500 | 0.4741 | 0.3377 |
| 0.0392 | 22.09 | 11000 | 0.5390 | 0.3381 |
| 0.0401 | 23.09 | 11500 | 0.4984 | 0.3413 |
| 0.0371 | 24.1 | 12000 | 0.5112 | 0.3460 |
| 0.0305 | 25.1 | 12500 | 0.5255 | 0.3418 |
| 0.0278 | 26.1 | 13000 | 0.5045 | 0.3389 |
| 0.0265 | 27.11 | 13500 | 0.4990 | 0.3371 |
| 0.0248 | 28.11 | 14000 | 0.5242 | 0.3362 |
| 0.0249 | 29.12 | 14500 | 0.5035 | 0.3346 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
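## How to use
A minimal transcription sketch with the `automatic-speech-recognition` pipeline; the audio path is a placeholder, and the input is assumed to be 16 kHz mono speech (the rate Wav2Vec2-base expects):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="zoha/wav2vec2-base-timit-demo-google-colab")
# Pass a path to a WAV/FLAC file; decoding the file requires ffmpeg.
print(asr("sample_speech.wav")["text"])
```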
|
pahntanapat/rsm-w2v2-xls-r-char | 966b81cf51779d87f007e36d9086f3a84f0237e9 | 2022-05-30T12:27:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | pahntanapat | null | pahntanapat/rsm-w2v2-xls-r-char | 0 | null | transformers | 37,586 | Entry not found |
haunt224/distilbert-base-uncased-finetuned-squad | 5f5af64c8599794de12f3b371f8e94e5d8d6771d | 2022-05-19T17:52:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | haunt224 | null | haunt224/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,587 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 4.4454 |
| No log | 2.0 | 12 | 3.2500 |
| No log | 3.0 | 18 | 2.7507 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
papsebestyen/hubert-base-cc-finetuned-forum | 69443bed476baa338348667884bd73a5db7bb036 | 2022-05-18T18:45:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | papsebestyen | null | papsebestyen/hubert-base-cc-finetuned-forum | 0 | null | transformers | 37,588 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hubert-base-cc-finetuned-forum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-cc-finetuned-forum
This model is a fine-tuned version of [SZTAKI-HLT/hubert-base-cc](https://huggingface.co/SZTAKI-HLT/hubert-base-cc) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7966 | 1.0 | 157 | 2.5139 |
| 2.6303 | 2.0 | 314 | 2.4601 |
| 2.5525 | 3.0 | 471 | 2.4501 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0a0+17540c5
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ruselkomp/deep-pavlov-full-2 | 1332065c45ce9801448f6d9889715d85785e83d2 | 2022-05-18T19:25:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/deep-pavlov-full-2 | 0 | null | transformers | 37,589 | ---
tags:
- generated_from_trainer
model-index:
- name: deep-pavlov-full-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deep-pavlov-full-2
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 18
- eval_batch_size: 18
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0425 | 1.0 | 2513 | 1.0277 |
| 0.7953 | 2.0 | 5026 | 1.0226 |
| 0.5902 | 3.0 | 7539 | 1.0892 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
pszemraj/opt-peter-1.3B-1E | 037d21b2cc5505d15027c6a160013862d15effe8 | 2022-06-24T14:06:36.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"non-commercial",
"license:apache-2.0"
] | text-generation | false | pszemraj | null | pszemraj/opt-peter-1.3B-1E | 0 | null | transformers | 37,590 | ---
license: apache-2.0
tags:
- generated_from_trainer
- text-generation
- opt
- non-commercial
inference: False
---
# OPT-Peter-1.3B-1E
> This is an initial checkpoint of the model - the latest version is [here](https://huggingface.co/pszemraj/opt-peter-1.3B)
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on text message data (mine) for 1.6 epochs.
It achieves the following results on the evaluation set (at the end of epoch 1):
- eval_loss: 3.3595
- eval_runtime: 988.6985
- eval_samples_per_second: 8.803
- eval_steps_per_second: 2.201
- epoch: 1.0
- step: 1235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.6
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
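## How to use
The hosted inference widget is disabled for this checkpoint, so a minimal local generation sketch is given below; the prompt and sampling settings are illustrative only, and a GPU is advisable for a 1.3B-parameter model:
```python
from transformers import pipeline
generator = pipeline(
    "text-generation",
    model="pszemraj/opt-peter-1.3B-1E",
    # device=0,  # uncomment to run on the first CUDA GPU
)
prompt = "hey, are you still up?"
print(generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```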
|
huggingtweets/barterblex | b880c0977b3185bcffa94c7b6ef7cc9d3ea135c5 | 2022-05-19T01:33:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/barterblex | 0 | null | transformers | 37,591 | ---
language: en
thumbnail: http://www.huggingtweets.com/barterblex/1652924018963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1497272349636239361/L-9JXZCa_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dr. Negative B</div>
<div style="text-align: center; font-size: 14px;">@barterblex</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dr. Negative B.
| Data | Dr. Negative B |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 1158 |
| Short tweets | 343 |
| Tweets kept | 1729 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e0l085dr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @barterblex's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pkg7hp1s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pkg7hp1s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/barterblex')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
quentin99/layoutlmv2-finetuned-funsd-test | 263465585e0b94a54307f8d68a415a84c3d31a53 | 2022-05-19T02:57:46.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | quentin99 | null | quentin99/layoutlmv2-finetuned-funsd-test | 0 | null | transformers | 37,592 | Entry not found |
varunpatrikar/dummy-model | af4ce6186157544580ee827234e7eac72fd5e423 | 2022-05-19T07:34:54.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | varunpatrikar | null | varunpatrikar/dummy-model | 0 | null | transformers | 37,593 | Just a dummy first model |
jesperjmb/CompundedIntros | 052cb462234fed61e84a556c5e0326210a44957d | 2022-05-19T08:08:29.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | jesperjmb | null | jesperjmb/CompundedIntros | 0 | null | transformers | 37,594 | Fine-tuned KB BERT for identifying compounded introductions in the Riksdagen corpus |
wooglee/gpt2-imdb-pos-v2 | 5a0cf5c6ec4bb2d03415adc1585f6f31b1180162 | 2022-05-19T08:55:34.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | wooglee | null | wooglee/gpt2-imdb-pos-v2 | 0 | null | transformers | 37,595 | Entry not found |
huggingtweets/pmadhavv | 75be0c496c52e2d2a475722c360fa9325126d377 | 2022-05-19T09:30:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pmadhavv | 0 | null | transformers | 37,596 | ---
language: en
thumbnail: http://www.huggingtweets.com/pmadhavv/1652952613201/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522099366592352257/qhlVXNl9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Madhav Patel</div>
<div style="text-align: center; font-size: 14px;">@pmadhavv</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Madhav Patel.
| Data | Madhav Patel |
| --- | --- |
| Tweets downloaded | 352 |
| Retweets | 109 |
| Short tweets | 46 |
| Tweets kept | 197 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3utgj60m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pmadhavv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/268raihu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/268raihu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/pmadhavv')
generator("My dream is", num_return_sequences=5)
```
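Alternatively, the model and tokenizer can be loaded directly for finer control over generation; the sampling parameters below are arbitrary examples rather than recommended settings:
```python
# Direct loading; sampling parameters here are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/pmadhavv")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/pmadhavv")

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True,
                         top_p=0.95, temperature=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```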
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
okwach/mawaidhaChatbot | d0beecfc0129876997193e40aca0b21502f8c13d | 2022-05-19T12:07:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | okwach | null | okwach/mawaidhaChatbot | 0 | null | transformers | 37,597 | ---
tags:
- conversational
---
# mawaidhaChatbot Model |
jjezabek/bert-base-uncased-yelp_full | 9ee319e3f45edee2d2c1cd589d46b7f340ed287b | 2022-05-19T20:49:03.000Z | [
"pytorch"
] | null | false | jjezabek | null | jjezabek/bert-base-uncased-yelp_full | 0 | null | null | 37,598 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_lrdiff_bs64 | aeaf16dbb7f27eed2cd7874b86e23343b8e48658 | 2022-05-22T04:28:55.000Z | [
"pytorch"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_lrdiff_bs64 | 0 | null | null | 37,599 | adapter lr = 1e-3, failed
FAILED |