modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
imvladikon/general_character_bert
|
imvladikon
| 2022-01-30T11:35:11Z | 20 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"language model",
"en",
"dataset:wikipedia",
"dataset:openwebtext",
"arxiv:2010.10392",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- language model
datasets:
- wikipedia
- openwebtext
---
Pretrained `general_character_bert` model from ['CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters' (El Boukkouri et al., 2020)](https://github.com/helboukkouri/character-bert).
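The checkpoint itself is hosted on the Hugging Face Hub; since CharacterBERT replaces WordPiece with a Character-CNN front-end, it is typically loaded with the code from the linked repository rather than the stock `transformers` auto classes. A minimal sketch for fetching the weights locally (repo id taken from this card):
```python
from huggingface_hub import snapshot_download

# Download all checkpoint files for this model to a local cache directory,
# then point the CharacterBERT loading code from the linked repository at it.
local_dir = snapshot_download(repo_id="imvladikon/general_character_bert")
print(local_dir)
```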
```
@inproceedings{el-boukkouri-etal-2020-characterbert,
title = "{C}haracter{BERT}: Reconciling {ELM}o and {BERT} for Word-Level Open-Vocabulary Representations From Characters",
author = "El Boukkouri, Hicham and
Ferret, Olivier and
Lavergne, Thomas and
Noji, Hiroshi and
Zweigenbaum, Pierre and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
eprint = "2010.10392",
archivePrefix = "arXiv",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.609",
doi = "10.18653/v1/2020.coling-main.609",
pages = "6903--6915",
abstract = "Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level, and open-vocabulary representations.",
}
```
|
sshasnain/finetune-wav2vec2-large-xlsr-bengali
|
sshasnain
| 2022-01-30T07:55:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:custom",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: bn
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: finetune-wav2vec2-large-xlsr-bengali
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom
type: custom
args: ben
metrics:
- name: Test WER
type: wer
value: 0.011
---
# finetune-wav2vec2-large-xlsr-bengali
***
## Usage
***
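A minimal usage sketch with the standard `transformers` ASR pipeline (the audio path is illustrative; XLSR-style models expect 16 kHz mono input):
```python
from transformers import pipeline

# Load the fine-tuned Bengali checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="sshasnain/finetune-wav2vec2-large-xlsr-bengali",
)

# Transcribe a local audio file.
print(asr("sample_bengali.wav")["text"])
```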
|
pinecone/mpnet-retriever-discourse
|
pinecone
| 2022-01-30T07:23:58Z | 4 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"question-answering",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- question-answering
---
# MPNet Retriever (Discourse)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used as a retriever model in open-domain question-answering tasks.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/mpnet-retriever-discourse')
embeddings = model.encode(sentences)
print(embeddings)
```
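Since this model is meant to act as a retriever, a common follow-up is ranking candidate passages against a question by cosine similarity; a short sketch (the passages below are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pinecone/mpnet-retriever-discourse')

query = "How do I save a fine-tuned model?"
passages = [
    "Use model.save_pretrained('path') to write the weights and config to disk.",
    "Streamlit caches expensive computations with st.cache.",
]

# Encode the query and passages, then score them with cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
print(util.cos_sim(query_emb, passage_embs))
```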
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-discourse')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-discourse')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was fine-tuned on question-answer pairs scraped from several ML-focused Discourse forums \[HuggingFace, PyTorch, Streamlit, TensorFlow\].
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 105 with parameters:
```
{'batch_size': 12}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Fine-tuned by [James Briggs](https://www.youtube.com/c/jamesbriggs) at [Pinecone](https://www.pinecone.io). Learn more about the [fine-tuning process here](https://www.pinecone.io/learn/retriever-models/).
|
jcmc/wav2vec-1b-cv8-ir-n
|
jcmc
| 2022-01-30T07:16:19Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-1b-cv8-ir-n
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9810
- Wer: 0.4761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` configuration is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
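As a rough illustration, these settings correspond to a `transformers` `TrainingArguments` configuration along the following lines (the output directory is hypothetical and the data/model wiring of the original script is omitted):
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above;
# Adam betas/epsilon are the Trainer defaults (0.9, 0.999, 1e-8).
training_args = TrainingArguments(
    output_dir="wav2vec-1b-cv8-ir-n",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,       # effective train batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,                           # "Native AMP" mixed precision
)
```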
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2427 | 15.15 | 500 | 1.4632 | 0.9481 |
| 1.3128 | 30.3 | 1000 | 0.8662 | 0.6195 |
| 0.9403 | 45.45 | 1500 | 0.8163 | 0.5169 |
| 0.6868 | 60.61 | 2000 | 0.8661 | 0.4858 |
| 0.563 | 75.76 | 2500 | 0.9447 | 0.4867 |
| 0.4887 | 90.91 | 3000 | 0.9650 | 0.4823 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
pablouribe/xls-r-ab-test
|
pablouribe
| 2022-01-30T05:13:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-ab-test
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.2596
- Wer: 19.1571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jogonba2/mbarthez-copy_mechanism-hal_articles
|
jogonba2
| 2022-01-30T03:52:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbarthez-copy_mechanism-hal_articles
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 36.548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbarthez-davide_articles-copy_enhanced
This model is a fine-tuned version of [moussaKam/mbarthez](https://huggingface.co/moussaKam/mbarthez) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4905
- Rouge1: 36.548
- Rouge2: 19.6282
- Rougel: 30.2513
- Rougelsum: 30.2765
- Gen Len: 25.7238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6706 | 1.0 | 33552 | 1.5690 | 31.2477 | 16.5455 | 26.9855 | 26.9754 | 18.6217 |
| 1.3446 | 2.0 | 67104 | 1.5060 | 32.1108 | 17.1408 | 27.7833 | 27.7703 | 18.9115 |
| 1.3245 | 3.0 | 100656 | 1.4905 | 32.9084 | 17.7027 | 28.2912 | 28.2975 | 18.9801 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
huggingtweets/goando
|
huggingtweets
| 2022-01-30T02:34:28Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/goando/1643510064373/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1145832571214815232/KYNcOP04_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Go Ando / PREDUCTS / THE GUILD</div>
<div style="text-align: center; font-size: 14px;">@goando</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Go Ando / PREDUCTS / THE GUILD.
| Data | Go Ando / PREDUCTS / THE GUILD |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 91 |
| Short tweets | 1680 |
| Tweets kept | 1476 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37h8wmzh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qeev4eu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qeev4eu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goando')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/shiikazuo
|
huggingtweets
| 2022-01-30T01:27:28Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/shiikazuo/1643506044134/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3624876884/b16d250401cc357c5be9859f7ba3db8f_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">志位和夫</div>
<div style="text-align: center; font-size: 14px;">@shiikazuo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 志位和夫.
| Data | 志位和夫 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 38 |
| Short tweets | 35 |
| Tweets kept | 3176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/243t6rzm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shiikazuo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eiaaoe96) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eiaaoe96/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/shiikazuo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Santiagot1105/wav2vec2-large-xlsr-finetune-spanish-col
|
Santiagot1105
| 2022-01-29T17:54:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-finetune-spanish-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-finetune-spanish-col
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7105
- Wer: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.2829 | 3.25 | 400 | 2.9632 | 1.0 |
| 2.9664 | 6.5 | 800 | 2.8494 | 1.0542 |
| 2.8353 | 9.76 | 1200 | 2.8352 | 1.0101 |
| 2.7863 | 13.01 | 1600 | 2.7421 | 0.9837 |
| 2.762 | 16.26 | 2000 | 2.7254 | 0.9861 |
| 2.7483 | 19.51 | 2400 | 2.7228 | 0.9874 |
| 2.7482 | 22.76 | 2800 | 2.7228 | 0.9999 |
| 2.7373 | 26.02 | 3200 | 2.7163 | 0.9824 |
| 2.7328 | 29.27 | 3600 | 2.7105 | 0.9824 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
huggingtweets/tylerrjoseph
|
huggingtweets
| 2022-01-29T12:35:08Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tylerrjoseph/1643459612585/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1461794294336045066/SUrpcEaz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tyler jøseph</div>
<div style="text-align: center; font-size: 14px;">@tylerrjoseph</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tyler jøseph.
| Data | tyler jøseph |
| --- | --- |
| Tweets downloaded | 474 |
| Retweets | 54 |
| Short tweets | 79 |
| Tweets kept | 341 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xiz1b44/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tylerrjoseph's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mp0omnb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mp0omnb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tylerrjoseph')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
MarioPenguin/finetuned-model
|
MarioPenguin
| 2022-01-29T11:18:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-model
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8601
- Accuracy: 0.6117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 84 | 0.8663 | 0.5914 |
| No log | 2.0 | 168 | 0.8601 | 0.6117 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
huggingtweets/_ikeay
|
huggingtweets
| 2022-01-29T08:38:34Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/_ikeay/1643445509714/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1438483410503176195/v_ghm6Un_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">いけあや(意識が低い方)</div>
<div style="text-align: center; font-size: 14px;">@_ikeay</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from いけあや(意識が低い方).
| Data | いけあや(意識が低い方) |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 26 |
| Short tweets | 2264 |
| Tweets kept | 959 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2c6c03ss/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_ikeay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r85zooae) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r85zooae/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_ikeay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
k-partha/decision_style_bert_bio
|
k-partha
| 2022-01-29T03:36:37Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
Rates Twitter biographies on decision-making preference: Judging (a focused, goal-oriented decision strategy) or Prospecting (an open-ended, explorative strategy). Roughly corresponds to [conscientiousness](https://en.wikipedia.org/wiki/Conscientiousness).
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit 'Compute'!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label.
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
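If you prefer to query the model from code rather than the widget, a minimal sketch with the standard `transformers` text-classification pipeline (the example biography is illustrative; label names come from the model's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="k-partha/decision_style_bert_bio",
)

# Remove URLs from the biography first, as recommended above.
print(classifier("Data scientist. Planner, list-maker, always shipping on schedule."))
```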
|
k-partha/extrabert_bio
|
k-partha
| 2022-01-29T03:36:11Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
Classifies Twitter biographies as either introverts or extroverts.
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit 'Compute'!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
Barack Obama: Extrovert; Ellen DeGeneres: Extrovert; Naomi Osaka: Introvert
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
|
k-partha/curiosity_bert_bio
|
k-partha
| 2022-01-29T03:35:48Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
Labels Twitter biographies on [Openness](https://en.wikipedia.org/wiki/Openness_to_experience), strongly related to intellectual curiosity.
Intuitive: Associated with higher intellectual curiosity
Sensing: Associated with lower intellectual curiosity
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit 'Compute'!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
|
huggingtweets/flightlessmilfs
|
huggingtweets
| 2022-01-29T02:13:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/flightlessmilfs/1643422380331/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471692544157405184/P3FUX4w9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ali ψ</div>
<div style="text-align: center; font-size: 14px;">@flightlessmilfs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ali ψ.
| Data | Ali ψ |
| --- | --- |
| Tweets downloaded | 1815 |
| Retweets | 642 |
| Short tweets | 181 |
| Tweets kept | 992 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yuw97j7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @flightlessmilfs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31esgsfh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31esgsfh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/flightlessmilfs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
facebook/tts_transformer-vi-cv7
|
facebook
| 2022-01-28T23:31:48Z | 29 | 11 |
fairseq
|
[
"fairseq",
"audio",
"text-to-speech",
"vi",
"dataset:common_voice",
"arxiv:1809.08895",
"arxiv:2109.06912",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: vi
datasets:
- common_voice
widget:
- text: "Xin chào, đây là một cuộc chạy thử nghiệm."
example_title: "Hello, this is a test run."
---
# tts_transformer-vi-cv7
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Vietnamese
- Single-speaker male voice
- Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-vi-cv7",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(model, cfg)
text = "Xin chào, đây là một cuộc chạy thử nghiệm."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
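To write the synthesized audio to disk instead of playing it inline, one option is the `soundfile` package (a sketch that assumes `wav` and `rate` come from the snippet above):
```python
import soundfile as sf

# `wav` may be a torch tensor or a NumPy array depending on the fairseq
# version; convert defensively before writing the WAV file.
data = wav.numpy() if hasattr(wav, "numpy") else wav
sf.write("tts_output.wav", data, rate)
```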
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
facebook/tts_transformer-ru-cv7_css10
|
facebook
| 2022-01-28T23:28:04Z | 105 | 13 |
fairseq
|
[
"fairseq",
"audio",
"text-to-speech",
"ru",
"dataset:common_voice",
"dataset:css10",
"arxiv:1809.08895",
"arxiv:2109.06912",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: ru
datasets:
- common_voice
- css10
widget:
- text: "Здравствуйте, это пробный запуск."
example_title: "Hello, this is a test run."
---
# tts_transformer-ru-cv7_css10
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Russian
- Single-speaker male voice
- Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/tts_transformer-ru-cv7_css10",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(model, cfg)
text = "Здравствуйте, это пробный запуск."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
facebook/fastspeech2-en-ljspeech
|
facebook
| 2022-01-28T23:25:24Z | 2,168 | 268 |
fairseq
|
[
"fairseq",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:2006.04558",
"arxiv:2109.06912",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: en
datasets:
- ljspeech
widget:
- text: "Hello, this is a test run."
example_title: "Hello, this is a test run."
---
# fastspeech2-en-ljspeech
[FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- English
- Single-speaker female voice
- Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/fastspeech2-en-ljspeech",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(model, cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
Kneecapsnatcher/Unon
|
Kneecapsnatcher
| 2022-01-28T21:21:10Z | 0 | 0 | null |
[
"license:bsd-2-clause",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: bsd-2-clause
---
|
gagan3012/wav2vec2-large-xls-r-300m-hindi
|
gagan3012
| 2022-01-28T18:47:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
anjulRajendraSharma/WavLm-base-en
|
anjulRajendraSharma
| 2022-01-28T16:40:52Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"english_asr",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- english_asr
- generated_from_trainer
model-index:
- name: wavlm-base-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base-english
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the english_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8664 | 0.17 | 300 | 2.8439 | 1.0 |
| 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 |
| 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 |
| 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 |
| 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 |
| 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 |
| 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 |
| 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 |
| 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 |
| 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 |
| 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.0
- Tokenizers 0.10.3
|
FitoDS/wav2vec2-large-xls-r-300m-guarani-colab
|
FitoDS
| 2022-01-28T16:22:06Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-guarani-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-guarani-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2392
- Wer: 1.0743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 18.2131 | 49.94 | 400 | 3.2901 | 1.0 |
| 2.0496 | 99.94 | 800 | 3.2392 | 1.0743 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
Rocketknight1/distilgpt2-finetuned-wikitext2
|
Rocketknight1
| 2022-01-28T13:23:20Z | 14 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8577
- Validation Loss: 3.6752
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8577 | 3.6752 | 0 |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.8.0-rc0
- Datasets 1.17.0
- Tokenizers 0.11.0
|
peterhsu/tf_bert-finetuned-ner
|
peterhsu
| 2022-01-28T12:52:36Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tf_bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf_bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0272
- Validation Loss: 0.0522
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent optimizer setup is sketched after the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
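As a rough illustration, an equivalent optimizer can be rebuilt with the `transformers` TF helper (step count and rates taken from the configuration above; the original training script is not part of this card):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (polynomial, power=1.0) decay from 2e-5 to 0
# over 2631 steps, no warmup, weight decay 0.01, matching the dict above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=2631,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```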
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1727 | 0.0673 | 0 |
| 0.0462 | 0.0541 | 1 |
| 0.0272 | 0.0522 | 2 |
### Framework versions
- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
|
pitehu/T5_NER_CONLL_ENTITYREPLACE
|
pitehu
| 2022-01-28T11:05:16Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:CoNLL-2003",
"arxiv:2111.10952",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
license: "apache-2.0"
datasets:
- CoNLL-2003
metrics:
- F1
---
This is a T5-small model fine-tuned on the CoNLL-2003 dataset for named entity recognition (NER).
Example Input and Output:
"Recognize all the named entities in this sequence (replace named entities with one of [PER], [ORG], [LOC], [MISC]): When Alice visited New York" → "When PER visited LOC LOC"
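The same prompt format can be reproduced programmatically with the `transformers` text2text pipeline; a minimal sketch:
```python
from transformers import pipeline

ner_t5 = pipeline(
    "text2text-generation",
    model="pitehu/T5_NER_CONLL_ENTITYREPLACE",
)

prompt = (
    "Recognize all the named entities in this sequence "
    "(replace named entities with one of [PER], [ORG], [LOC], [MISC]): "
    "When Alice visited New York"
)
print(ner_t5(prompt)[0]["generated_text"])
```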
Evaluation Result:
% of match (for comparison with ExT5: https://arxiv.org/pdf/2111.10952.pdf):
| Model | ExT5_{Base} | This Model | T5_NER_CONLL_OUTPUTLIST |
| :---: | :---: | :---: | :---: |
| % of Complete Match | 86.53 | 79.03 | TBA |
There are some outputs (212/3453, or 6.14%) that do not have the same length as the input.
F1 score on the test set, restricted to outputs with matching length:
| Model | This Model | T5_NER_CONLL_OUTPUTLIST | BERTbase |
| :---: | :---: | :---: | :---: |
| F1 | 0.8901 | 0.8691 | 0.9240 |
**Caveat**: these test sets are not identical because of the length-matching issue; T5_NER_CONLL_OUTPUTLIST has only 27/3453 (0.78%) length mismatches, and the BERT number is taken directly from the paper (https://arxiv.org/pdf/1810.04805.pdf).
|
google/vit-large-patch32-384
|
google
| 2022-01-28T10:24:24Z | 186,213 | 16 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"vit",
"image-classification",
"vision",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# Vision Transformer (large-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384.
Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384')
model = ViTForImageClassification.from_pretrained('google/vit-large-patch32-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
google/vit-large-patch16-384
|
google
| 2022-01-28T10:22:26Z | 8,875 | 12 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"vit",
"image-classification",
"vision",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# Vision Transformer (large-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-384')
model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
google/vit-large-patch32-224-in21k
|
google
| 2022-01-28T10:21:30Z | 1,295 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"vit",
"image-feature-extraction",
"vision",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"region:us"
] |
image-feature-extraction
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
inference: false
---
# Vision Transformer (large-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-224-in21k')
model = ViTModel.from_pretrained('google/vit-large-patch32-224-in21k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
huggingtweets/coinerstakingls-elonmusk-tyler
|
huggingtweets
| 2022-01-28T05:27:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/coinerstakingls-elonmusk-tyler/1643347618705/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474910968157249536/FS8-70Ie_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1394891459900231689/xXdX3yWP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1439959943067709448/Z-Dsp_Ge_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Crypto Bros Taking Ls & Tyler Winklevoss</div>
<div style="text-align: center; font-size: 14px;">@coinerstakingls-elonmusk-tyler</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Crypto Bros Taking Ls & Tyler Winklevoss.
| Data | Elon Musk | Crypto Bros Taking Ls | Tyler Winklevoss |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 566 | 3248 |
| Retweets | 163 | 94 | 1550 |
| Short tweets | 930 | 222 | 357 |
| Tweets kept | 2157 | 250 | 1341 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mpyx1oz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coinerstakingls-elonmusk-tyler's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mnlaoaj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mnlaoaj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/coinerstakingls-elonmusk-tyler')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rodrigogelacio/autonlp-department-classification-534915130
|
rodrigogelacio
| 2022-01-28T02:06:52Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:rodrigogelacio/autonlp-data-department-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- rodrigogelacio/autonlp-data-department-classification
co2_eq_emissions: 1.4862856774320061
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 534915130
- CO2 Emissions (in grams): 1.4862856774320061
## Validation Metrics
- Loss: 0.37066277861595154
- Accuracy: 0.9204545454545454
- Macro F1: 0.9103715740678612
- Micro F1: 0.9204545454545455
- Weighted F1: 0.9196871607509906
- Macro Precision: 0.9207759152612094
- Micro Precision: 0.9204545454545454
- Weighted Precision: 0.922177301864802
- Macro Recall: 0.9055002187355129
- Micro Recall: 0.9204545454545454
- Weighted Recall: 0.9204545454545454
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rodrigogelacio/autonlp-department-classification-534915130
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rodrigogelacio/autonlp-department-classification-534915130", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rodrigogelacio/autonlp-department-classification-534915130", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
kika2000/wav2vec2-large-xls-r-300m-kika4_my-colab
|
kika2000
| 2022-01-28T01:03:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-kika4_my-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika4_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/simpleflips
|
huggingtweets
| 2022-01-27T21:36:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/simpleflips/1643319371533/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1347536196684062723/j6OoN12w_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SimpleFlips</div>
<div style="text-align: center; font-size: 14px;">@simpleflips</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from SimpleFlips.
| Data | SimpleFlips |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 562 |
| Short tweets | 746 |
| Tweets kept | 1933 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dh54zgvz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @simpleflips's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2siatc5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2siatc5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/simpleflips')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/glitchy22
|
huggingtweets
| 2022-01-27T21:05:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/glitchy22/1643317484748/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484899984126451716/oY7g67aC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">💙💗🤍 Mama Ava's House of Fun 💙💗🤍</div>
<div style="text-align: center; font-size: 14px;">@glitchy22</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 💙💗🤍 Mama Ava's House of Fun 💙💗🤍.
| Data | 💙💗🤍 Mama Ava's House of Fun 💙💗🤍 |
| --- | --- |
| Tweets downloaded | 1690 |
| Retweets | 198 |
| Short tweets | 387 |
| Tweets kept | 1105 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2h5yvnyr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glitchy22's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t3bkiiv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t3bkiiv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/glitchy22')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cd-dvd/testmodel2
|
cd-dvd
| 2022-01-27T19:45:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"Text Generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- Text Generation
---
# GIMPLEARN knows modeltest2
# To generate a conversation, use an input such as "Human: What should I do?\nAI:"
|
Rocketknight1/t5-small-finetuned-xsum
|
Rocketknight1
| 2022-01-27T19:39:43Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7172
- Validation Loss: 2.3977
- Train Rouge1: 28.7469
- Train Rouge2: 7.9005
- Train Rougel: 22.5917
- Train Rougelsum: 22.6162
- Train Gen Len: 18.875
- Epoch: 0
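A minimal usage sketch (an assumption rather than an official example: it loads the TensorFlow weights through the summarization pipeline, and the input text is arbitrary):
```python
from transformers import pipeline

# This repository ships TensorFlow weights, so the framework is set explicitly.
summarizer = pipeline(
    "summarization",
    model="Rocketknight1/t5-small-finetuned-xsum",
    framework="tf",
)
text = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris."
)
print(summarizer(text, max_length=30, min_length=5))
```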
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.7172 | 2.3977 | 28.7469 | 7.9005 | 22.5917 | 22.6162 | 18.875 | 0 |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.8.0-rc0
- Datasets 1.17.0
- Tokenizers 0.11.0
|
vkhangpham/shopee-ner
|
vkhangpham
| 2022-01-27T19:15:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: shopee-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shopee-ner
This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Precision: 0.7666
- Recall: 0.8666
- F1: 0.8135
- Accuracy: 0.9320
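A minimal usage sketch (assuming the standard token-classification pipeline; the example product title is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vkhangpham/shopee-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
# Hypothetical product title used only for illustration.
print(ner("Kemeja pria lengan panjang warna biru ukuran XL"))
```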
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2282 | 1.0 | 33750 | 0.2174 | 0.7443 | 0.8506 | 0.7939 | 0.9253 |
| 0.1983 | 2.0 | 67500 | 0.2046 | 0.7666 | 0.8666 | 0.8135 | 0.9320 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
Jacobo/aristoBERTo
|
Jacobo
| 2022-01-27T19:02:16Z | 10 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"grc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
tags:
language:
- grc
model-index:
- name: aristoBERTo
results: []
widget:
- text: "Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα."
- text: "ὁ Κριτίας ἀπέβλεψε [MASK] τὴν θύραν."
- text: "πρῶτοι δὲ καὶ οὐνόματα ἱρὰ ἔγνωσαν καὶ [MASK] ἱροὺς ἔλεξαν."
---
# aristoBERTo
aristoBERTo is a transformer model for ancient Greek, a low resource language. We initialized the pre-training with weights from [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1), a Greek version of BERT which was trained on a large corpus of modern Greek (~ 30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed.
Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks like the labeling of POS, MORPH, DEP and LEMMA.
aristoBERTo is provided by the [Diogenet project](https://diogenet.ucsd.edu) of the University of California, San Diego.
## Intended uses
This model was created for fine-tuning with spaCy and the ancient Greek Universal Dependency datasets as well as a NER corpus produced by the [Diogenet project](https://diogenet.ucsd.edu). As a fill-mask model, AristoBERTo can also be used in the restoration of damaged Greek papyri, inscriptions, and manuscripts.
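A minimal fill-mask sketch using one of the widget examples above (assuming the standard pipeline API):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Jacobo/aristoBERTo")
for prediction in fill("Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα."):
    print(prediction["token_str"], round(prediction["score"], 4))
```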
It achieves the following results on the evaluation set:
- Loss: 1.6323
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 1.377 | 20.0 | 3414220 | 1.6314 |
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/peanutfarttles
|
huggingtweets
| 2022-01-27T18:59:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/peanutfarttles/1643309967522/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/451440440231743494/jO_jI-yv_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Peanut Farttles</div>
<div style="text-align: center; font-size: 14px;">@peanutfarttles</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Peanut Farttles.
| Data | Peanut Farttles |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 301 |
| Tweets kept | 2949 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cslrzv64/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @peanutfarttles's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22iy3grf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22iy3grf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/peanutfarttles')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mbateman/marian-finetuned-kde4-en-to-fr
|
mbateman
| 2022-01-27T17:33:02Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
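A minimal usage sketch (assuming the standard translation pipeline; the input sentence is arbitrary):
```python
from transformers import pipeline

translator = pipeline("translation", model="mbateman/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```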
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
Adinda/Adinda
|
Adinda
| 2022-01-27T17:02:42Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: artistic-2.0
---
|
huggingtweets/northernlion
|
huggingtweets
| 2022-01-27T16:46:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/northernlion/1643301960230/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2236512789/ChannelIcon_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ryan Letourneau</div>
<div style="text-align: center; font-size: 14px;">@northernlion</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ryan Letourneau.
| Data | Ryan Letourneau |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 85 |
| Short tweets | 480 |
| Tweets kept | 2684 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xmzb7x7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @northernlion's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dilt40l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dilt40l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/northernlion')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bayartsogt/tts_transformer-mn-mbspeech
|
bayartsogt
| 2022-01-27T16:35:40Z | 18 | 1 |
fairseq
|
[
"fairseq",
"audio",
"text-to-speech",
"mn",
"dataset:mbspeech",
"arxiv:1809.08895",
"arxiv:2109.06912",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: mn
datasets:
- mbspeech
widget:
- text: "миний нэрийг баярцогт гэдэг"
example_title: "Say my name!"
- text: "би монгол улсын нийслэл, улаанбаатар хотод амьдардаг"
example_title: "Where I am from?"
- text: "энэхүү өгөгдлийг нээлттэй болгосон, болор соофтынхонд баярлалаа"
example_title: "Thank you!"
- text: "энэхүү ажлын ихэнх хэсгийг, төгөлдөр ах хийсэн болно"
example_title: "Shout out to original creater"
---
# tts_transformer-mn-mbspeech
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Mongolian
- Single-speaker male voice
- Trained on [MBSpeech](https://github.com/tugstugi/mongolian-nlp/blob/master/datasets/MBSpeech-1.0-csv.zip)
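A minimal usage sketch following the general fairseq S^2 hub interface (the vocoder override is an assumption and may need adjusting for your fairseq version):
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "bayartsogt/tts_transformer-mn-mbspeech",
    arg_overrides={"vocoder": "griffin_lim", "fp16": False},  # vocoder choice is an assumption
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

text = "миний нэрийг баярцогт гэдэг"  # widget example from this card
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
```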
|
Tsubasaz/clinical-pubmed-bert-base-128
|
Tsubasaz
| 2022-01-27T15:44:06Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:MIMIC-III",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
datasets:
- MIMIC-III
widget:
- text: "Due to shortness of breath, the patient is diagnosed with [MASK], and other respiratory problems."
example_title: "Example 1"
---
# ClinicalPubMedBERT
## Description
A BERT model pre-trained on PubMed abstracts and continually pre-trained on clinical notes ([MIMIC-III](https://mimic.physionet.org/)). We combine two domains that have little overlap with general-knowledge text corpora: EHRs and biomedical papers. We hope this model can deliver better results on clinical downstream tasks such as readmission prediction.
This model is trained on 500,000 clinical notes randomly sampled from the MIMIC-III dataset, for 120k training steps. We also used whole-word masking to enhance the coherence of the language model. All notes are chunked to a length of 128 tokens.
Pre-trained model: https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
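A minimal fill-mask sketch using the widget example above (assuming the standard pipeline API):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Tsubasaz/clinical-pubmed-bert-base-128")
results = fill(
    "Due to shortness of breath, the patient is diagnosed with [MASK], "
    "and other respiratory problems."
)
for r in results:
    print(r["token_str"], round(r["score"], 4))
```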
|
mrm8488/ppo-CartPole-v1
|
mrm8488
| 2022-01-27T15:13:48Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
---
# PPO CartPole v1 🤖⚖️
This is a pre-trained model of a PPO agent playing CartPole-v1 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
<video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/ppo-CartPole-v1/resolve/main/output.mp4"></video>
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="mrm8488/ppo-CartPole-v1", filename="cartpole-v1.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('CartPole-v1')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play (reuse the evaluation environment created above)
obs = eval_env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = eval_env.step(action)
    eval_env.render()
    if done:
        obs = eval_env.reset()
eval_env.close()
```
### Evaluation Results
Mean reward: 500.00 +/- 0.0
|
jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom
|
jhonparra18
| 2022-01-27T14:58:01Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-custom
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2245
- eval_wer: 0.2082
- eval_runtime: 801.6784
- eval_samples_per_second: 18.822
- eval_steps_per_second: 2.354
- epoch: 0.76
- step: 8400
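A minimal transcription sketch (assuming the standard ASR pipeline; `audio.wav` is a placeholder path to a 16 kHz mono recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom",
)
# "audio.wav" is a placeholder; the model expects 16 kHz mono audio.
print(asr("audio.wav"))
```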
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
ncoop57/codeparrot-neo-125M-py
|
ncoop57
| 2022-01-27T14:44:13Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"rust",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
# GPT-Neo 125M
## Model Description
GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Trained in this way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
TBD
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
|
oskrmiguel/mt5-simplification-spanish
|
oskrmiguel
| 2022-01-27T13:32:24Z | 22 | 6 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"simplification",
"spanish",
"es",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- es
thumbnail:
tags:
- simplification
- mt5
- spanish
license: cc-by-nc-sa-4.0
metrics:
- sari
widget:
- text: "La Simplificación Textual es el proceso de transformación de un texto a otro texto equivalente más comprensible para un determinado tipo de grupo o población."
- text: "Los textos simplificados son apropiados para muchos grupos de lectores, como, por ejemplo: estudiantes de idiomas, personas con discapacidades intelectuales y otras personas con necesidades especiales de lectura y comprensión.
"
---
# mt5-simplification-spanish
## Model description
This is a fine-tuned mt5-small model for generating simple text from complex text.
This model was created with the IXA research group of the University of the Basque Country. It has been evaluated with the Sari, Bleu and Fklg metrics, and was trained and tested on the [Simplext corpus](https://dl.acm.org/doi/10.1145/2738046).
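A minimal usage sketch (assuming the plain text2text-generation pipeline with no special prompt prefix; the input is the first widget example above):
```python
from transformers import pipeline

simplifier = pipeline(
    "text2text-generation",
    model="oskrmiguel/mt5-simplification-spanish",
)
text = (
    "La Simplificación Textual es el proceso de transformación de un texto a otro "
    "texto equivalente más comprensible para un determinado tipo de grupo o población."
)
print(simplifier(text, max_length=128))
```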
## Dataset
Simplext
## Model Evaluation
- Bleu: 13.186
- Sari: 42.203
- Fklg: 10.284
## Authors
Oscar M. Cumbicus-Pineda, Itziar Gonzalez-Dios, Aitor Soroa, November 2021
## Code
https://github.com/oskrmiguel/mt5-simplification
|
benjaminbeilharz/bert-base-uncased-empatheticdialogues-sentiment-classifier
|
benjaminbeilharz
| 2022-01-27T13:22:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
dataset: empathetic_dialogues
---
|
jcmc/wav2vec2-xls-r-1b-ir
|
jcmc
| 2022-01-27T13:09:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6569
- Wer: 0.8623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1851 | 15.62 | 500 | 1.8067 | 0.9256 |
| 2.1586 | 31.25 | 1000 | 1.7883 | 0.9180 |
| 2.0302 | 46.86 | 1500 | 1.7571 | 0.9192 |
| 1.8706 | 62.49 | 2000 | 1.6314 | 0.8858 |
| 1.7008 | 78.12 | 2500 | 1.6131 | 0.8679 |
| 1.4982 | 93.74 | 3000 | 1.6540 | 0.8650 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
anirudh21/albert-xxlarge-v2-finetuned-wnli
|
anirudh21
| 2022-01-27T13:00:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-xxlarge-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5070422535211268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-finetuned-wnli
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6970
- Accuracy: 0.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
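As an illustration only (not from the original card), the checkpoint loads as a standard sequence-classification model; the sentence pair below is made up and the printed label depends on the `id2label` mapping saved with the config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "anirudh21/albert-xxlarge-v2-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# WNLI is a sentence-pair task; the example pair is illustrative only.
inputs = tokenizer(
    "The trophy didn't fit in the suitcase because it was too big.",
    "The trophy was too big.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```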
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 0.8066 | 0.4366 |
| No log | 2.0 | 26 | 0.6970 | 0.5070 |
| No log | 3.0 | 39 | 0.7977 | 0.4507 |
| No log | 4.0 | 52 | 0.7906 | 0.4930 |
| No log | 5.0 | 65 | 0.8459 | 0.4366 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anirudh21/bert-base-uncased-finetuned-rte
|
anirudh21
| 2022-01-27T06:57:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6642599277978339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8075
- Accuracy: 0.6643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 0.6777 | 0.5668 |
| No log | 2.0 | 126 | 0.6723 | 0.6282 |
| No log | 3.0 | 189 | 0.7238 | 0.6318 |
| No log | 4.0 | 252 | 0.7993 | 0.6354 |
| No log | 5.0 | 315 | 0.8075 | 0.6643 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anas-awadalla/bert-medium-pretrained-finetuned-squad
|
anas-awadalla
| 2022-01-27T06:07:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert_medium_pretrain_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_medium_pretrain_squad
This model is a fine-tuned version of [anas-awadalla/bert-medium-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-medium-pretrained-on-squad) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Exact match: 77.95648060548723
- F1: 85.85300366384631
## Model description
More information needed
## Intended uses & limitations
More information needed
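As an illustration only (not from the original card), the model can be used for extractive question answering via the `question-answering` pipeline; the context and question below are made-up examples.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="anas-awadalla/bert-medium-pretrained-finetuned-squad")

result = qa(
    question="What does an extractive QA model predict?",
    context="Extractive question answering models predict a span of the given context that answers the question.",
)
print(result["answer"], result["score"])
```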
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anirudh21/bert-base-uncased-finetuned-mrpc
|
anirudh21
| 2022-01-27T05:26:21Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7916666666666666
- name: F1
type: f1
value: 0.8590381426202321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Accuracy: 0.7917
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.5387 | 0.7402 | 0.8349 |
| No log | 2.0 | 126 | 0.5770 | 0.7696 | 0.8513 |
| No log | 3.0 | 189 | 0.5357 | 0.7574 | 0.8223 |
| No log | 4.0 | 252 | 0.6645 | 0.7917 | 0.8590 |
| No log | 5.0 | 315 | 0.6977 | 0.7721 | 0.8426 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anirudh21/albert-large-v2-finetuned-wnli
|
anirudh21
| 2022-01-27T05:02:43Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-large-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-finetuned-wnli
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 17 | 0.7292 | 0.4366 |
| No log | 2.0 | 34 | 0.6919 | 0.5352 |
| No log | 3.0 | 51 | 0.7084 | 0.4648 |
| No log | 4.0 | 68 | 0.7152 | 0.5352 |
| No log | 5.0 | 85 | 0.7343 | 0.5211 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anas-awadalla/bert-medium-pretrained-on-squad
|
anas-awadalla
| 2022-01-27T03:59:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert_medium_pretrain_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_medium_pretrain_squad
This model is a fine-tuned version of [prajjwal1/bert-medium](https://huggingface.co/prajjwal1/bert-medium) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
## Model description
More information needed
## Intended uses & limitations
More information needed
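As an illustration only (not from the original card), the checkpoint can be used for masked-token prediction; the example sentence is made up.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="anas-awadalla/bert-medium-pretrained-on-squad")

# BERT-style checkpoints use the [MASK] token; the sentence is illustrative only.
for prediction in fill("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```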
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
glob-asr/base-spanish-asr
|
glob-asr
| 2022-01-27T03:35:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-custom
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2245
- eval_wer: 0.2082
- eval_runtime: 801.6784
- eval_samples_per_second: 18.822
- eval_steps_per_second: 2.354
- epoch: 0.76
- step: 8400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xlsr-53-rm-vallader-with-lm
|
anuragshas
| 2022-01-26T16:38:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xlsr-53-rm-vallader-with-lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-rm-vallader-with-lm
This model is a fine-tuned version of [anuragshas/wav2vec2-large-xlsr-53-rm-vallader](https://huggingface.co/anuragshas/wav2vec2-large-xlsr-53-rm-vallader) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4552
- Wer: 0.3206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.112
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2379 | 3.12 | 100 | 0.4041 | 0.3396 |
| 0.103 | 6.25 | 200 | 0.4400 | 0.3337 |
| 0.0664 | 9.38 | 300 | 0.4239 | 0.3315 |
| 0.0578 | 12.5 | 400 | 0.4303 | 0.3267 |
| 0.0446 | 15.62 | 500 | 0.4575 | 0.3274 |
| 0.041 | 18.75 | 600 | 0.4451 | 0.3223 |
| 0.0402 | 21.88 | 700 | 0.4507 | 0.3206 |
| 0.0374 | 25.0 | 800 | 0.4649 | 0.3208 |
| 0.0371 | 28.12 | 900 | 0.4552 | 0.3206 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anton-l/sew-mid-100k-ft-keyword-spotting
|
anton-l
| 2022-01-26T14:43:39Z | 237 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: sew-mid-100k-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-mid-100k-ft-keyword-spotting
This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0975
- Accuracy: 0.9757
## Model description
More information needed
## Intended uses & limitations
More information needed
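As an illustration only (not from the original card), keyword spotting can be run through the `audio-classification` pipeline; the audio path is a placeholder for a short 16 kHz clip of a spoken command.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/sew-mid-100k-ft-keyword-spotting")

# "sample.wav" is a placeholder path to a roughly one-second, 16 kHz recording.
for prediction in classifier("sample.wav", top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```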
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5999 | 1.0 | 399 | 0.2262 | 0.9635 |
| 0.4271 | 2.0 | 798 | 0.1230 | 0.9697 |
| 0.3778 | 3.0 | 1197 | 0.1052 | 0.9731 |
| 0.3227 | 4.0 | 1596 | 0.0975 | 0.9757 |
| 0.3081 | 5.0 | 1995 | 0.0962 | 0.9753 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Iskaj/hf-challenge-test
|
Iskaj
| 2022-01-26T11:21:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
SophieTr/fine-tune-Pegasus-large
|
SophieTr
| 2022-01-26T07:56:10Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: fine-tune-Pegasus-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Pegasus-large
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
danielbubiola/bangla_asr
|
danielbubiola
| 2022-01-26T07:42:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bangla_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla_asr
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 157.8652
- Wer: 0.4507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2601.5363 | 7.46 | 500 | 259.6630 | 0.6863 |
| 417.7386 | 14.93 | 1000 | 156.6117 | 0.5275 |
| 262.9455 | 22.39 | 1500 | 155.0886 | 0.5006 |
| 178.7715 | 29.85 | 2000 | 155.1077 | 0.4840 |
| 132.448 | 37.31 | 2500 | 163.8623 | 0.4770 |
| 116.3943 | 44.78 | 3000 | 161.5531 | 0.4609 |
| 87.1653 | 52.24 | 3500 | 165.6857 | 0.4597 |
| 80.5606 | 59.7 | 4000 | 157.8652 | 0.4507 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
kxiaoqiangrexian/bert_test
|
kxiaoqiangrexian
| 2022-01-26T06:52:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
vuiseng9/bert-mnli
|
vuiseng9
| 2022-01-26T06:48:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This model was developed with transformers v4.9.1. Accuracy on the MNLI matched (m) and mismatched (mm) validation sets:
```
m = 0.8444
eval_samples = 9815
mm = 0.8495
eval_samples = 9832
```
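# Inference (sketch)
A minimal inference sketch, not part of the original card; the premise/hypothesis pair is illustrative and the printed label depends on the `id2label` mapping saved with the checkpoint.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "vuiseng9/bert-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI pairs a premise with a hypothesis; this pair is illustrative only.
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```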
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-mnli
NEPOCH=3
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
python run_glue.py \
--model_name_or_path bert-base-uncased \
--task_name mnli \
--max_seq_length 128 \
--do_train \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs $NEPOCH \
--logging_steps 1 \
--evaluation_strategy steps \
--save_steps 3000 \
--do_eval \
--per_device_eval_batch_size 128 \
--eval_steps 250 \
--output_dir $OUTDIR \
--overwrite_output_dir
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
nohup python run_glue.py \
--model_name_or_path vuiseng9/bert-mnli \
--task_name mnli \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
DrishtiSharma/wav2vec2-large-xls-r-300m-ab-v4
|
DrishtiSharma
| 2022-01-26T01:35:38Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ab
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6178
- Wer: 0.5794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2793 | 27.27 | 300 | 3.0737 | 1.0 |
| 1.5348 | 54.55 | 600 | 0.6312 | 0.6334 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
Ajay191191/autonlp-Test-530014983
|
Ajay191191
| 2022-01-25T22:28:49Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Ajay191191/autonlp-data-Test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Ajay191191/autonlp-data-Test
co2_eq_emissions: 55.10196329868386
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.9296904373981703
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
chmanoj/xls-r-demo-test
|
chmanoj
| 2022-01-25T19:44:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8786
- Wer: 1.3460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
akilesh96/autonlp-mrcooper_text_classification-529614927
|
akilesh96
| 2022-01-25T19:43:57Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:akilesh96/autonlp-data-mrcooper_text_classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "Not Many People Know About The City 1200 Feet Below Detroit"
- text: "Bob accepts the challenge, and the next week they're standing in Saint Peters square. 'This isnt gonna work, he's never going to see me here when theres this much people. You stay here, I'll go talk to him and you'll see me on the balcony, the guards know me too.' Half an hour later, Bob and the pope appear side by side on the balcony. Bobs boss gets a heart attack, and Bob goes to visit him in the hospital."
- text: "I’m sorry if you made it this far, but I’m just genuinely idk, I feel like I shouldn’t give up, it’s just getting harder to come back from stuff like this."
datasets:
- akilesh96/autonlp-data-mrcooper_text_classification
co2_eq_emissions: 5.999771405025692
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529614927
- CO2 Emissions (in grams): 5.999771405025692
## Validation Metrics
- Loss: 0.7582379579544067
- Accuracy: 0.7636103151862464
- Macro F1: 0.770630619486531
- Micro F1: 0.7636103151862464
- Weighted F1: 0.765233270165301
- Macro Precision: 0.7746285216467107
- Micro Precision: 0.7636103151862464
- Weighted Precision: 0.7683270753840836
- Macro Recall: 0.7680576576961138
- Micro Recall: 0.7636103151862464
- Weighted Recall: 0.7636103151862464
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/akilesh96/autonlp-mrcooper_text_classification-529614927
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
lucianpopa/autonlp-SST1-529214890
|
lucianpopa
| 2022-01-25T17:30:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:lucianpopa/autonlp-data-SST1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- lucianpopa/autonlp-data-SST1
co2_eq_emissions: 49.618294309910624
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529214890
- CO2 Emissions (in grams): 49.618294309910624
## Validation Metrics
- Loss: 0.7135734558105469
- Accuracy: 0.7042338838232481
- Macro F1: 0.6164041045783032
- Micro F1: 0.7042338838232481
- Weighted F1: 0.7028309161791009
- Macro Precision: 0.6497438111060598
- Micro Precision: 0.7042338838232481
- Weighted Precision: 0.7076651075198755
- Macro Recall: 0.6023419083862918
- Micro Recall: 0.7042338838232481
- Weighted Recall: 0.7042338838232481
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-SST1-529214890
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-SST1-529214890", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-SST1-529214890", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
FitoDS/xls-r-ab-test
|
FitoDS
| 2022-01-25T13:49:52Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
Iacopo/Shakespear-GPT2
|
Iacopo
| 2022-01-25T13:35:35Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of Shakespeare's plays.
## Model description
The model is the original gpt-2 model fine-tuned on a custom dataset.
## Intended uses & limitations
The model can be used to generate Shakespearean-like text. Note that, because the training data comes from plays, the typographical structure of a play (speaker names, line breaks) may be reproduced in the output.
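As a usage sketch (not part of the original card), text can be generated through the `text-generation` pipeline; the prompt and sampling settings below are assumptions.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Iacopo/Shakespear-GPT2")

# Prompt and generation settings are illustrative only.
print(generator("ROMEO:", max_length=60, do_sample=True, top_p=0.9)[0]["generated_text"])
```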
## Training and evaluation data
Trained with Shakespeare's plays corpus.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.11.0
|
awsaf49/deep-chimpact
|
awsaf49
| 2022-01-25T12:59:16Z | 9 | 1 |
tf-keras
|
[
"tf-keras",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# [Deep Chimpact](https://www.drivendata.org/competitions/82/competition-wildlife-video-depth-estimation/page/390/)
> Depth Estimation for Wildlife Conservation (1st place solution)
<div align=center> <img src="https://user-images.githubusercontent.com/36858976/138281204-c3cbcb77-11ca-448b-a693-cb3cfa3c5181.png" width=800></div>
## Overview
Healthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size.
However, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than **10 minutes** on average to label distance for every **1 minute** of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to **monitor wildlife populations**.
> Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time.
Along the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!
|
dhanesh123in/layoutlmv2-finetuned-funsd-test
|
dhanesh123in
| 2022-01-25T12:33:29Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.11.0
|
SamMorgan/yolo_v4_tflite
|
SamMorgan
| 2022-01-25T10:15:51Z | 0 | 4 |
tf-keras
|
[
"tf-keras",
"tflite",
"object detection",
"computer vision",
"darknet",
"yolo",
"object-detection",
"en",
"dataset:coco",
"dataset:imagenette",
"arxiv:2004.10934",
"license:mit",
"region:us"
] |
object-detection
| 2022-03-02T23:29:04Z |
---
language: en
tags:
- object detection
- computer vision
- darknet
- yolo
datasets:
- coco
- imagenette
license: mit
thumbnail: https://github.com/hunglc007/tensorflow-yolov4-tflite
pipeline_tag: object-detection
---
# YOLOv4
YOLO, short for "You Only Look Once", is a real-time object detection system, introduced in [this paper](https://arxiv.org/abs/2004.10934), that recognizes various objects in a single image. It identifies objects more rapidly and more precisely than other recognition systems. The work has three authors: Alexey Bochkovskiy, the Russian developer who built the YOLO Windows version, Chien-Yao Wang, and Hong-Yuan Mark Liao. The entire code is available on [Github](https://github.com/AlexeyAB/darknet).
This YOLOv4 library, inspired by previous YOLOv3 implementations here:
* [Yolov3 tensorflow](https://github.com/YunYang1994/tensorflow-yolov3)
* [Yolov3 tf2](https://github.com/zzh8829/yolov3-tf2)uses Tensorflow 2.0 and is available on this [Github](https://github.com/hunglc007/tensorflow-yolov4-tflite).
### Limitations and biases
Object-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from training data that is geographically constrained and/or fails to account for cultural differences.
The COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology.
### How to use YOLOv4tflite
You can use this model to detect objects in an image of your choice. Follow the scripts below to implement it on your own!
```bash
# install git lfs
git lfs install
# if presented with the error "git: 'lfs' is not a git command. See 'git --help'", try running these linux commands:
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
# change directory to base
cd ..
# install git-lfs
sudo apt-get install git-lfs
# for message "Git LFS initialized"
git lfs install
# change directory to yolo_v4_tflite
cd ./yolo_v4_tflite
# clone this repo into your notebook
git clone https://huggingface.co/SamMorgan/yolo_v4_tflite
# Run demo tensor flow for an example of how this model works
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg --output ./test.jpg
# Try with your own image
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image <insert path to image of choice> --output <insert path to output location of choice>
```
### Evaluate on COCO 2017 Dataset
```bash
# run script in /script/get_coco_dataset_2017.sh to download COCO 2017 Dataset
# preprocess coco dataset
cd data
mkdir dataset
cd ..
cd scripts
python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
python coco_annotation.py --coco_path ./coco
cd ..
# evaluate yolov4 model
python evaluate.py --weights ./data/yolov4.weights
cd mAP/extra
python remove_space.py
cd ..
python main.py --output results_yolov4_tf
```
#### mAP50 on COCO 2017 Dataset
| Detection | 512x512 | 416x416 | 320x320 |
|-------------|---------|---------|---------|
| YoloV3 | 55.43 | 52.32 | |
| YoloV4 | 61.96 | 57.33 | |
### Benchmark
```bash
python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights
```
#### TensorRT performance
| YoloV4 416 images/s | FP32 | FP16 | INT8 |
|---------------------|----------|----------|----------|
| Batch size 1 | 55 | 116 | |
| Batch size 8 | 70 | 152 | |
#### Tesla P100
| Detection | 512x512 | 416x416 | 320x320 |
|-------------|---------|---------|---------|
| YoloV3 FPS | 40.6 | 49.4 | 61.3 |
| YoloV4 FPS | 33.4 | 41.7 | 50.0 |
#### Tesla K80
| Detection | 512x512 | 416x416 | 320x320 |
|-------------|---------|---------|---------|
| YoloV3 FPS | 10.8 | 12.9 | 17.6 |
| YoloV4 FPS | 9.6 | 11.7 | 16.0 |
#### Tesla T4
| Detection | 512x512 | 416x416 | 320x320 |
|-------------|---------|---------|---------|
| YoloV3 FPS | 27.6 | 32.3 | 45.1 |
| YoloV4 FPS | 24.0 | 30.3 | 40.1 |
#### Tesla P4
| Detection | 512x512 | 416x416 | 320x320 |
|-------------|---------|---------|---------|
| YoloV3 FPS | 20.2 | 24.2 | 31.2 |
| YoloV4 FPS | 16.2 | 20.2 | 26.5 |
#### Macbook Pro 15 (2.3GHz i7)
| Detection | 512x512 | 416x416 | 320x320 |
|-------------|---------|---------|---------|
| YoloV3 FPS | | | |
| YoloV4 FPS | | | |
### Training your own model
```bash
# Prepare your dataset
# If you want to train from scratch:
In config.py set FISRT_STAGE_EPOCHS=0
# Run script:
python train.py
# Transfer learning:
python train.py --weights ./data/yolov4.weights
```
The training performance is not fully reproduced yet, so it is recommended to use Alex's [Darknet](https://github.com/AlexeyAB/darknet) to train on your own data, then convert the .weights file to TensorFlow or TFLite.
### References
* YOLOv4: Optimal Speed and Accuracy of Object Detection [YOLOv4](https://arxiv.org/abs/2004.10934).
* [darknet](https://github.com/AlexeyAB/darknet)
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c_inference_only
|
deepdoctection
| 2022-01-25T09:23:24Z | 0 | 0 | null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- Tensorflow
license: apache-2.0
datasets:
- Pubtabnet
---
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect cells in tables. Note that the dataset contains tables only, so a table detection step is required before cells can be detected.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with the **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c).
## How this model was trained.
To recreate the model run on the **deep**doctection framework, run:
```python
>>> import os
>>> from deep_doctection.datasets import DatasetRegistry
>>> from deep_doctection.eval import MetricRegistry
>>> from deep_doctection.utils import get_configs_dir_path
>>> from deep_doctection.train import train_faster_rcnn
pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.filter_categories(categories="CELL")
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/cell/conf_frcnn_cell.yaml")
path_weights = ""
dataset_train = pubtabnet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1",
"TRAIN.CHECKPOINT_PERIOD=50","BACKBONE.FREEZE_AT=0", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"]
build_train_config=["max_datapoints=500000"]
dataset_val = pubtabnet
build_val_config = ["max_datapoints=4000"]
coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]])
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
## How to fine-tune this model
To fine tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
NimaBoscarino/aot-gan-places2
|
NimaBoscarino
| 2022-01-25T08:43:40Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"scene-recognition",
"scene-generation",
"generative-adversarial-network",
"dataset:places2",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
tags:
- scene-recognition
- scene-generation
- generative-adversarial-network
metrics:
- L1
- PSNR
- SSIM
- FID
datasets:
- places2
---
# AOT-GAN Places2
AOT-GAN is a model that can be used for image in-painting. The Places2 checkpoint is trained on the Places2 scene dataset, which should make it suitable for touching up and restoring images of landscapes, buildings, and other natural and developed places.
This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as
```
@inproceedings{yan2021agg,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting},
booktitle = {Arxiv},
pages={-},
year = {2020}
}
```
## Dataset
The Places2 dataset can be found here: http://places2.csail.mit.edu/download.html
|
NimaBoscarino/aot-gan-celebahq
|
NimaBoscarino
| 2022-01-25T08:38:46Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"face-recognition",
"face-generation",
"face-segmentation",
"generative-adversarial-network",
"dataset:celeba-hq",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
tags:
- face-recognition
- face-generation
- face-segmentation
- generative-adversarial-network
metrics:
- L1
- PSNR
- SSIM
- FID
datasets:
- celeba-hq
---
# AOT-GAN CelebA-HQ
AOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits.
This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as
```
@inproceedings{yan2021agg,
author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining},
title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting},
booktitle = {Arxiv},
pages={-},
year = {2020}
}
```
## Dataset
The CelebA-HQ dataset was created with this codebase: https://github.com/tkarras/progressive_growing_of_gans, owned by NVidia and licensed under Creative Commons Attribution-NonCommercial 4.0 International.
|
z-uo/glowtts-female-it
|
z-uo
| 2022-01-25T07:14:49Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"text-to-speech",
"it",
"dataset:z-uo/female-LJSpeech-italian",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
tags:
- text-to-speech
language:
- it
model-index:
- name: glowtts-male-it
results: []
datasets:
- z-uo/female-LJSpeech-italian
---
# Coqui Model for TTS
```
pip install TTS
git clone https://huggingface.co/z-uo/glowtts-female-it
# predict one
tts --text "ciao pluto" --model_path "glowtts-female-it/best_model.pth.tar" --config_path "glowtts-female-it/config.json"
# predict server
tts-server --model_path "glowtts-female-it/best_model.pth.tar" --config_path "glowtts-female-it/config.json"
firefox localhost:5002
```
More information about training script in [this repo](https://github.com/nicolalandro/train_coqui_tts_ita).
|
arman0320/bert-base-cased-wikitext2
|
arman0320
| 2022-01-25T05:51:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0963 | 1.0 | 2346 | 7.0570 |
| 6.9063 | 2.0 | 4692 | 6.8721 |
| 6.8585 | 3.0 | 7038 | 6.8931 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
anirudh21/electra-base-discriminator-finetuned-wnli
|
anirudh21
| 2022-01-25T04:41:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: electra-base-discriminator-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-wnli
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6893
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6893 | 0.5634 |
| No log | 2.0 | 80 | 0.7042 | 0.4225 |
| No log | 3.0 | 120 | 0.7008 | 0.3803 |
| No log | 4.0 | 160 | 0.6998 | 0.5634 |
| No log | 5.0 | 200 | 0.7016 | 0.5352 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
shunxing1234/test_model
|
shunxing1234
| 2022-01-25T03:14:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
Model_name
description
tutorial
|
emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
|
emrecan
| 2022-01-24T23:55:40Z | 275,571 | 35 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"tr",
"dataset:nli_tr",
"dataset:emrecan/stsb-mt-turkish",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language:
- tr
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- nli_tr
- emrecan/stsb-mt-turkish
widget:
source_sentence: "Bu çok mutlu bir kişi"
sentences:
- "Bu mutlu bir köpek"
- "Bu sevincinden havalara uçan bir insan"
- "Çok kar yağıyor"
---
# emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on Turkish machine translated versions of [NLI](https://huggingface.co/datasets/nli_tr) and [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) datasets, using example [training scripts]( https://github.com/UKPLab/sentence-transformers/tree/master/examples/training) from sentence-transformers GitHub repository.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"]
model = SentenceTransformer('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
model = AutoModel.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Evaluation results on test and development sets are given below:
| Split | Epoch | cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
|------------|-------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------|--------------|
| test | - | 0.834 | 0.830 | 0.820 | 0.819 | 0.819 | 0.818 | 0.799 | 0.789 |
| validation | 1 | 0.850 | 0.848 | 0.831 | 0.835 | 0.83 | 0.83 | 0.80 | 0.806 |
| validation | 2 | 0.857 | 0.857 | 0.844 | 0.848 | 0.844 | 0.848 | 0.813 | 0.810 |
| validation | 3 | 0.860 | 0.859 | 0.846 | 0.851 | 0.846 | 0.850 | 0.825 | 0.822 |
| validation | 4 | 0.859 | 0.860 | 0.846 | 0.851 | 0.846 | 0.851 | 0.825 | 0.823 |
## Training
Training scripts [`training_nli_v2.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli_v2.py) and [`training_stsbenchmark_continue_training.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/sts/training_stsbenchmark_continue_training.py) were used to train the model.
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 200,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
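For reference, a minimal sketch of what such a fine-tuning setup looks like with the parameters listed above (the base checkpoint name and the toy training pairs are placeholders, not the actual training data):
```python
# Minimal sketch: CosineSimilarityLoss fine-tuning with the hyperparameters above
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("dbmdz/bert-base-turkish-cased")  # placeholder base checkpoint

train_examples = [
    InputExample(texts=["Bu mutlu bir köpek", "Bu çok mutlu bir kişi"], label=0.8),
    InputExample(texts=["Çok kar yağıyor", "Bu mutlu bir köpek"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=144,
    weight_decay=0.01,
    max_grad_norm=1,
)
```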
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
surrey-nlp/en_abbreviation_detection_roberta_lar
|
surrey-nlp
| 2022-01-24T19:26:12Z | 5 | 5 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
widget:
- text: "Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."
- text: "RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory cortex in Figure 1."
- text: "Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI)."
model-index:
- name: en_abbreviation_detection_roberta_lar
results:
- task:
name: AbbreviationDetection
type: token-classification
metrics:
- name: Precision
type: precision
value: 0.9611772641
- name: Recall
type: recall
value: 0.9446958783
- name: F Score
type: f_score
value: 0.9528653083
---
| Feature | Description |
| --- | --- |
| **Name** | `en_abbreviation_detection_roberta_lar` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `transformer`, `abbreviationDetection` |
| **Components** | `transformer`, `abbreviationDetection` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | PLOSDataset-LREC22-Submitted |
| **License** | cc-by-sa-4.0 |
| **Author** | [Diptesh Kanojia](https://dipteshkanojia.github.io) |
### Label Scheme
<details>
<summary>View label scheme (3 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`abbreviationDetection`** | `AC`, `LF`, `O` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 95.29 |
| `ENTS_P` | 96.12 |
| `ENTS_R` | 94.47 |
| `TRANSFORMER_LOSS` | 287952.16 |
| `NER_LOSS` | 608954.79 |
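A minimal usage sketch, assuming the pipeline package has been installed from this repository and that the `abbreviationDetection` component writes its spans to `doc.ents` (consistent with the `ENTS_*` scores above):
```python
# Hedged sketch: run the abbreviation detector over a sentence
import spacy

nlp = spacy.load("en_abbreviation_detection_roberta_lar")  # requires the package to be pip-installed first
doc = nlp("Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI).")
for ent in doc.ents:
    print(ent.text, ent.label_)  # expected labels: AC (abbreviation), LF (long form)
```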
|
anas-awadalla/bert-small-finetuned-squad
|
anas-awadalla
| 2022-01-24T19:25:29Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-small-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-squad
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3138
- eval_runtime: 46.6577
- eval_samples_per_second: 231.13
- eval_steps_per_second: 14.446
- epoch: 4.0
- step: 22132
- exact_match: 71.05960264900662
- f1: 80.8260245470904
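A minimal inference sketch for this checkpoint (the question and context below are illustrative):
```python
# Minimal sketch: extractive question answering with this checkpoint
from transformers import pipeline

qa = pipeline("question-answering", model="anas-awadalla/bert-small-finetuned-squad")
result = qa(
    question="What architecture is the model based on?",
    context="The model is a fine-tuned version of prajjwal1/bert-small trained on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```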
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
birgermoell/wav2vec2-common_voice-tr-demo
|
birgermoell
| 2022-01-24T18:52:26Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5528
- Wer: 0.3811
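A minimal transcription sketch (the audio file path is a placeholder; the pipeline expects 16 kHz audio and requires ffmpeg for decoding):
```python
# Minimal sketch: transcribe a local audio file with this checkpoint
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="birgermoell/wav2vec2-common_voice-tr-demo")
print(asr("example_sv.wav")["text"])  # "example_sv.wav" is a hypothetical local file
```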
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.74 | 100 | 3.4444 | 1.0 |
| No log | 1.47 | 200 | 2.9421 | 1.0 |
| No log | 2.21 | 300 | 2.2802 | 1.0137 |
| No log | 2.94 | 400 | 0.9683 | 0.7611 |
| 3.7264 | 3.68 | 500 | 0.7941 | 0.6594 |
| 3.7264 | 4.41 | 600 | 0.6695 | 0.5751 |
| 3.7264 | 5.15 | 700 | 0.6507 | 0.5314 |
| 3.7264 | 5.88 | 800 | 0.5731 | 0.4927 |
| 3.7264 | 6.62 | 900 | 0.5723 | 0.4580 |
| 0.4592 | 7.35 | 1000 | 0.5913 | 0.4479 |
| 0.4592 | 8.09 | 1100 | 0.5562 | 0.4423 |
| 0.4592 | 8.82 | 1200 | 0.5566 | 0.4292 |
| 0.4592 | 9.56 | 1300 | 0.5492 | 0.4303 |
| 0.4592 | 10.29 | 1400 | 0.5665 | 0.4331 |
| 0.2121 | 11.03 | 1500 | 0.5610 | 0.4084 |
| 0.2121 | 11.76 | 1600 | 0.5703 | 0.4014 |
| 0.2121 | 12.5 | 1700 | 0.5669 | 0.3898 |
| 0.2121 | 13.24 | 1800 | 0.5586 | 0.3962 |
| 0.2121 | 13.97 | 1900 | 0.5656 | 0.3897 |
| 0.1326 | 14.71 | 2000 | 0.5565 | 0.3813 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
EColi/sponsorblock-base-v1
|
EColi
| 2022-01-24T17:23:23Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out
This model is a fine-tuned version of [/1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/](https://huggingface.co//1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 2518227880
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 0.0867 | 0.07 | 75000 | 0.0742 |
| 0.0783 | 0.13 | 150000 | 0.0695 |
| 0.0719 | 0.2 | 225000 | 0.0732 |
| 0.0743 | 0.27 | 300000 | 0.0663 |
| 0.0659 | 0.34 | 375000 | 0.0686 |
| 0.0664 | 0.4 | 450000 | 0.0683 |
| 0.0637 | 0.47 | 525000 | 0.0680 |
| 0.0655 | 0.54 | 600000 | 0.0641 |
| 0.0676 | 0.6 | 675000 | 0.0644 |
| 0.0704 | 0.67 | 750000 | 0.0645 |
| 0.0687 | 0.74 | 825000 | 0.0610 |
| 0.059 | 0.81 | 900000 | 0.0652 |
| 0.0666 | 0.87 | 975000 | 0.0619 |
| 0.0624 | 0.94 | 1050000 | 0.0619 |
| 0.0625 | 1.01 | 1125000 | 0.0667 |
| 0.0614 | 1.03 | 1150000 | 0.0658 |
| 0.0597 | 1.05 | 1175000 | 0.0683 |
| 0.0629 | 1.07 | 1200000 | 0.0691 |
| 0.0603 | 1.1 | 1225000 | 0.0678 |
| 0.0601 | 1.12 | 1250000 | 0.0746 |
| 0.0606 | 1.14 | 1275000 | 0.0691 |
| 0.0671 | 1.16 | 1300000 | 0.0702 |
| 0.0625 | 1.19 | 1325000 | 0.0661 |
| 0.0617 | 1.21 | 1350000 | 0.0688 |
| 0.0579 | 1.23 | 1375000 | 0.0679 |
| 0.0663 | 1.25 | 1400000 | 0.0634 |
| 0.0583 | 1.28 | 1425000 | 0.0638 |
| 0.0623 | 1.3 | 1450000 | 0.0681 |
| 0.0615 | 1.32 | 1475000 | 0.0670 |
| 0.0592 | 1.34 | 1500000 | 0.0666 |
| 0.0626 | 1.37 | 1525000 | 0.0666 |
| 0.063 | 1.39 | 1550000 | 0.0647 |
| 0.0648 | 1.41 | 1575000 | 0.0653 |
| 0.0611 | 1.43 | 1600000 | 0.0700 |
| 0.0622 | 1.46 | 1625000 | 0.0634 |
| 0.0617 | 1.48 | 1650000 | 0.0651 |
| 0.0613 | 1.5 | 1675000 | 0.0634 |
| 0.0639 | 1.52 | 1700000 | 0.0661 |
| 0.0615 | 1.54 | 1725000 | 0.0644 |
| 0.0605 | 1.57 | 1750000 | 0.0662 |
| 0.0622 | 1.59 | 1775000 | 0.0656 |
| 0.0585 | 1.61 | 1800000 | 0.0633 |
| 0.0628 | 1.63 | 1825000 | 0.0625 |
| 0.0638 | 1.66 | 1850000 | 0.0662 |
| 0.0599 | 1.68 | 1875000 | 0.0664 |
| 0.0583 | 1.7 | 1900000 | 0.0668 |
| 0.0543 | 1.72 | 1925000 | 0.0631 |
| 0.06 | 1.75 | 1950000 | 0.0629 |
| 0.0615 | 1.77 | 1975000 | 0.0644 |
| 0.0587 | 1.79 | 2000000 | 0.0663 |
| 0.0647 | 1.81 | 2025000 | 0.0654 |
| 0.0604 | 1.84 | 2050000 | 0.0639 |
| 0.0641 | 1.86 | 2075000 | 0.0636 |
| 0.0604 | 1.88 | 2100000 | 0.0636 |
| 0.0654 | 1.9 | 2125000 | 0.0652 |
| 0.0588 | 1.93 | 2150000 | 0.0638 |
| 0.0616 | 1.95 | 2175000 | 0.0657 |
| 0.0598 | 1.97 | 2200000 | 0.0646 |
| 0.0633 | 1.99 | 2225000 | 0.0645 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
haimasree/DeepSTARR
|
haimasree
| 2022-01-24T16:21:18Z | 0 | 0 | null |
[
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
|
huggingtweets/yu_kisub21
|
huggingtweets
| 2022-01-24T15:24:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/yu_kisub21/1643037750346/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1476997379857723392/L6czpqmI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ゆう🇲🇾英語を軸に人生に革新を🔥</div>
<div style="text-align: center; font-size: 14px;">@yu_kisub21</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ゆう🇲🇾英語を軸に人生に革新を🔥.
| Data | ゆう🇲🇾英語を軸に人生に革新を🔥 |
| --- | --- |
| Tweets downloaded | 1580 |
| Retweets | 366 |
| Short tweets | 1137 |
| Tweets kept | 77 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fswx6qh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yu_kisub21's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35tec8b2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35tec8b2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yu_kisub21')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dbsamu/electra-small-discriminator-finetuned-ner
|
dbsamu
| 2022-01-24T14:27:41Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electra-small-discriminator-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: en
metrics:
- name: Precision
type: precision
value: 0.7330965535385425
- name: Recall
type: recall
value: 0.7542632861138681
- name: F1
type: f1
value: 0.7435293071244329
- name: Accuracy
type: accuracy
value: 0.8883011190233978
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-ner
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3685
- Precision: 0.7331
- Recall: 0.7543
- F1: 0.7435
- Accuracy: 0.8883
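A minimal tagging sketch (the example sentence is illustrative):
```python
# Minimal sketch: named-entity tagging with the fine-tuned checkpoint
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dbsamu/electra-small-discriminator-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Barack Obama visited Paris with the United Nations delegation."))
```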
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5465 | 1.0 | 1250 | 0.4158 | 0.6932 | 0.7201 | 0.7064 | 0.8735 |
| 0.4037 | 2.0 | 2500 | 0.3817 | 0.7191 | 0.7470 | 0.7328 | 0.8828 |
| 0.3606 | 3.0 | 3750 | 0.3685 | 0.7331 | 0.7543 | 0.7435 | 0.8883 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anirudh21/bert-base-uncased-finetuned-wnli
|
anirudh21
| 2022-01-24T13:33:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6854
- Accuracy: 0.5634
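A minimal inference sketch for this checkpoint (WNLI is a sentence-pair task; the label mapping defaults to `LABEL_0`/`LABEL_1` unless the config overrides it, and the example sentences below are illustrative):
```python
# Minimal sketch: sentence-pair classification with the fine-tuned checkpoint
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "anirudh21/bert-base-uncased-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Does the second sentence follow from the first?
inputs = tokenizer(
    "The trophy doesn't fit into the brown suitcase because it is too large.",
    "The trophy is too large.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```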
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6854 | 0.5634 |
| No log | 2.0 | 80 | 0.6983 | 0.3239 |
| No log | 3.0 | 120 | 0.6995 | 0.5352 |
| No log | 4.0 | 160 | 0.6986 | 0.5634 |
| No log | 5.0 | 200 | 0.6996 | 0.5634 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet
|
deepdoctection
| 2022-01-24T13:02:44Z | 0 | 1 | null |
[
"Tensorflow",
"dataset:Publaynet",
"arxiv:1908.07836",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- Tensorflow
license: apache-2.0
datasets:
- Publaynet
---
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model and its training code have mainly been taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) .
Please check: [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836).
Note that this model differs from the model used in the paper.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with the **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## How this model was trained.
To recreate the model run on the **deep**doctection framework, run:
```python
import os
from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn
publaynet = DatasetRegistry.get_dataset("publaynet")
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml")
path_weights = ""
dataset_train = publaynet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1",
"PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50",
"BACKBONE.FREEZE_AT=0"]
build_train_config=["max_datapoints=335703"]
dataset_val = publaynet
build_val_config = ["max_datapoints=2000"]
coco_metric = MetricRegistry.get_metric("coco")
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
## How to fine-tune this model
To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
nimelinia/rut5-reply-headline-model
|
nimelinia
| 2022-01-24T12:31:54Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model was trained from rut5-base-multitask on pairs of questions and answers (in Russian).
The model demonstrates interesting behavior with the "reply" and "headline" options.
When the model creates a headline for a paragraph of text, it not only reuses phrases from the text but also generates new words and sometimes new meanings.
Examples of questions and answers:
> Как зовут отца Александра Сергеевича Пушкина?
> - Пушкин
> Где купить вкусное мороженое?
> - В супермаркете
> Красивая ли Мона Лиза?
> - Очень красивая
Examples of headlines:
> Власти Пекина из-за пандемии COVID-19 призвали жителей города отказаться от помощи и избегать любого контакта с олимпийскими машинами, попавшими в ДТП. Об этом сообщает South China Morning Post.
> - Китайский губернатор призвал жителей Пекина отказаться от помощи
> Казахский народ должен поддержать своего президента Касым-Жомарт Токаева на фоне угрозы повторения массовых беспорядков, но и властям страны следует провести демократические реформы для снижения недовольства. Об этом в интервью изданию Orda заявил бывший генеральный продюсер гостелеканала «Хабар», экс-глава канала «Ел Арна» Серик Абас-Шах.
> - Казахский народ должен поддержать Токаева
> Позиция России по макроэкономическим показателям является лучшей в мире. Об этом сказал ТАСС российский исполнительный директор в Международном валютном фонде (МВФ) Алексей Можин.
> - Российская экономика является лучшей в мире
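A minimal generation sketch; the `headline |` task prefix is an assumption carried over from rut5-base-multitask conventions and may need adjusting:
```python
# Hedged sketch: generate a headline for a Russian paragraph
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "nimelinia/rut5-reply-headline-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The "headline |" prefix is assumed, not documented in this card
text = "headline | Позиция России по макроэкономическим показателям является лучшей в мире."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```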
|
hfl/cino-base-v2
|
hfl
| 2022-01-24T10:34:45Z | 124 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual ability for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of contributions on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority corpora such as:
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
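A quick masked-prediction sketch (the example sentence is illustrative; CINO uses the XLM-R style `<mask>` token):
```python
# Minimal sketch: masked token prediction with CINO
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/cino-base-v2")
for candidate in fill_mask("今天天气很<mask>。"):
    print(candidate["token_str"], round(candidate["score"], 4))
```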
|
BillelBenoudjit/jplu-wikiann
|
BillelBenoudjit
| 2022-01-24T09:43:59Z | 0 | 0 | null |
[
"fr",
"dataset:wikiann",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
language:
- fr
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: jplu-wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: default
metrics:
- name: precision
type: precision
value: 0.897994120055078
- name: recall
type: recall
value: 0.9097421203438395
- name: f1
type: f1
value: 0.9038299466242158
- name: accuracy
type: accuracy
value: 0.9464171271196716
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jplu-wikiann
This model is a fine-tuned version of [jplu/tf-camembert-base](https://huggingface.co/jplu/tf-camembert-base) on the wikiann dataset.
It achieves the following results on the evaluation set:
- precision: 0.8980
- recall: 0.9097
- f1: 0.9038
- accuracy: 0.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 16
- eval_batch_size: 32
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
hfl/cino-large
|
hfl
| 2022-01-24T09:28:57Z | 5 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual ability for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of contributions on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority corpora such as:
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
haji2438/bertweet-base-SNS_BRANDS_200k
|
haji2438
| 2022-01-24T08:55:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bertweet-base-SNS_BRANDS_200k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-SNS_BRANDS_200k
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0428 | 1.0 | 5882 | 0.0336 |
| 0.0276 | 2.0 | 11764 | 0.0241 |
| 0.0251 | 3.0 | 17646 | 0.0243 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
st1992/paraphrase-MiniLM-L12-tagalog-v2
|
st1992
| 2022-01-24T05:48:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# st1992/paraphrase-MiniLM-L12-tagalog-v2
paraphrase-MiniLM-L12-v2 fine-tuned on the Tagalog language: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers): same as other sentence-transformers models
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('st1992/paraphrase-MiniLM-L12-tagalog-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['hindi po', 'tulog na']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('st1992/paraphrase-MiniLM-L12-tagalog-v2')
model = AutoModel.from_pretrained('st1992/paraphrase-MiniLM-L12-tagalog-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
danielbubiola/daniel_asr
|
danielbubiola
| 2022-01-24T05:30:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: daniel_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel_asr
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4565
- Wer: 0.3423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4909 | 4.0 | 500 | 1.3485 | 0.8887 |
| 0.5887 | 8.0 | 1000 | 0.4957 | 0.4641 |
| 0.2207 | 12.0 | 1500 | 0.4621 | 0.3971 |
| 0.125 | 16.0 | 2000 | 0.4339 | 0.3756 |
| 0.0829 | 20.0 | 2500 | 0.4618 | 0.3613 |
| 0.0601 | 24.0 | 3000 | 0.4564 | 0.3535 |
| 0.0456 | 28.0 | 3500 | 0.4565 | 0.3423 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
lucianpopa/autonlp-TREC-classification-522314623
|
lucianpopa
| 2022-01-24T02:31:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:lucianpopa/autonlp-data-TREC-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- lucianpopa/autonlp-data-TREC-classification
co2_eq_emissions: 15.186006626915715
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 522314623
- CO2 Emissions (in grams): 15.186006626915715
## Validation Metrics
- Loss: 0.24612033367156982
- Accuracy: 0.9643183897529735
- Macro F1: 0.9493690949638435
- Micro F1: 0.9643183897529735
- Weighted F1: 0.9642384162837268
- Macro Precision: 0.9372705571897225
- Micro Precision: 0.9643183897529735
- Weighted Precision: 0.9652870438320825
- Macro Recall: 0.9649638583139503
- Micro Recall: 0.9643183897529735
- Weighted Recall: 0.9643183897529735
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-TREC-classification-522314623
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
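To turn the raw logits into a predicted label, a short follow-up to the snippet above using standard `transformers` config attributes (not specific to this model):
```python
# Continues from the snippet above: map the highest-scoring logit to its label name
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
```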
|
jiobiala24/wav2vec2-base-checkpoint-8
|
jiobiala24
| 2022-01-24T01:26:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-8
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-7.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-7.1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9561
- Wer: 0.3271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3117 | 1.59 | 1000 | 0.5514 | 0.3451 |
| 0.2509 | 3.19 | 2000 | 0.5912 | 0.3328 |
| 0.1918 | 4.78 | 3000 | 0.6103 | 0.3346 |
| 0.1612 | 6.38 | 4000 | 0.6469 | 0.3377 |
| 0.1388 | 7.97 | 5000 | 0.6597 | 0.3391 |
| 0.121 | 9.57 | 6000 | 0.6911 | 0.3472 |
| 0.1096 | 11.16 | 7000 | 0.7300 | 0.3457 |
| 0.0959 | 12.76 | 8000 | 0.7660 | 0.3400 |
| 0.0882 | 14.35 | 9000 | 0.8316 | 0.3394 |
| 0.0816 | 15.95 | 10000 | 0.8042 | 0.3357 |
| 0.0739 | 17.54 | 11000 | 0.8087 | 0.3346 |
| 0.0717 | 19.14 | 12000 | 0.8590 | 0.3353 |
| 0.066 | 20.73 | 13000 | 0.8750 | 0.3336 |
| 0.0629 | 22.33 | 14000 | 0.8759 | 0.3333 |
| 0.0568 | 23.92 | 15000 | 0.8963 | 0.3321 |
| 0.0535 | 25.52 | 16000 | 0.9391 | 0.3323 |
| 0.0509 | 27.11 | 17000 | 0.9279 | 0.3296 |
| 0.0498 | 28.71 | 18000 | 0.9561 | 0.3271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|