| modelId<br />string (4-112) | sha<br />string (40) | lastModified<br />string (24) | tags<br />list | pipeline_tag<br />string (29 classes) | private<br />bool | author<br />string (2-38, nullable) | config<br />null | id<br />string (4-112) | downloads<br />float64 (0-36.8M, nullable) | likes<br />float64 (0-712, nullable) | library_name<br />string (17 classes) | __index_level_0__<br />int64 (0-38.5k) | readme<br />string (0-186k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
neal49/distilbert-yelp | 2c01f4f76b10b9a3205f1cc72367ac8afa412555 | 2022-05-08T07:58:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | neal49 | null | neal49/distilbert-yelp | 7 | null | transformers | 14,400 | Entry not found |
sam999/distilbert-base-uncased-finetuned-squad | d36279c86ccbda4497c02661aa40cffc3ca23f5d | 2022-05-08T18:17:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | sam999 | null | sam999/distilbert-base-uncased-finetuned-squad | 7 | null | transformers | 14,401 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6908
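A minimal usage sketch with the Transformers `pipeline` API (the question and context below are made-up examples, not taken from SQuAD):

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="sam999/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset for 0.2 epochs.",
)
print(result["answer"], result["score"])
```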
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8853 | 0.2 | 1107 | 1.6908 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
binay1999/bert-finetuned-text-classification | e53b38f1b1af994edc2c72916e29700735047033 | 2022-05-09T13:14:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | binay1999 | null | binay1999/bert-finetuned-text-classification | 7 | null | transformers | 14,402 | Entry not found |
domischwimmbeck/bert-base-german-cased-own-data-ner | 2d52f993106a7429743174462a4773b0883dcc64 | 2022-05-20T09:38:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | domischwimmbeck | null | domischwimmbeck/bert-base-german-cased-own-data-ner | 7 | null | transformers | 14,403 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-own-data-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-own-data-ner
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0535
- Precision: 0.7134
- Recall: 0.8536
- F1: 0.7772
- Accuracy: 0.9895
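A minimal usage sketch with the Transformers `pipeline` API (the German sentence below is a made-up example; the entity types depend on the private training data):

```python
from transformers import pipeline

# Token classification; aggregation_strategy="simple" merges word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="domischwimmbeck/bert-base-german-cased-own-data-ner",
    aggregation_strategy="simple",
)

print(ner("Angela Merkel besuchte gestern das Museum in München."))
```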
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.8 | 32 | 0.0308 | 0.7593 | 0.8 | 0.7791 | 0.9917 |
| No log | 1.6 | 64 | 0.0342 | 0.7756 | 0.8393 | 0.8062 | 0.9911 |
| No log | 2.4 | 96 | 0.0457 | 0.7764 | 0.8679 | 0.8196 | 0.9906 |
| No log | 3.2 | 128 | 0.0383 | 0.7524 | 0.8464 | 0.7966 | 0.9911 |
| No log | 4.0 | 160 | 0.0420 | 0.7539 | 0.8536 | 0.8007 | 0.9907 |
| No log | 4.8 | 192 | 0.0535 | 0.7134 | 0.8536 | 0.7772 | 0.9895 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
binay1999/ditilbert-finetuned-text-classification | 414a7b737e3b5ff9f00802c342a235d5ab4fb0b9 | 2022-05-10T05:53:09.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | binay1999 | null | binay1999/ditilbert-finetuned-text-classification | 7 | null | transformers | 14,404 | Entry not found |
pglauner/distilbert-base-uncased-finetuned-emotion | 389aab1a989739d086a334d97465cc9a3583c25d | 2022-05-10T17:42:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pglauner | null | pglauner/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,405 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265216393152228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.9265
- F1: 0.9265
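A minimal usage sketch with the Transformers `pipeline` API (the input sentence is a made-up example):

```python
from transformers import pipeline

# Text classification over the six emotion labels of the emotion dataset.
classifier = pipeline(
    "text-classification",
    model="pglauner/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe how well this worked, I'm thrilled!"))
```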
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8432 | 1.0 | 250 | 0.3353 | 0.8975 | 0.8939 |
| 0.2582 | 2.0 | 500 | 0.2251 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
guhuawuli/distilbert-poem_key_words | 02bbcba61fda1c4669edb77ca1c98ee1d2c04442 | 2022-05-12T01:55:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | guhuawuli | null | guhuawuli/distilbert-poem_key_words | 7 | null | transformers | 14,406 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-poem_key_words
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-poem_key_words
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 338 | 0.2103 | 0.9378 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bookbot/wav2vec2-adult-child-id-cls | 3198c3ae5b34979dba4d07e0077eeae31a5f12bc | 2022-05-12T12:36:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"id",
"arxiv:2006.11477",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | bookbot | null | bookbot/wav2vec2-adult-child-id-cls | 7 | null | transformers | 14,407 | ---
language: id
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-adult-child-id-cls
results: []
---
# Wav2Vec2 Adult/Child Indonesian Speech Classifier
Wav2Vec2 Adult/Child Indonesian Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a fine-tuned version of [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on a private adult/child Indonesian speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
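A minimal inference sketch with the Transformers `pipeline` API (`recording.wav` is a placeholder path to a local 16 kHz recording, not a file shipped with the model):

```python
from transformers import pipeline

# Audio classification (adult vs. child speech); wav2vec 2.0 expects 16 kHz mono audio.
classifier = pipeline("audio-classification", model="bookbot/wav2vec2-adult-child-id-cls")

print(classifier("recording.wav"))
```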
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ----------------------------- | ------- | ----------- | ---------------------------------------------------- |
| `wav2vec2-adult-child-id-cls` | 91M | wav2vec 2.0 | Adult/Child Indonesian Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| -------------------------------------------- | ------ | -------- | ------ |
| Adult/Child Indonesian Speech Classification | 0.2603 | 92.22% | 0.9202 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `gradient_accumulation_steps`: 1
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.2415 | 1.0 | 305 | 0.2951 | 0.8804 | 0.8695 |
| 0.202 | 2.0 | 610 | 0.2392 | 0.9124 | 0.9081 |
| 0.2161 | 3.0 | 915 | 0.2508 | 0.9199 | 0.9161 |
| 0.1348 | 4.0 | 1220 | 0.2748 | 0.9153 | 0.9126 |
| 0.162 | 5.0 | 1525 | 0.2603 | 0.9222 | 0.9202 |
## Disclaimer
Consider the biases in the pre-training datasets, which may carry over into the results of this model.
## Authors
Wav2Vec2 Adult/Child Indonesian Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.0
- Tokenizers 0.12.1
|
nielsr/pix2seq-simple | 40d35145fee7cd254502e797e3b2982eb8b44743 | 2022-05-11T10:07:50.000Z | [
"pytorch",
"pix2seq",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | nielsr | null | nielsr/pix2seq-simple | 7 | null | transformers | 14,408 | Entry not found |
ceggian/sbert_pt_reddit_mnr_256 | 5826be53a7bd0fe43daa5de5d340c6c27a21a26b | 2022-05-11T18:03:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_mnr_256 | 7 | null | sentence-transformers | 14,409 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
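Put together, the listed parameters correspond roughly to the sketch below. The `train_examples` pairs and the base checkpoint name are placeholders: the Reddit training pairs and the underlying BERT checkpoint are not part of this card.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Placeholder pairs standing in for the (post, reply) training data.
train_examples = [
    InputExample(texts=[f"placeholder post {i}", f"placeholder reply {i}"])
    for i in range(64)
]

model = SentenceTransformer("bert-base-uncased")  # assumed base model, not named in the card
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=3928,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```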
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
enoriega/kw_pubmed_5000_0.0003 | e5cca2e7635dc36ebfb260eec4a5db02777b322e | 2022-05-12T09:02:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | enoriega | null | enoriega/kw_pubmed_5000_0.0003 | 7 | null | transformers | 14,410 | Entry not found |
anuragshas/wav2vec2-xls-r-300m-or-cv9-with-lm | cdb050e67edc37171819980e39e6ce9639e6ba83 | 2022-05-17T22:40:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:mozilla-foundation/common_voice_9_0",
"transformers",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-or-cv9-with-lm | 7 | null | transformers | 14,411 | ---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Odia
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_9_0
name: Common Voice 9
args: or
metrics:
- type: wer
value: 44.343
name: Test WER
- name: Test CER
type: cer
value: 10.989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7886
- Wer: 0.5495
- Cer: 0.1311
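A minimal inference sketch with the Transformers `pipeline` API (`sample.wav` is a placeholder path to a 16 kHz Odia recording; `pyctcdecode` and `kenlm` need to be installed to use the bundled language model during decoding):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anuragshas/wav2vec2-xls-r-300m-or-cv9-with-lm",
)

print(asr("sample.wav")["text"])
```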
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3071
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 3.5875 | 66.62 | 400 | 3.4289 | 1.0 | 1.0 |
| 1.4065 | 133.31 | 800 | 0.7243 | 0.6619 | 0.1734 |
| 1.007 | 199.92 | 1200 | 0.6611 | 0.5831 | 0.1457 |
| 0.7984 | 266.62 | 1600 | 0.6387 | 0.5520 | 0.1332 |
| 0.6117 | 333.31 | 2000 | 0.7424 | 0.5682 | 0.1376 |
| 0.4926 | 399.92 | 2400 | 0.7627 | 0.5514 | 0.1314 |
| 0.416 | 466.62 | 2800 | 0.7816 | 0.5604 | 0.1320 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
gary109/STAS_yolos-small | 6c03e7922f73eacc392669bb544006b074c669ab | 2022-05-12T14:48:32.000Z | [
"pytorch",
"yolos",
"object-detection",
"transformers"
]
| object-detection | false | gary109 | null | gary109/STAS_yolos-small | 7 | null | transformers | 14,412 | Entry not found |
ali-issa/2-wav2vec2-arabic-gpu-colab-similar-to-german | 223888abd04749ad87da00c4b279c09a227ed067 | 2022-05-14T10:05:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | ali-issa | null | ali-issa/2-wav2vec2-arabic-gpu-colab-similar-to-german | 7 | null | transformers | 14,413 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabic-gpu-colab-similar-to-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-arabic-gpu-colab-similar-to-german
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6127
- Wer: 0.4322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.4731 | 2.83 | 400 | 2.9055 | 1.0 |
| 2.0367 | 5.67 | 800 | 0.7727 | 0.6700 |
| 0.8081 | 8.51 | 1200 | 0.6407 | 0.5320 |
| 0.5753 | 11.35 | 1600 | 0.5982 | 0.4709 |
| 0.4604 | 14.18 | 2000 | 0.5999 | 0.4651 |
| 0.3902 | 17.02 | 2400 | 0.6001 | 0.4469 |
| 0.3443 | 19.85 | 2800 | 0.5957 | 0.4404 |
| 0.3152 | 22.69 | 3200 | 0.5911 | 0.4304 |
| 0.2924 | 25.53 | 3600 | 0.6170 | 0.4392 |
| 0.2779 | 28.37 | 4000 | 0.6127 | 0.4322 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ahmeddbahaa/mbart-large-50-finetuned-persian | 0f8a1bca38e3dfe752c9cceaaaf3718bd5f998c8 | 2022-05-15T04:01:56.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"persian",
"MBart50",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/mbart-large-50-finetuned-persian | 7 | null | transformers | 14,414 | ---
tags:
- summarization
- persian
- MBart50
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mbart-large-50-finetuned-persian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-persian
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1932
- Rouge-1: 26.11
- Rouge-2: 8.11
- Rouge-l: 21.09
- Gen Len: 37.29
- Bertscore: 71.08
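A minimal inference sketch with the Transformers `pipeline` API (the `article` string is a placeholder for a Persian news text; the saved mBART-50 tokenizer is assumed to carry suitable language settings):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mbart-large-50-finetuned-persian")

article = "..."  # placeholder: paste a Persian news article here
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```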
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 5.5612 | 1.0 | 1476 | 4.5015 | 17.07 | 3.14 | 13.54 | 47.49 | 66.83 |
| 4.3049 | 2.0 | 2952 | 4.1055 | 22.63 | 5.89 | 18.03 | 40.43 | 69.23 |
| 3.8154 | 3.0 | 4428 | 3.9822 | 24.57 | 7.15 | 19.74 | 37.35 | 70.36 |
| 3.3401 | 4.0 | 5904 | 4.0088 | 25.84 | 7.96 | 20.95 | 37.56 | 70.83 |
| 2.8879 | 5.0 | 7380 | 4.1932 | 26.24 | 8.26 | 21.23 | 37.78 | 71.05 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Jeevesh8/6ep_bert_ft_cola-73 | 1a8b525b3e0a7a69e7671513fdad9ae912f03a04 | 2022-05-14T14:00:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/6ep_bert_ft_cola-73 | 7 | null | transformers | 14,415 | Entry not found |
VictorZhu/Anchor-Classification-DMV | be337e323a32eb839e9c884c6b613afc9ef7e3ea | 2022-06-08T15:58:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | VictorZhu | null | VictorZhu/Anchor-Classification-DMV | 7 | null | transformers | 14,416 | Entry not found |
bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-00 | 8e04582343c227c3d217976d8a5b4ab8b74e53f5 | 2022-06-16T13:31:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | bekirbakar | null | bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-00 | 7 | null | transformers | 14,417 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-tr-fine-tuning-00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-tr-fine-tuning-00
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3974
- Wer: 0.4784
## Training Procedure
### Training Hyper-parameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training Results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0376 | 4.21 | 400 | 2.4295 | 1.0 |
| 0.8375 | 8.42 | 800 | 0.4717 | 0.6291 |
| 0.3246 | 12.63 | 1200 | 0.4066 | 0.5528 |
| 0.216 | 16.84 | 1600 | 0.4022 | 0.5149 |
| 0.1664 | 21.05 | 2000 | 0.3972 | 0.5013 |
| 0.1413 | 25.26 | 2400 | 0.3982 | 0.4894 |
| 0.1197 | 29.47 | 2800 | 0.3974 | 0.4784 |
### Framework Versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
chrisvinsen/xlsr-wav2vec2-final-2 | fd692f8a8794aff8c88e8eac1411e182efe5b7eb | 2022-05-19T00:11:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-final-2 | 7 | null | transformers | 14,418 | Entry not found |
ankitkupadhyay/mt5-small-finetuned-multilingual-xlsum | d3b793b04d69bfc77561ed842a6ab5bb9f3d66d1 | 2022-05-16T22:44:17.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"multilingual model",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ankitkupadhyay | null | ankitkupadhyay/mt5-small-finetuned-multilingual-xlsum | 7 | null | transformers | 14,419 | ---
license: apache-2.0
tags:
- multilingual model
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-multilingual-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-multilingual-xlsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7979
- Rouge1: 9.2017
- Rouge2: 2.3976
- Rougel: 7.7055
- Rougelsum: 7.7347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 4.4524 | 1.0 | 3375 | 2.9251 | 8.1565 | 1.9058 | 6.7949 | 6.8196 |
| 3.6707 | 2.0 | 6750 | 2.8524 | 8.7884 | 2.147 | 7.339 | 7.3678 |
| 3.5273 | 3.0 | 10125 | 2.8184 | 9.1157 | 2.3886 | 7.6228 | 7.6592 |
| 3.4452 | 4.0 | 13500 | 2.8028 | 9.2619 | 2.406 | 7.7607 | 7.7921 |
| 3.4074 | 5.0 | 16875 | 2.7979 | 9.2017 | 2.3976 | 7.7055 | 7.7347 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huawei-noah/AutoTinyBERT-S4 | f7b81320c31659f1ad8213b92e182d7c181d7906 | 2022-05-16T14:57:40.000Z | [
"pytorch",
"transformers",
"license:other"
]
| null | false | huawei-noah | null | huawei-noah/AutoTinyBERT-S4 | 7 | null | transformers | 14,420 | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow BERT's default architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
Amalq/autotrain-smm4h_large_roberta_clean-874027878 | 2be057f967885a1e04b37a466ee374703c8d720c | 2022-05-16T18:44:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:Amalq/autotrain-data-smm4h_large_roberta_clean",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Amalq | null | Amalq/autotrain-smm4h_large_roberta_clean-874027878 | 7 | null | transformers | 14,421 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Amalq/autotrain-data-smm4h_large_roberta_clean
co2_eq_emissions: 9.123490454955585
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 874027878
- CO2 Emissions (in grams): 9.123490454955585
## Validation Metrics
- Loss: 0.35724225640296936
- Accuracy: 0.8571428571428571
- Precision: 0.7637362637362637
- Recall: 0.8910256410256411
- AUC: 0.9267555361305361
- F1: 0.8224852071005917
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Amalq/autotrain-smm4h_large_roberta_clean-874027878
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Amalq/autotrain-smm4h_large_roberta_clean-874027878", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Amalq/autotrain-smm4h_large_roberta_clean-874027878", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Suhong/distilbert-base-uncased-emotion-climateChange | 0fc5939ef7e7f23d8c1e3d0cd916e1116d083f3a | 2022-05-19T01:19:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Suhong | null | Suhong/distilbert-base-uncased-emotion-climateChange | 7 | null | transformers | 14,422 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-emotion-climateChange
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-emotion-climateChange
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7189
- Accuracy: 0.8416
- F1: 0.7735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 23 | 0.9234 | 0.8416 | 0.7735 |
| No log | 2.0 | 46 | 0.7189 | 0.8416 | 0.7735 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alk/pegasus-cnn-dailymail | ca147188f73b22343e95d014b76d94b6d0e69620 | 2022-05-17T05:36:34.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | alk | null | alk/pegasus-cnn-dailymail | 7 | null | transformers | 14,423 | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: pegasus-cnn-dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn-dailymail
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5344 | 0.6 | 500 | 1.4497 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anuj55/all-MiniLM-L6-v2-finetuned-polifact | 4bca151bef856d77b89d65e1c8944eeb4b93ce66 | 2022-05-17T12:28:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | anuj55 | null | anuj55/all-MiniLM-L6-v2-finetuned-polifact | 7 | null | transformers | 14,424 | Entry not found |
AnonymousSub/longformer-base-4096_squad2.0 | 7a4ab27dcf55e80d91d211aa43e5ebc989e0e5bb | 2022-05-18T23:50:21.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | AnonymousSub | null | AnonymousSub/longformer-base-4096_squad2.0 | 7 | null | transformers | 14,425 | Entry not found |
leonweber/bunsen_base_best | b070da5af192ce39140ace1c2c3b1f91b97087ce | 2022-05-28T09:48:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | leonweber | null | leonweber/bunsen_base_best | 7 | null | transformers | 14,426 | Entry not found |
ankitkupadhyay/outputs | 90cfb3b8565fa3c6b018bcff8871d1467deb1352 | 2022-05-19T19:13:39.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | ankitkupadhyay | null | ankitkupadhyay/outputs | 7 | null | transformers | 14,427 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
- Pearson: 0.8314
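A minimal loading sketch (the sentence pair is a made-up placeholder; the card does not describe the input format, and the single-output regression head is inferred from the Pearson metric):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ankitkupadhyay/outputs")
model = AutoModelForSequenceClassification.from_pretrained("ankitkupadhyay/outputs")

# Placeholder sentence pair (illustrative only).
inputs = tokenizer("first phrase", "second phrase", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a single similarity-style score, if the head is a regression head
```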
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 214 | 0.0256 | 0.7816 |
| No log | 2.0 | 428 | 0.0251 | 0.8115 |
| 0.0355 | 3.0 | 642 | 0.0257 | 0.8186 |
| 0.0355 | 4.0 | 856 | 0.0220 | 0.8255 |
| 0.0133 | 5.0 | 1070 | 0.0226 | 0.8287 |
| 0.0133 | 6.0 | 1284 | 0.0220 | 0.8321 |
| 0.0133 | 7.0 | 1498 | 0.0224 | 0.8314 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
domischwimmbeck/bert-base-german-cased-20000-ner | 144f21480096eb8d4cd67104dc21d43d3b042c45 | 2022-05-20T13:29:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | domischwimmbeck | null | domischwimmbeck/bert-base-german-cased-20000-ner | 7 | null | transformers | 14,428 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-20000-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-20000-ner
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0826
- Precision: 0.8904
- Recall: 0.8693
- F1: 0.8797
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.11 | 64 | 0.0840 | 0.8076 | 0.7842 | 0.7957 | 0.9752 |
| No log | 0.23 | 128 | 0.0787 | 0.8119 | 0.7735 | 0.7922 | 0.9746 |
| No log | 0.34 | 192 | 0.0677 | 0.8264 | 0.8362 | 0.8313 | 0.9794 |
| No log | 0.45 | 256 | 0.0630 | 0.8440 | 0.8125 | 0.8280 | 0.9801 |
| No log | 0.57 | 320 | 0.0664 | 0.8035 | 0.8391 | 0.8209 | 0.9782 |
| No log | 0.68 | 384 | 0.0674 | 0.8850 | 0.8285 | 0.8558 | 0.9819 |
| No log | 0.79 | 448 | 0.0631 | 0.8834 | 0.8598 | 0.8714 | 0.9825 |
| 0.094 | 0.9 | 512 | 0.0572 | 0.8933 | 0.8462 | 0.8691 | 0.9832 |
| 0.094 | 1.02 | 576 | 0.0728 | 0.8520 | 0.8681 | 0.8600 | 0.9795 |
| 0.094 | 1.13 | 640 | 0.0784 | 0.8496 | 0.8717 | 0.8605 | 0.9800 |
| 0.094 | 1.24 | 704 | 0.0721 | 0.8868 | 0.8527 | 0.8695 | 0.9814 |
| 0.094 | 1.36 | 768 | 0.0700 | 0.8755 | 0.8362 | 0.8554 | 0.9808 |
| 0.094 | 1.47 | 832 | 0.0590 | 0.8662 | 0.8610 | 0.8636 | 0.9822 |
| 0.094 | 1.58 | 896 | 0.0615 | 0.8692 | 0.8764 | 0.8728 | 0.9821 |
| 0.094 | 1.7 | 960 | 0.0670 | 0.8812 | 0.8557 | 0.8683 | 0.9826 |
| 0.0413 | 1.81 | 1024 | 0.0623 | 0.9061 | 0.8557 | 0.8802 | 0.9843 |
| 0.0413 | 1.92 | 1088 | 0.0570 | 0.8891 | 0.8770 | 0.8830 | 0.9833 |
| 0.0413 | 2.04 | 1152 | 0.0643 | 0.8859 | 0.8859 | 0.8859 | 0.9831 |
| 0.0413 | 2.15 | 1216 | 0.0705 | 0.8824 | 0.8740 | 0.8782 | 0.9830 |
| 0.0413 | 2.26 | 1280 | 0.0698 | 0.8818 | 0.8557 | 0.8685 | 0.9824 |
| 0.0413 | 2.37 | 1344 | 0.0826 | 0.8904 | 0.8693 | 0.8797 | 0.9832 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
connectivity/feather_berts_15 | 30574a68bff7ce81225235d8c766fb936547963d | 2022-05-21T14:27:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_15 | 7 | null | transformers | 14,429 | Entry not found |
connectivity/feather_berts_44 | 6960cc352fa81ea9cf4b3fc5aa67e6c7b188c4f9 | 2022-05-21T14:28:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_44 | 7 | null | transformers | 14,430 | Entry not found |
connectivity/feather_berts_99 | 8df41b4f6982e33855cc2bf9547fe970195aa877 | 2022-05-21T14:31:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/feather_berts_99 | 7 | null | transformers | 14,431 | Entry not found |
connectivity/bert_ft_qqp-0 | df16873724715a510537ba690937349ed5f78e0f | 2022-05-21T16:30:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-0 | 7 | null | transformers | 14,432 | Entry not found |
connectivity/bert_ft_qqp-2 | 8db10ad4228b58e276599907be10e68af227bac3 | 2022-05-21T16:31:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-2 | 7 | null | transformers | 14,433 | Entry not found |
connectivity/bert_ft_qqp-3 | 518fdfe86d7486cfd4236507bc12280c8d6a4900 | 2022-05-21T16:31:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-3 | 7 | null | transformers | 14,434 | Entry not found |
connectivity/bert_ft_qqp-4 | 5b625109533c930996caa0123dc3b62810229e40 | 2022-05-21T16:31:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-4 | 7 | null | transformers | 14,435 | Entry not found |
connectivity/bert_ft_qqp-11 | 414286baa930fe6ddf6930c530824f4d07fbc956 | 2022-05-21T16:31:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-11 | 7 | null | transformers | 14,436 | Entry not found |
north/t5_large_NCC | e9ccebf514a58de7c7c32efc6d3d643965074c3d | 2022-06-01T19:41:38.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | north | null | north/t5_large_NCC | 7 | null | transformers | 14,437 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5 models are a set of Norwegian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|✔|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/large/norwegian_NCC_plus_English_t5x_large/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to make their own evaluations. The main advantage of the T5-models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), with no early stopping, and the recommended rank classification was not used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might therefore be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation and NLI, it is well documented that there is a clear benefit to a step of unsupervised LM-training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialized with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and more suited as a basis for translation tasks.
While the huge models almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format.
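As a minimal PyTorch loading sketch, the pretrained checkpoint can be queried for span filling with the standard seq2seq classes (the input is one of the widget examples above; remember that the model must be finetuned before it can solve downstream tasks):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("north/t5_large_NCC")
model = AutoModelForSeq2SeqLM.from_pretrained("north/t5_large_NCC")

# The pretrained model only fills masked spans; downstream tasks require finetuning.
text = "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```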
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team has provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
connectivity/cola_6ep_ft-39 | 5748587c89c41c0f46fb3d2fa7632bc8bd3a5ade | 2022-05-21T16:43:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/cola_6ep_ft-39 | 7 | null | transformers | 14,438 | Entry not found |
connectivity/cola_6ep_ft-40 | ef9f166f83ba7d1369aa076ed65ee6bc688d853b | 2022-05-21T16:43:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/cola_6ep_ft-40 | 7 | null | transformers | 14,439 | Entry not found |
connectivity/cola_6ep_ft-43 | 04cc999c690c7f1d5328f55568c4211386beb441 | 2022-05-21T16:43:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/cola_6ep_ft-43 | 7 | null | transformers | 14,440 | Entry not found |
connectivity/cola_6ep_ft-45 | e02d6c9d77124bb705ba3c88e03f5932f9ca00a8 | 2022-05-21T16:43:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/cola_6ep_ft-45 | 7 | null | transformers | 14,441 | Entry not found |
connectivity/cola_6ep_ft-47 | da01cad61b621631691510605c197acb51b695a1 | 2022-05-21T16:43:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/cola_6ep_ft-47 | 7 | null | transformers | 14,442 | Entry not found |
connectivity/bert_ft_qqp-85 | 1a92bf2fbf6d06fafc49b92c6b9ae104255cb631 | 2022-05-21T16:37:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-85 | 7 | null | transformers | 14,443 | Entry not found |
globuslabs/ScholarBERT | 88ed6b5d13eb476724adcc0ae59f039eef179fa7 | 2022-05-24T03:18:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"arxiv:2205.11342",
"transformers",
"science",
"multi-displinary",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | globuslabs | null | globuslabs/ScholarBERT | 7 | null | transformers | 14,444 | ---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT_100 Model
This is the **ScholarBERT_100** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**221B tokens**).
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
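A minimal masked-language-model sketch with the Transformers `pipeline` API (the sentence is a made-up example, not drawn from the PRD corpus):

```python
from transformers import pipeline

# The model is cased, so keep the original capitalisation of the input.
fill = pipeline("fill-mask", model="globuslabs/ScholarBERT")

print(fill("The enzyme catalyzes the [MASK] of the substrate."))
```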
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocabulary and the model are pretrained on **100% of the PRD** scientific literature dataset.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted the text of 75,496,055 articles published in 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2022scholarbert,
doi = {10.48550/ARXIV.2205.11342},
url = {https://arxiv.org/abs/2205.11342},
author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
title = {ScholarBERT: Bigger is Not Always Better},
publisher = {arXiv},
year = {2022}
}
```
|
deutschmann/mdr-roberta-test | e8defb06f1270e3c8c7eefdb168f3287ed590cdb | 2022-05-23T08:53:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | deutschmann | null | deutschmann/mdr-roberta-test | 7 | null | transformers | 14,445 | Entry not found |
roschmid/distilbert-base-uncased-finetuned-TT2-exam | 0118ca0b8118ec7e47841e8f79d37aca5116ddf0 | 2022-05-23T11:10:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | roschmid | null | roschmid/distilbert-base-uncased-finetuned-TT2-exam | 7 | null | transformers | 14,446 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-TT2-exam
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9221537106364237
- name: Recall
type: recall
value: 0.9369056941492337
- name: F1
type: f1
value: 0.9294711725209478
- name: Accuracy
type: accuracy
value: 0.983509936931069
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-TT2-exam
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9222
- Recall: 0.9369
- F1: 0.9295
- Accuracy: 0.9835
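A minimal inference sketch (not produced by the trainer; the example sentence is illustrative):
```python
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy="simple" merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="roschmid/distilbert-base-uncased-finetuned-TT2-exam",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```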
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2509 | 1.0 | 879 | 0.0733 | 0.8855 | 0.9212 | 0.9030 | 0.9777 |
| 0.0505 | 2.0 | 1758 | 0.0618 | 0.9221 | 0.9330 | 0.9275 | 0.9827 |
| 0.0309 | 3.0 | 2637 | 0.0620 | 0.9222 | 0.9369 | 0.9295 | 0.9835 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
GioReg/dbmdzHateSpeech | 19d28958cef24f7f3fd10ebda8727cc7e25c7a5e | 2022-05-23T17:02:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | GioReg | null | GioReg/dbmdzHateSpeech | 7 | null | transformers | 14,447 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dbmdzHateSpeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbmdzHateSpeech
This model is a fine-tuned version of [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7919
- Accuracy: 0.706
- F1: 0.3524
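A minimal inference sketch (not produced by the trainer; the label names depend on the `id2label` mapping stored in the checkpoint and may show up as `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

# Sequence-classification pipeline over the fine-tuned Italian checkpoint.
classifier = pipeline("text-classification", model="GioReg/dbmdzHateSpeech")
print(classifier("Questo è solo un esempio di frase in italiano."))
```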
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
GioReg/mBERTnews | d32a233dffbbe43aedddfba7c451af9271b0e404 | 2022-05-23T18:02:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | GioReg | null | GioReg/mBERTnews | 7 | null | transformers | 14,448 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mBERTnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERTnews
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1136
- Accuracy: 0.9739
- F1: 0.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AfnanAl/mT5small-ArabicSummary | e83cc6c570051e0ed490fae096195be9a72d23aa | 2022-05-25T05:00:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | AfnanAl | null | AfnanAl/mT5small-ArabicSummary | 7 | null | transformers | 14,449 | Entry not found |
nam7197/vi-nli-xml-roberta-base | 2189f9a6a08ffb3090201ba6f51f39922b291b18 | 2022-07-16T01:43:34.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | nam7197 | null | nam7197/vi-nli-xml-roberta-base | 7 | null | transformers | 14,450 | # Dataset: https://huggingface.co/datasets/xnli/viewer/vi/train
# Github: https://github.com/namlv97/vi-nli-xlm-roberta-base
```python
>>> import torch
>>> from transformers import AutoTokenizer,AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
>>> model=AutoModelForSequenceClassification.from_pretrained('nam7197/vi-nli-xml-roberta-base')
>>> premise="Vâng, tôi thậm chí không nghĩ về điều đó, nhưng tôi đã rất thất vọng, và, tôi lại nói chuyện với anh ta lần nữa."
>>> hypothesis="Tôi đã không nói chuyện với anh ta nữa."
>>> label=2 #contradiction
>>> inputs=tokenizer(premise,hypothesis,return_tensors='pt')
>>> model.eval()
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> probs= torch.nn.functional.softmax(outputs.logits,dim=-1)
>>> pred_label=torch.argmax(probs,dim=-1)
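>>> # Label ids are assumed to follow the XNLI convention used above: 0 = entailment, 1 = neutral, 2 = contradiction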
```
# Performance
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| entailment | 0.79256 | 0.77784 | 0.78513 | 1670 |
| neutral | 0.77192 | 0.70120 | 0.73486 | 1670 |
| contradiction| 0.76429 | 0.84850 | 0.80420 | 1670 |
| accuracy | | | 0.77585 | 5010 |
| macro avg | 0.77626 | 0.77585 | 0.77473 | 5010 |
| weighted avg | 0.77626 | 0.77585 | 0.77473 | 5010 | |
jrmax/bart-base-r3d3 | 8cc752f8679d679816d62f930b7c0e23bc9dd9e3 | 2022-05-25T14:02:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | jrmax | null | jrmax/bart-base-r3d3 | 7 | null | transformers | 14,451 | Entry not found |
MadFace/t5-arxiv | aad48f3089ce650cd9d2463f42c206230fec877b | 2022-05-26T08:00:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | MadFace | null | MadFace/t5-arxiv | 7 | null | transformers | 14,452 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arxiv
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3852
- Rouge1: 18.0722
- Rouge2: 6.8453
- Rougel: 14.3659
- Rougelsum: 16.4137
- Gen Len: 19.0
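A minimal inference sketch (not produced by the trainer; whether a `"summarize: "` prefix is required depends on how the checkpoint was fine-tuned, which is not stated here):
```python
from transformers import pipeline

# Summarization pipeline over the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="MadFace/t5-arxiv")
text = "summarize: " + "Paste an arXiv abstract or paper excerpt here."
print(summarizer(text, max_length=64, min_length=10))
```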
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5169 | 1.0 | 12500 | 2.3852 | 18.0722 | 6.8453 | 14.3659 | 16.4137 | 19.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
alice-hml/mBART_french_correction | c7c580f39e2c0cf866d1b96171bcb6dc4a63a0de | 2022-05-26T13:15:38.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:other",
"autotrain_compatible"
]
| text2text-generation | false | alice-hml | null | alice-hml/mBART_french_correction | 7 | null | transformers | 14,453 | ---
license: other
---
|
aioxlabs/dvoice-amharic | cd233859f9c53f043d6d36b4d2eb7fad13545a45 | 2022-05-28T08:22:00.000Z | [
"wav2vec2",
"feature-extraction",
"dar",
"dataset:commonvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
]
| automatic-speech-recognition | false | aioxlabs | null | aioxlabs/dvoice-amharic | 7 | null | speechbrain | 14,454 | ---
language: "dar"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Amharic (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on the [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) Amharic dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 6.71 | 25.50 | 6.57 | 24.92 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Amharic dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Amharic)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="aioxlabs/dvoice-amharic", savedir="pretrained_models/asr-wav2vec2-dvoice-amh")
asr_model.transcribe_file('./the_path_to_your_audio_file')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
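A short sketch combining the call above with the `run_opts` argument:
```python
from speechbrain.pretrained import EncoderASR

# Same call as above, but with run_opts so the model runs on the GPU.
asr_model = EncoderASR.from_hparams(
    source="aioxlabs/dvoice-amharic",
    savedir="pretrained_models/asr-wav2vec2-dvoice-amh",
    run_opts={"device": "cuda"},
)
asr_model.transcribe_file('./the_path_to_your_audio_file')
```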
# Training
To train the model from scratch, please see our GitHub tutorial [here](https://github.com/AIOXLABS/DVoice).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# About DVoice
DVoice is a community initiative that aims to provide African low-resource languages with data and models to facilitate their use of voice technologies. The lack of data for these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages, including Darija (the Moroccan Arabic dialect), whose dataset appears in this version, as well as Wolof, Mandingo, Serere, Pular, Diola and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of these technologies together.
# About AIOX Labs
Based in Rabat, London and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It serves the growth of groups, the optimization of processes and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability to the specific needs of each client.
- A complementary team made up of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The laboratory's research areas are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution. |
leonweber/foo | e986d212914fa6db9944d8affb699e6dbf59e6b8 | 2022-05-29T09:29:28.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | leonweber | null | leonweber/foo | 7 | null | transformers | 14,455 | Entry not found |
huggingtweets/algodtrading | 8ab2d6821cd3189147da465a935b0041fdc552b3 | 2022-05-27T22:21:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/algodtrading | 7 | null | transformers | 14,456 | ---
language: en
thumbnail: http://www.huggingtweets.com/algodtrading/1653690066290/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509493999987474434/nB7rOJnT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Algod🫐</div>
<div style="text-align: center; font-size: 14px;">@algodtrading</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Algod🫐.
| Data | Algod🫐 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 56 |
| Short tweets | 391 |
| Tweets kept | 2802 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mz6oljo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @algodtrading's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1oouvcmj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1oouvcmj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/algodtrading')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hoanhtu/vi-large | 54bb9f78672c171e8029c0adab5e37c7a1df9e5c | 2022-06-05T08:36:06.000Z | [
"pytorch",
"tf",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | hoanhtu | null | hoanhtu/vi-large | 7 | null | transformers | 14,457 | Entry not found |
Anjoe/german-poetry-distilbert | f41de1f960e115b238081f713847376856141dfb | 2022-07-21T14:27:30.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Anjoe | null | Anjoe/german-poetry-distilbert | 7 | null | transformers | 14,458 | Entry not found |
KoichiYasuoka/deberta-base-thai | a9b10052949afdfdf3700ba25d58d3ee636ab199 | 2022-07-16T10:52:25.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"th",
"transformers",
"thai",
"masked-lm",
"wikipedia",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-thai | 7 | null | transformers | 14,459 | ---
language:
- "th"
tags:
- "thai"
- "masked-lm"
- "wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# deberta-base-thai
## Model Description
This is a DeBERTa(V2) model pre-trained on Thai Wikipedia texts. You can fine-tune `deberta-base-thai` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-base-thai-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-base-thai-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-thai")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-base-thai")
```
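As a quick check, the loaded checkpoint can also be driven through the fill-mask pipeline; the Thai sentence below is only an illustration.
```py
from transformers import pipeline

# [MASK] is the mask token declared for this checkpoint.
fill_mask = pipeline("fill-mask", model="KoichiYasuoka/deberta-base-thai")
print(fill_mask("กรุงเทพมหานครเป็น[MASK]ของประเทศไทย"))
```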
|
ceggian/bart_post_trained_reddit_batch64 | b47ad5c15f68e74aa029261fbb48ed82a9935f8d | 2022-05-30T18:05:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | ceggian | null | ceggian/bart_post_trained_reddit_batch64 | 7 | null | transformers | 14,460 | Entry not found |
BigSalmon/InformalToFormalLincoln49 | 87440f099309bdc0cd2a14b9b772ffe6e971d284 | 2022-06-07T01:12:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln49 | 7 | null | transformers | 14,461 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln49")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln49")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
``` |
malra/segformer-b0-finetuned-segments-sidewalk-4 | 4d91df6747492b36e06ad6c213271529a708c732 | 2022-05-31T15:42:53.000Z | [
"pytorch",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-segmentation | false | malra | null | malra/segformer-b0-finetuned-segments-sidewalk-4 | 7 | null | transformers | 14,462 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-4
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5207
- Mean Iou: 0.1023
- Mean Accuracy: 0.1567
- Overall Accuracy: 0.6612
- Per Category Iou: [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0]
- Per Category Accuracy: [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0]
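A minimal inference sketch (not part of the auto-generated card; the image path is a placeholder, and if the repo does not ship a preprocessor config the feature extractor can be loaded from the base `nvidia/mit-b0` checkpoint instead):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Load the fine-tuned checkpoint and run a single image through it.
feature_extractor = SegformerFeatureExtractor.from_pretrained("malra/segformer-b0-finetuned-segments-sidewalk-4")
model = SegformerForSemanticSegmentation.from_pretrained("malra/segformer-b0-finetuned-segments-sidewalk-4")

image = Image.open("sidewalk_scene.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids
```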
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.8255 | 1.0 | 25 | 3.0220 | 0.0892 | 0.1429 | 0.6352 | [0.0, 0.3631053229188519, 0.6874502125236047, 0.0, 0.012635239862746197, 0.001133215250040838, 0.0, 0.00463024415429387, 2.6557099661207286e-05, 0.0, 0.3968535016422742, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4820466790242289, 0.0, 0.00693999220077067, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6134928158666486, 0.05160593984758798, 0.5016270369795023, 0.0, 0.0, 0.00023524914354608678, 0.0] | [nan, 0.6625398055826, 0.851744092156527, 0.0, 0.01307675614921835, 0.001170877257777663, nan, 0.004771009467501389, 2.6941417811356193e-05, 0.0, 0.9316713675735513, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7310221003907382, 0.0, 0.0070371168820434, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.948375993368795, 0.056265031783493576, 0.5061367774453964, 0.0, 0.0, 0.00023723449281691698, 0.0] |
| 2.5443 | 2.0 | 50 | 2.5207 | 0.1023 | 0.1567 | 0.6612 | [0.0, 0.37997208823402434, 0.7030895600821837, 0.0, 0.0020740824048893942, 0.0006611109803275343, 0.0, 0.0009644717061794479, 0.0, 0.0, 0.44780560238339745, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4962679673706645, 0.0, 0.008267299447856608, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6719286019431624, 0.1932540547332544, 0.6762198255750292, 0.0, 0.0, 0.0003312368464636427, 0.0] | [nan, 0.7085417733756095, 0.8643251797889624, 0.0, 0.0020922282164545967, 0.0006691672739475508, nan, 0.0009725011389865425, 0.0, 0.0, 0.9224475476880146, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7984415122785299, 0.0, 0.008394275137866055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9294223049507054, 0.2306496542338313, 0.7045666997791757, 0.0, 0.0, 0.0003315891206418271, 0.0] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
yukta10/finetuning-sentiment-model-3000-samples | 32985a807eb6110f939af0c0821ef62dda76ed81 | 2022-05-31T18:29:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | yukta10 | null | yukta10/finetuning-sentiment-model-3000-samples | 7 | null | transformers | 14,463 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [federicopascual/finetuning-sentiment-model-3000-samples](https://huggingface.co/federicopascual/finetuning-sentiment-model-3000-samples) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
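For reference, the hyperparameters above correspond roughly to the following `TrainingArguments` (a sketch, not the exact training script; `output_dir` is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon are the defaults listed above
)
```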
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
joaogante/test_text | e7e523d376bfb794ce53256716572d91e87bf46d | 2022-06-15T16:53:59.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"distilbert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | joaogante | null | joaogante/test_text | 7 | null | transformers | 14,464 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/master/examples/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT was pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 16GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
malra/segformer-b5-segments-warehouse1 | 9a8a20c7811870c7f3e9db71acc2bc80b882d562 | 2022-05-31T20:54:00.000Z | [
"pytorch",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-segmentation | false | malra | null | malra/segformer-b5-segments-warehouse1 | 7 | null | transformers | 14,465 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-segments-warehouse1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-segments-warehouse1
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1610
- Mean Iou: 0.6952
- Mean Accuracy: 0.8014
- Overall Accuracy: 0.9648
- Per Category Iou: [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979]
- Per Category Accuracy: [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.1656 | 1.0 | 787 | 0.1917 | 0.5943 | 0.6937 | 0.9348 | [0.0, 0.8760430595457738, 0.8113714411434076, 0.9533787339343942, 0.8499988352439646, 0.9330256290984922, 0.964368918196211, 0.6984009498117659, 0.9341093239597545, 0.288411561596369, 0.0, 0.6496866199024376, 0.4510074387900882, 0.5206343319728309, 0.6377305875444397, 0.5391733301507737, 0.1395685713288422, 0.390702947845805, 0.6999919374344916, 0.548023343373494] | [nan, 0.9502542152644661, 0.9516900451328754, 0.9788975544390225, 0.921821413759201, 0.9534230318615367, 0.9778020069070933, 0.8108538425970355, 0.970571911491369, 0.2993067645848501, 0.0, 0.7454496363566233, 0.5849840255591054, 0.5858306866277158, 0.7137540570947559, 0.6925710548100606, 0.16576498144808574, 0.4165357186026834, 0.8142326593390103, 0.6474578532983408] |
| 0.0948 | 2.0 | 1574 | 0.2058 | 0.6310 | 0.7305 | 0.9442 | [0.0, 0.904077233776714, 0.8616556242304713, 0.9604692135700761, 0.8306854004041632, 0.9459690932012119, 0.9714777936344227, 0.7463801249809481, 0.9197830038961162, 0.4759644364074744, 0.0, 0.7133768631713745, 0.4878118726699168, 0.5403469048526253, 0.6267211124010835, 0.6280780328151242, 0.11116434156063161, 0.4757211293446132, 0.7386220435315599, 0.6814722192019137] | [nan, 0.9530795697109564, 0.9481439135801821, 0.9753750826203033, 0.9328161802391284, 0.9783733696392768, 0.9831560736299451, 0.8544532947139754, 0.9700176894451403, 0.5598936405938401, 0.0, 0.8212854589792271, 0.5434504792332269, 0.5765256977221256, 0.7602586827898242, 0.745275787709383, 0.12024542420662065, 0.5128732019823522, 0.8080522939565592, 0.8363729371469241] |
| 0.0595 | 3.0 | 2361 | 0.1363 | 0.6578 | 0.7540 | 0.9494 | [0.0, 0.9109388123768081, 0.8466263269727539, 0.965583073696094, 0.8848508600101197, 0.9507919193853351, 0.9742807972055659, 0.7672266040033193, 0.9571650494933543, 0.5580972230045627, 0.0, 0.7572676505482382, 0.5338298840118263, 0.5743160573368553, 0.6964399439112182, 0.6369583059750492, 0.19255896751223853, 0.49017131449756574, 0.7563405327946686, 0.7018448645266491] | [nan, 0.9587813659877967, 0.9568298005631468, 0.9842947615263231, 0.9380059570384915, 0.9734457175747111, 0.9839202800499454, 0.863077218359317, 0.9757816512090675, 0.6272609287455287, 0.0, 0.8589569413670591, 0.5999361022364217, 0.6161844118746441, 0.7983763527021668, 0.793146442915981, 0.2242190576871256, 0.5288397085810358, 0.8216978654762351, 0.8232729860771318] |
| 0.0863 | 4.0 | 3148 | 0.1706 | 0.6597 | 0.7678 | 0.9537 | [0.0, 0.5911845175607978, 0.8922572171811833, 0.9657396689703207, 0.8726664918778465, 0.948172990516989, 0.9741643734457509, 0.7832072821045744, 0.9578631876788363, 0.5869565217391305, 0.0, 0.7602876424039574, 0.5747447162194254, 0.6642950791717092, 0.6978602093118107, 0.7122118073263809, 0.21745086578505152, 0.5091171801864137, 0.763416879968237, 0.7220314268720861] | [nan, 0.9656626144746107, 0.9588916966191391, 0.9766109980050623, 0.9234167566678667, 0.9783156758536367, 0.9891284919047324, 0.8876447135391675, 0.9773653302095363, 0.6623721946123896, 0.0, 0.8391697702425289, 0.6185942492012779, 0.6961703584876796, 0.8060121894956657, 0.8277923697200732, 0.24677155234956366, 0.5498060503499884, 0.8475353565667555, 0.8369956852453183] |
| 0.0849 | 5.0 | 3935 | 0.1529 | 0.6489 | 0.7616 | 0.9535 | [0.0, 0.34717493700692625, 0.9200786785121082, 0.9707860061715432, 0.9064316496153364, 0.9571373496125165, 0.9765647396031262, 0.7914886053951578, 0.9636858999629485, 0.5253852888123762, 0.0, 0.7668434757450091, 0.6228696113699357, 0.5646135260344276, 0.7194371537530142, 0.7276571750775304, 0.13134474327628362, 0.5398065590178835, 0.8087983436006237, 0.7371620697069805] | [nan, 0.9673995855258336, 0.9622823082917784, 0.9832096263122092, 0.9590923200613435, 0.9794833291868915, 0.9849481430590119, 0.8741570190973889, 0.9814726613968338, 0.5661042702035389, 0.0, 0.8519369313384734, 0.674888178913738, 0.5955861885708164, 0.7973710835377057, 0.8440933293815855, 0.139191177994735, 0.5807830511082053, 0.8902258318640507, 0.8387304835194164] |
| 0.0652 | 6.0 | 4722 | 0.1776 | 0.6701 | 0.7802 | 0.9598 | [0.0, 0.442020662403383, 0.9221209597093164, 0.9723970198449976, 0.9094898951877407, 0.958969887541612, 0.9774286126326331, 0.8043337900190548, 0.9641322534475246, 0.524194500874002, 0.0, 0.7732021981650511, 0.6714277552419585, 0.6791383524722951, 0.7265590222386986, 0.7252668038047013, 0.25612624095650144, 0.512317443386938, 0.8223912256195354, 0.7602526763224181] | [nan, 0.9667776521571092, 0.968306375662177, 0.9871287057126554, 0.9515142073239339, 0.9800501491032743, 0.9870913605013194, 0.8911998464531551, 0.9789458602211063, 0.5619638504637396, 0.0, 0.8429926328466184, 0.750926517571885, 0.7091730161871252, 0.8058454540303847, 0.8431735260151052, 0.2957320232987169, 0.5489159698031933, 0.8944742469145065, 0.8592366887593968] |
| 0.0516 | 7.0 | 5509 | 0.2204 | 0.6782 | 0.7854 | 0.9562 | [0.0, 0.5972965874238374, 0.9024890361234837, 0.9727685140940331, 0.915582953759141, 0.9598962357171329, 0.9798718588278901, 0.8112726586102719, 0.9047252363294271, 0.6408527982442389, 0.0, 0.7886848740988032, 0.676712646342877, 0.5672950158399087, 0.7336613818739761, 0.7298649456617311, 0.3028603088856569, 0.5060868673401364, 0.8269845785168136, 0.7471687598272396] | [nan, 0.9698273468544609, 0.9632905651879291, 0.9861640741314249, 0.9551792854314081, 0.9817079843391511, 0.9899518141518776, 0.8996100259110301, 0.9832172012468946, 0.6987812984710835, 0.0, 0.8565569379384828, 0.7460702875399361, 0.593452450290354, 0.8111955580377016, 0.848355084979611, 0.3625810998486827, 0.5422458600265925, 0.8997261507296395, 0.834927271918509] |
| 0.1051 | 8.0 | 6296 | 0.1860 | 0.6731 | 0.7789 | 0.9575 | [0.0, 0.44805540920356957, 0.9045125103512419, 0.9742941726927242, 0.9171717803896707, 0.9608739687771942, 0.9806696534895757, 0.8165927346840907, 0.9677688538979997, 0.6195552331193943, 0.0, 0.795984684169727, 0.6862710467443778, 0.573071397774824, 0.7390593444665892, 0.746059006435751, 0.2037963564144674, 0.5303406505500898, 0.8387988518436741, 0.7590468131997875] | [nan, 0.9709112878685233, 0.966379770128131, 0.9872427322752713, 0.9529925896087971, 0.9834568092767589, 0.9900317817435064, 0.8913394344939497, 0.9851288999243455, 0.6704124592447216, 0.0, 0.871338387626268, 0.7448562300319489, 0.5994265432176736, 0.8121846392929121, 0.8435414473616973, 0.2212134402918558, 0.5609595288067426, 0.8906947518475448, 0.8579244695520661] |
| 0.0619 | 9.0 | 7083 | 0.2919 | 0.6996 | 0.7903 | 0.9579 | [0.0, 0.934913158921961, 0.9053172937262943, 0.9749731654503406, 0.8705131863049136, 0.9625421596476281, 0.9801264786114002, 0.8223383305806123, 0.9066864104553713, 0.6468175775129386, 0.0, 0.7950479182280621, 0.7176821075997429, 0.5689160215594734, 0.7424713897302829, 0.7480081111150989, 0.3071719253739231, 0.5035704204000125, 0.8359422295252097, 0.7696666024282135] | [nan, 0.9682325320018036, 0.9702179964865137, 0.9871538608460199, 0.9606411126417358, 0.9816951395784177, 0.9890656141613147, 0.9035010425481796, 0.9836680314909386, 0.689949669209585, 0.0, 0.8547140781629688, 0.7850479233226837, 0.5903872774743949, 0.8138309496636962, 0.8520138583707216, 0.3614203096822337, 0.5292682658813446, 0.9065161120906329, 0.8882611983452693] |
| 0.081 | 10.0 | 7870 | 0.2470 | 0.6804 | 0.7921 | 0.9583 | [0.0, 0.4404433924045006, 0.9318621565838054, 0.9751204660574527, 0.8701648407446415, 0.9625333515302946, 0.9811772580795882, 0.8257730976318673, 0.9694596723226286, 0.6262599628453287, 0.0, 0.8035308913444122, 0.7247258740455824, 0.5731919576321138, 0.7446832704519876, 0.7540709586972932, 0.2964031339031339, 0.5176075672651548, 0.8402309249924604, 0.7699341552529259] | [nan, 0.9683524762943433, 0.9703483634609842, 0.9874040565137937, 0.9560906426120769, 0.9828287794111833, 0.9897414692905638, 0.9071739528715878, 0.9809845681174846, 0.6616061536513564, 0.0, 0.8707555296507566, 0.8066453674121405, 0.5982298533423343, 0.8269010675926151, 0.8575633386818196, 0.3450448769769707, 0.5489928903442743, 0.9145158870090407, 0.8764289844757795] |
| 0.0595 | 11.0 | 8657 | 0.1520 | 0.6754 | 0.7803 | 0.9583 | [0.0, 0.43998949915443775, 0.9316636729918347, 0.974311900634481, 0.90408659589869, 0.9621039259469353, 0.9814528086580536, 0.8173484866921386, 0.9299168519752622, 0.5981595278841879, 0.0, 0.79896542666047, 0.7130791649318979, 0.5767892232828117, 0.7434904893608313, 0.7476740572849074, 0.2818679619421856, 0.5013427236914975, 0.8417679322268942, 0.7636900967723242] | [nan, 0.9604694708457627, 0.9682111157218825, 0.9850226034689381, 0.9629913194164226, 0.9838887233262218, 0.9906282066977372, 0.8790295141463755, 0.9828138682520776, 0.6217973473457631, 0.0, 0.8472869246956067, 0.7660702875399361, 0.601589754313674, 0.8233235396482367, 0.8360910400932068, 0.3211657649814481, 0.5272243772183335, 0.8880687999399782, 0.8793425559361239] |
| 0.0607 | 12.0 | 9444 | 0.1907 | 0.6792 | 0.7814 | 0.9611 | [0.0, 0.4394265102382861, 0.9325678358934418, 0.9751503005414947, 0.9213536629526586, 0.9630218995457999, 0.9808145244188059, 0.8160516650442948, 0.9402095421968347, 0.5678403556289702, 0.0, 0.7897903639847522, 0.717973174366617, 0.6351749265433101, 0.7451406149738536, 0.7539060338307724, 0.2810049109433409, 0.5169863186167534, 0.8447414560224139, 0.7628612943763745] | [nan, 0.964392093449931, 0.9699039597844642, 0.9860071181495944, 0.9689476561441872, 0.9817555601847723, 0.9915172012546744, 0.8703445207331861, 0.9829836512368835, 0.5919660662847014, 0.0, 0.8320126171608817, 0.7695846645367412, 0.6606869598697208, 0.8177192854656857, 0.8353858575122385, 0.31786995004456603, 0.541465665967056, 0.8991915819484563, 0.8640852275254659] |
| 0.054 | 13.0 | 10231 | 0.1756 | 0.6845 | 0.7854 | 0.9633 | [0.0, 0.44063089620853896, 0.9319015227980866, 0.9747420439658205, 0.9230841377589553, 0.9626774348954341, 0.9806204202647846, 0.824089995398513, 0.9682449901582629, 0.6269069221957562, 0.0, 0.7878031759942226, 0.7230044147476434, 0.6870255399578931, 0.7273836360818303, 0.7465091396254238, 0.25750268946841265, 0.5202245077135331, 0.8455619310735664, 0.7623883906475817] | [nan, 0.9684613146338701, 0.9659761462687484, 0.985573907589379, 0.969242630837417, 0.9846717514218756, 0.9904148523034052, 0.8905935109009535, 0.9873657317056209, 0.6548320724256909, 0.0, 0.8321711888159841, 0.7743769968051119, 0.7167465941354711, 0.7672955669410517, 0.8485288256155018, 0.28777231930020936, 0.5469380130325374, 0.8955527628765427, 0.8564788043236511] |
| 0.0908 | 14.0 | 11018 | 0.1677 | 0.6922 | 0.7956 | 0.9641 | [0.0, 0.4710389646938612, 0.9277225664822271, 0.9753445134184554, 0.9250469473155007, 0.9640090632546157, 0.9817333061419466, 0.8297056239192101, 0.970059681920668, 0.647379308685926, 0.0, 0.79693329490141, 0.7458423929012165, 0.6895638439061885, 0.7486849253355593, 0.7520096317485606, 0.30687537928818764, 0.49287677819238446, 0.848826224760963, 0.7700556938025832] | [nan, 0.9666066204807101, 0.9697912533607226, 0.9863864033340946, 0.9658514745108883, 0.9826761492096202, 0.9913739259863396, 0.9020659030037601, 0.9838249561044068, 0.6815485423063531, 0.0, 0.8412997732853904, 0.8109904153354632, 0.7185046709734403, 0.8232134618653327, 0.8490091673735526, 0.35638330949567815, 0.5181697306682197, 0.9016768578609746, 0.8671989680174369] |
| 0.0584 | 15.0 | 11805 | 0.1610 | 0.6952 | 0.8014 | 0.9648 | [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979] | [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
neal49/roberta-yelp | 0e3c4c84ab49ed8c77781637579d31875c0bb9b0 | 2022-06-01T05:39:18.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | neal49 | null | neal49/roberta-yelp | 7 | null | transformers | 14,466 | Entry not found |
Paoloant/distilbert-base-uncased-finetuned-emotion | d28fd928b4ee2abe6da5a40858aedbe8fce09625 | 2022-06-01T19:02:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Paoloant | null | Paoloant/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,467 | Entry not found |
ederkamphorst/bert-base-portuguese-cased-finetuned-acordao_v2 | c22fe00178bd9882da169fe9a2e732e0a330ae1f | 2022-06-02T03:37:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | ederkamphorst | null | ederkamphorst/bert-base-portuguese-cased-finetuned-acordao_v2 | 7 | null | transformers | 14,468 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-portuguese-cased-finetuned-acordao_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased-finetuned-acordao_v2
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9871 | 1.0 | 313 | 0.9519 |
| 0.965 | 2.0 | 626 | 0.9325 |
| 0.9501 | 3.0 | 939 | 0.9257 |
| 0.929 | 4.0 | 1252 | 0.9098 |
| 0.9276 | 5.0 | 1565 | 0.9018 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
leonweber/biomuppet | 9d3b2af26fa416c59c6fdad63d0e6baf10afde9f | 2022-06-03T09:58:25.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | leonweber | null | leonweber/biomuppet | 7 | null | transformers | 14,469 | Entry not found |
Classroom-workshop/assignment1-jane | 9d87d97585346301ee7677baaecf74ec93ebbf3e | 2022-06-02T15:21:22.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:mit",
"model-index"
]
| automatic-speech-recognition | false | Classroom-workshop | null | Classroom-workshop/assignment1-jane | 7 | null | transformers | 14,470 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You can either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_ids=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
# the dataset's "audio" column is decoded on the fly, so no extra mapping step is needed here
def map_to_pred(batch):
    features = processor([audio["array"] for audio in batch["audio"]], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
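For illustration, here is a minimal sketch of this feature-extraction step using torchaudio's Kaldi-compatible frontend; the file path is a placeholder and the exact extraction settings used for the released checkpoint may differ slightly.
```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

# Load a mono 16 kHz waveform (shape: [1, num_samples]); the path is a placeholder.
waveform, sample_rate = torchaudio.load("example.wav")

# Kaldi-compatible 80-channel log mel-filter bank features (shape: [num_frames, 80]).
fbank = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=sample_rate)

# Utterance-level CMVN: normalize each feature dimension over the time axis.
fbank = (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-10)
```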
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
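As a rough illustration of the SpecAugment-style masking applied to those features, the snippet below uses torchaudio's frequency- and time-masking transforms; the mask sizes are arbitrary placeholders, not the values used to train this model.
```python
import torch
import torchaudio.transforms as T

# Apply frequency and time masks to a batch of (channel, mel_bins, time) features.
specaugment = torch.nn.Sequential(
    T.FrequencyMasking(freq_mask_param=27),  # placeholder mask width
    T.TimeMasking(time_mask_param=100),      # placeholder mask length
)
features = torch.randn(1, 80, 1200)  # fake log mel-filter bank features
augmented = specaugment(features)
```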
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
gciaffoni/wav2vec2-large-xls-r-300m-it-colab | bf62ab8ae04133f2ac6f6949e53ae3bb6881b48b | 2022-06-02T22:10:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | gciaffoni | null | gciaffoni/wav2vec2-large-xls-r-300m-it-colab | 7 | null | transformers | 14,471 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-it-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-it-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Wer: 0.1648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5632 | 3.19 | 1000 | 0.2289 | 0.2470 |
| 0.1489 | 6.39 | 2000 | 0.1799 | 0.1877 |
| 0.0803 | 9.58 | 3000 | 0.1660 | 0.1648 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
kevincstowe/concept2seq | b90f1d48f02cb08bc4fac86df4bb565250be01d1 | 2022-06-07T17:42:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | kevincstowe | null | kevincstowe/concept2seq | 7 | null | transformers | 14,472 | Entry not found |
Jeevesh8/lecun_feather_berts-7 | e7fb1f8baeb9d5ac92371c0ac20dd54b7b5bd24c | 2022-06-04T06:52:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-7 | 7 | null | transformers | 14,473 | Entry not found |
mmillet/rubert-tiny2_finetuned_emotion_experiment_modified_CE_LOSS_resampling | bb430adbfd6619fdd09b45856d7b7886738ddd96 | 2022-06-05T18:12:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | mmillet | null | mmillet/rubert-tiny2_finetuned_emotion_experiment_modified_CE_LOSS_resampling | 7 | null | transformers | 14,474 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-tiny2_finetuned_emotion_experiment_modified_CE_LOSS_resampling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_finetuned_emotion_experiment_modified_CE_LOSS_resampling
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4520
- Accuracy: 0.8621
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1134 | 1.0 | 53 | 0.9494 | 0.7716 | 0.7555 |
| 0.8421 | 2.0 | 106 | 0.7092 | 0.8204 | 0.8172 |
| 0.6488 | 3.0 | 159 | 0.6000 | 0.8319 | 0.8313 |
| 0.5392 | 4.0 | 212 | 0.5368 | 0.8376 | 0.8392 |
| 0.4616 | 5.0 | 265 | 0.4951 | 0.8549 | 0.8544 |
| 0.4138 | 6.0 | 318 | 0.4743 | 0.8621 | 0.8615 |
| 0.3694 | 7.0 | 371 | 0.4607 | 0.8563 | 0.8581 |
| 0.3375 | 8.0 | 424 | 0.4469 | 0.8693 | 0.8697 |
| 0.3049 | 9.0 | 477 | 0.4412 | 0.8649 | 0.8670 |
| 0.2804 | 10.0 | 530 | 0.4469 | 0.8635 | 0.8637 |
| 0.2787 | 11.0 | 583 | 0.4471 | 0.8693 | 0.8683 |
| 0.2284 | 12.0 | 636 | 0.4474 | 0.8693 | 0.8694 |
| 0.2188 | 13.0 | 689 | 0.4530 | 0.8649 | 0.8643 |
| 0.1998 | 14.0 | 742 | 0.4520 | 0.8621 | 0.8616 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
meetyildiz/M-TurQA-electra-base-turkish-cased-discriminator-finetuned-toqad | 9051042eefdd55faaca851b33f70713fec17a5ac | 2022-06-05T13:17:41.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | meetyildiz | null | meetyildiz/M-TurQA-electra-base-turkish-cased-discriminator-finetuned-toqad | 7 | null | transformers | 14,475 | Entry not found |
rg089/gpt2_mwp | bc9b50b0a80cc4d89002323656eb31d05a679a36 | 2022-06-05T16:26:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | rg089 | null | rg089/gpt2_mwp | 7 | null | transformers | 14,476 | Entry not found |
anvay/finetuning-cardiffnlp-sentiment-model | 94c503c9fa2a984a359f56595173ece65460ff38 | 2022-06-05T17:46:13.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | anvay | null | anvay/finetuning-cardiffnlp-sentiment-model | 7 | null | transformers | 14,477 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-cardiffnlp-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-cardiffnlp-sentiment-model
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Copninich/distilbert-base-uncased-finetuned-imdb | ceaf13b041b739db09b765a824a06c4678ee84e5 | 2022-06-06T09:36:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Copninich | null | Copninich/distilbert-base-uncased-finetuned-imdb | 7 | null | transformers | 14,478 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
jrmax/bart-base-r3d3-pt | aa42143d3c6dbd015db43f2a5325ea43794b0aa3 | 2022-06-06T17:02:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | jrmax | null | jrmax/bart-base-r3d3-pt | 7 | null | transformers | 14,479 | Entry not found |
imamnurby/rob2rand_merged_w_prefix_c_fc_interactive | 0ccf31d1200bfb908d2abbf80f39d113d73fff11 | 2022-06-06T19:48:00.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | imamnurby | null | imamnurby/rob2rand_merged_w_prefix_c_fc_interactive | 7 | null | transformers | 14,480 | ---
tags:
- generated_from_trainer
model-index:
- name: rob2rand_merged_w_prefix_c_fc_interactive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob2rand_merged_w_prefix_c_fc_interactive
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.7.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BraveOni/2ch-text-classification | 14163ae1e1e646230cf28f00e72e78cb00617ba1 | 2022-06-07T04:18:50.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:BraveOni/autotrain-data-2ch-text-classification",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | BraveOni | null | BraveOni/2ch-text-classification | 7 | null | transformers | 14,481 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- BraveOni/autotrain-data-2ch-text-classification
co2_eq_emissions: 0.08564281067919652
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 955631800
- CO2 Emissions (in grams): 0.08564281067919652
## Validation Metrics
- Loss: 0.34108611941337585
- Accuracy: 0.8671983356449375
- Precision: 0.7883283877349159
- Recall: 0.8250517598343685
- AUC: 0.9236450689447471
- F1: 0.8062721294891249
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/BraveOni/autotrain-2ch-text-classification-955631800
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("BraveOni/autotrain-2ch-text-classification-955631800", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("BraveOni/autotrain-2ch-text-classification-955631800", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Jatin-WIAI/malayalam_relevance_clf | a5b307d49eccbdb0578a425fcf42e164b16cd82e | 2022-06-07T07:11:22.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | Jatin-WIAI | null | Jatin-WIAI/malayalam_relevance_clf | 7 | null | transformers | 14,482 | Entry not found |
kangaroo927/test_auto_protocol | 2d17096c06d97ab1c4c671ed6a2b3f4dac4a1738 | 2022-06-25T00:21:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | kangaroo927 | null | kangaroo927/test_auto_protocol | 7 | null | transformers | 14,483 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test_auto_protocol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_auto_protocol
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.1
- Pytorch 1.6.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Bingsu/vitB32_bert_ko_small_clip | 9f975897a86e74675f126d4b71195135b403af52 | 2022-06-29T05:36:16.000Z | [
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"ko",
"arxiv:2004.09813",
"transformers",
"clip",
"license:mit"
]
| feature-extraction | false | Bingsu | null | Bingsu/vitB32_bert_ko_small_clip | 7 | null | transformers | 14,484 | ---
tags:
- clip
language: ko
license: mit
---
# vitB32_bert_ko_small_clip
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) + [lassl/bert-ko-small](https://huggingface.co/lassl/bert-ko-small) CLIP Model
[training code(github)](https://github.com/Bing-su/KoCLIP_training_code)
## Train
Following SBERT's [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), the knowledge of the `openai/clip-vit-base-patch32` text model was distilled into `lassl/bert-ko-small`. Unlike the paper, mean pooling is not used; the Hugging Face model's default pooling is kept as-is.
Training data: [AIHub Korean-English translation (parallel) corpus](https://aihub.or.kr/aidata/87)
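The snippet below is a minimal, illustrative sketch of this SBERT-style objective: an MSE loss pulls the Korean student's sentence embedding toward the English CLIP teacher's text embedding on parallel sentence pairs. The linear projection head, the use of `pooler_output`, and the example sentences are assumptions for illustration; see the linked training repository for the actual code.
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer, CLIPModel, CLIPTokenizer

# Frozen English teacher: the CLIP text tower.
teacher = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
teacher_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Korean student, plus a linear head mapping to the CLIP projection dimension.
student = AutoModel.from_pretrained("lassl/bert-ko-small")
student_tok = AutoTokenizer.from_pretrained("lassl/bert-ko-small")
head = torch.nn.Linear(student.config.hidden_size, teacher.config.projection_dim)

en = ["Two cats are sleeping."]      # English side of a parallel pair
ko = ["고양이 두 마리가 자고 있다."]   # its Korean translation

with torch.no_grad():
    t_emb = teacher.get_text_features(**teacher_tok(en, return_tensors="pt", padding=True))

s_out = student(**student_tok(ko, return_tensors="pt", padding=True))
s_emb = head(s_out.pooler_output)    # default pooling, as described above

loss = F.mse_loss(s_emb, t_emb)      # distillation objective to minimize
```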
## How to Use
#### 1.
```python
import requests
from PIL import Image
from transformers import VisionTextDualEncoderProcessor, VisionTextDualEncoderModel # or Auto...
model = VisionTextDualEncoderModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
processor = VisionTextDualEncoderProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```
```pycon
>>> probs
tensor([[0.9756, 0.0244]], grad_fn=<SoftmaxBackward0>)
```
#### 2.
```python
from transformers import AutoModel, AutoProcessor, pipeline
model = AutoModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
processor = AutoProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
pipe = pipeline("zero-shot-image-classification", model=model, feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "고양이 두 마리와 리모컨 두 개"], hypothesis_template="{}")
```
```pycon
>>> result
[{'score': 0.871887743473053, 'label': '고양이 두 마리와 리모컨 두 개'},
{'score': 0.12316706776618958, 'label': '고양이 두 마리'},
{'score': 0.004945191089063883, 'label': '고양이 한 마리'}]
```
|
mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear | 950a9b9a4c9a042e28ad52e0de54d028bfffed22 | 2022-06-08T19:38:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | mmillet | null | mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear | 7 | null | transformers | 14,485 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5751
- Accuracy: 0.8716
- F1: 0.8713
- Precision: 0.8721
- Recall: 0.8716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8851 | 1.0 | 69 | 0.4740 | 0.8361 | 0.8346 | 0.8364 | 0.8361 |
| 0.4404 | 2.0 | 138 | 0.4018 | 0.8643 | 0.8625 | 0.8672 | 0.8643 |
| 0.305 | 3.0 | 207 | 0.3754 | 0.8800 | 0.8795 | 0.8794 | 0.8800 |
| 0.2441 | 4.0 | 276 | 0.3942 | 0.8758 | 0.8748 | 0.8752 | 0.8758 |
| 0.1837 | 5.0 | 345 | 0.4005 | 0.8873 | 0.8870 | 0.8877 | 0.8873 |
| 0.1573 | 6.0 | 414 | 0.4468 | 0.8716 | 0.8718 | 0.8730 | 0.8716 |
| 0.1292 | 7.0 | 483 | 0.4582 | 0.8747 | 0.8750 | 0.8758 | 0.8747 |
| 0.0949 | 8.0 | 552 | 0.5110 | 0.8601 | 0.8601 | 0.8628 | 0.8601 |
| 0.0729 | 9.0 | 621 | 0.5415 | 0.8674 | 0.8674 | 0.8681 | 0.8674 |
| 0.058 | 10.0 | 690 | 0.5751 | 0.8716 | 0.8713 | 0.8721 | 0.8716 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
StanfordAIMI/stanford-deidentifier-only-radiology-reports | 0b5f66f47cd3bbb4ac0c2d905276b355fe20f92b | 2022-07-18T03:49:00.000Z | [
"pytorch",
"bert",
"en",
"dataset:radreports",
"transformers",
"token-classification",
"sequence-tagger-model",
"pubmedbert",
"uncased",
"radiology",
"biomedical",
"license:mit"
]
| token-classification | false | StanfordAIMI | null | StanfordAIMI/stanford-deidentifier-only-radiology-reports | 7 | 1 | transformers | 14,486 | ---
widget:
- text: "PROCEDURE: Chest xray. COMPARISON: last seen on 1/1/2020 and also record dated of March 1st, 2019. FINDINGS: patchy airspace opacities. IMPRESSION: The results of the chest xray of January 1 2020 are the most concerning ones. The patient was transmitted to another service of UH Medical Center under the responsability of Dr. Perez. We used the system MedClinical data transmitter and sent the data on 2/1/2020, under the ID 5874233. We received the confirmation of Dr Perez. He is reachable at 567-493-1234."
- text: "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
tags:
- token-classification
- sequence-tagger-model
- pytorch
- transformers
- pubmedbert
- uncased
- radiology
- biomedical
datasets:
- radreports
language:
- en
license: mit
---
Stanford de-identifier was trained on a variety of radiology and biomedical documents with the goal of automating the de-identification process while reaching satisfactory accuracy for use in production. Manuscript in-proceedings. |
Peltarion/dnabert-minilm | c618bc2efdaddf4157ce7ff7afc9a8f5bff3ed0e | 2022-07-02T11:28:46.000Z | [
"pytorch",
"bert",
"transformers",
"DNA",
"license:mit"
]
| null | false | Peltarion | null | Peltarion/dnabert-minilm | 7 | null | transformers | 14,487 | ---
tags:
- DNA
license: mit
---
## MiniDNA model
This is a distilled version of [DNABERT](https://github.com/jerryji1993/DNABERT) using the MiniLM technique. It has a BERT architecture with 6 layers and 768 hidden units, pre-trained on 6-mer DNA sequences. For more details on the pre-training scheme and methods, please check the original [thesis report](http://www.diva-portal.org/smash/record.jsf?dswid=846&pid=diva2%3A1676068&c=1&searchType=SIMPLE&language=en&query=joana+palés&af=%5B%5D&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all).
## How to Use
The model can be used to fine-tune on a downstream genomic task, e.g. promoter identification.
```python
import torch
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('Peltarion/dnabert-minilm')
```
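As a quick illustration, the sketch below shows one way to turn a raw DNA string into the overlapping 6-mer tokens the model expects and run a forward pass. The availability of a tokenizer under this checkpoint name is an assumption (DNABERT uses a k-mer vocabulary), and the sequence and helper function are purely illustrative.
```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

def seq_to_kmers(seq: str, k: int = 6) -> str:
    """Split a DNA sequence into overlapping, space-separated k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained("Peltarion/dnabert-minilm")  # assumed to ship with the checkpoint
model = BertForSequenceClassification.from_pretrained("Peltarion/dnabert-minilm")

inputs = tokenizer(seq_to_kmers("ATGGCGTACGTTAGCATCGG"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # fine-tune on labeled data (e.g. promoters) before using these scores
```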
More details on how to fine-tune the model, dataset and additional source codes are available on [github.com/joanaapa/Distillation-DNABERT-Promoter](https://github.com/joanaapa/Distillation-DNABERT-Promoter). |
q2-jlbar/segformer-b0-finetuned-brooks-or-dunn | d7017bbe1a73a1c18d49b473b608cf1edff0ebfc | 2022-06-09T19:47:36.000Z | [
"pytorch",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-segmentation | false | q2-jlbar | null | q2-jlbar/segformer-b0-finetuned-brooks-or-dunn | 7 | null | transformers | 14,488 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-brooks-or-dunn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-brooks-or-dunn
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the q2-jlbar/BrooksOrDunn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1158
- Mean Iou: nan
- Mean Accuracy: nan
- Overall Accuracy: nan
- Per Category Iou: [nan, nan]
- Per Category Accuracy: [nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 0.5153 | 4.0 | 20 | 0.5276 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.4082 | 8.0 | 40 | 0.3333 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.3157 | 12.0 | 60 | 0.2773 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2911 | 16.0 | 80 | 0.2389 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2395 | 20.0 | 100 | 0.1982 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.2284 | 24.0 | 120 | 0.1745 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1818 | 28.0 | 140 | 0.1595 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1549 | 32.0 | 160 | 0.1556 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1351 | 36.0 | 180 | 0.1387 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1254 | 40.0 | 200 | 0.1263 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1412 | 44.0 | 220 | 0.1190 | nan | nan | nan | [nan, nan] | [nan, nan] |
| 0.1179 | 48.0 | 240 | 0.1158 | nan | nan | nan | [nan, nan] | [nan, nan] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
speechbrain/asr-wav2vec2-dvoice-fongbe | 18935354dca914349e6d8c311ffbfa8ac1ab6ca0 | 2022-06-10T01:01:01.000Z | [
"wav2vec2",
"feature-extraction",
"fon",
"dataset:Dvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
]
| automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-dvoice-fongbe | 7 | null | speechbrain | 14,489 | ---
language: "fon"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- Dvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Fongbe (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on an [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) Fongbe dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 4.16 | 9.19 | 3.98 | 9.00 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained with the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Fongbe dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
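For intuition, the sketch below shows what the CTC greedy decoder does with the frame-level outputs; it is illustrative only (the blank index and toy vocabulary are assumptions), and the pretrained interface further down handles all of this internally.
```python
import torch

def ctc_greedy_decode(log_probs: torch.Tensor, blank: int = 0) -> list:
    """log_probs: (time, vocab) frame-level scores from the acoustic model."""
    best = log_probs.argmax(dim=-1)             # most likely symbol per frame
    collapsed = torch.unique_consecutive(best)  # merge repeated frames
    return [int(t) for t in collapsed if int(t) != blank]  # drop CTC blanks

# Toy example: 5 frames over a 4-symbol vocabulary where index 0 is the blank.
scores = torch.randn(5, 4).log_softmax(dim=-1)
print(ctc_greedy_decode(scores))
```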
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Fongbe)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-dvoice-fongbe", savedir="pretrained_models/asr-wav2vec2-dvoice-fongbe")
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-fongbe/example_fongbe.wav')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
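For example, adapting the transcription snippet above:
```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-dvoice-fongbe",
    savedir="pretrained_models/asr-wav2vec2-dvoice-fongbe",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
asr_model.transcribe_file("speechbrain/asr-wav2vec2-dvoice-fongbe/example_fongbe.wav")
```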
# Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/DVoice/ASR/CTC
python train_with_wav2vec2.py hparams/train_fon_with_wav2vec.yaml --data_folder=/localscratch/ALFFA_PUBLIC/ASR/FONGBE/data/
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1vNT7RjRuELs7pumBHmfYsrOp9m46D0ym?usp=sharing).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# About DVoice
DVoice is a community initiative that aims to provide African low-resource languages with the data and models needed to facilitate their use of voice technologies. The lack of data for these languages makes it necessary to collect data with methods specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice and collect authentic recordings from the community, and transfer learning techniques for automatically labeling recordings retrieved from social media. The DVoice platform currently covers 7 languages, including Darija (the Moroccan Arabic dialect), Wolof, Mandingo, Serere, Pular, Diola, and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of technologies together.
# About AIOX Labs
Based in Rabat, London, and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It supports the growth of companies, the optimization of processes, and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability for the specific needs of each client.
- A complementary team of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems, and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The laboratory's research areas are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution.
|
Wikram/Legal-key-to-text | 8021a86d5a3b2156809fc7fd1fdb625ec8207147 | 2022-06-10T02:17:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Wikram | null | Wikram/Legal-key-to-text | 7 | null | transformers | 14,490 | Task:
Given a set of input keywords, generate a corresponding text output for a section in the legal domain.
Dataset:
We used the Contract Understanding Atticus Dataset (CUAD).
It is a corpus of 13,000+ labels in 510 commercial legal contracts.
They have been manually labeled under the supervision of experienced lawyers to identify 41 types of legal clauses (e.g. licenses, warranty, governing law, insurance, etc…).
Workflow:

You can contact me at [email protected] |
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-wikilingua-ar | 2c8370ace02b98a5d8463a0a86e9a361bd5ab3a7 | 2022-06-10T14:19:32.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/mT5_multilingual_XLSum-finetuned-wikilingua-ar | 7 | null | transformers | 14,491 | ---
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mT5_multilingual_XLSum-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-wikilingua-ar
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5540
- Rouge-1: 27.46
- Rouge-2: 9.0
- Rouge-l: 22.59
- Gen Len: 43.41
- Bertscore: 73.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ritheshSree/animal-classifier | c0af2d988c5d7fcb783d0465d72a0e10efa8ce9b | 2022-06-10T05:38:54.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | ritheshSree | null | ritheshSree/animal-classifier | 7 | null | transformers | 14,492 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: animal-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# animal-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cat

#### dog

#### snake

#### tiger
 |
TurkuNLP/bert-large-finnish-cased-v1 | 60c5d609509da25d38b3ce3da8f58dbec6b3f2c2 | 2022-06-10T08:46:17.000Z | [
"pytorch",
"fi",
"transformers",
"license:apache-2.0"
]
| null | false | TurkuNLP | null | TurkuNLP/bert-large-finnish-cased-v1 | 7 | null | transformers | 14,493 | ---
license: apache-2.0
language: fi
---
This is the large variant of FinBERT (TurkuNLP/bert-base-finnish-cased-v1). The training data is exactly the same. |
flood/distilbert-base-uncased-distilled-clinc | ce631bfc49676efd2895268a4fb052339f66187b | 2022-06-10T08:03:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | flood | null | flood/distilbert-base-uncased-distilled-clinc | 7 | null | transformers | 14,494 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9309677419354838
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0389
- Accuracy: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6206 | 1.0 | 318 | 0.3251 | 0.6610 |
| 0.2571 | 2.0 | 636 | 0.1366 | 0.8584 |
| 0.1392 | 3.0 | 954 | 0.0813 | 0.9081 |
| 0.0967 | 4.0 | 1272 | 0.0598 | 0.9152 |
| 0.0779 | 5.0 | 1590 | 0.0503 | 0.9229 |
| 0.0675 | 6.0 | 1908 | 0.0451 | 0.9271 |
| 0.0615 | 7.0 | 2226 | 0.0425 | 0.9326 |
| 0.058 | 8.0 | 2544 | 0.0403 | 0.9316 |
| 0.0557 | 9.0 | 2862 | 0.0393 | 0.9306 |
| 0.0544 | 10.0 | 3180 | 0.0389 | 0.9310 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jeevesh8/std_pnt_04_feather_berts-58 | 8956f1415982dbdf3970fca1857b141dd2c4464a | 2022-06-12T06:03:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-58 | 7 | null | transformers | 14,495 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-11 | a68310ec23449326c778cd97b8365ae21a999b8c | 2022-06-12T06:04:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-11 | 7 | null | transformers | 14,496 | Entry not found |
Jeevesh8/std_pnt_04_feather_berts-50 | b0c09a4c9e4d18c59a6eff0d276ddd3314741d90 | 2022-06-12T06:03:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_pnt_04_feather_berts-50 | 7 | null | transformers | 14,497 | Entry not found |
eslamxm/mbert2mbert-finetuned-ar-wikilingua | 7ecdfd5ef0be91eed7970e52732c79c44000e0db | 2022-06-12T19:37:00.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"ar",
"mbert",
"mbert2mbert",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | eslamxm | null | eslamxm/mbert2mbert-finetuned-ar-wikilingua | 7 | null | transformers | 14,498 | ---
tags:
- summarization
- ar
- encoder-decoder
- mbert
- mbert2mbert
- Abstractive Summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mbert2mbert-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert2mbert-finetuned-ar-wikilingua
This model is a fine-tuned version of [](https://huggingface.co/) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6753
- Rouge-1: 15.19
- Rouge-2: 5.45
- Rouge-l: 14.64
- Gen Len: 20.0
- Bertscore: 67.86
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
eunbeee/ainize-kobart-news-eb-finetuned-meetings-papers | 419ad720392282543fde1f5cbc40bf00af8797b2 | 2022-06-12T11:02:29.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | eunbeee | null | eunbeee/ainize-kobart-news-eb-finetuned-meetings-papers | 7 | null | transformers | 14,499 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ainize-kobart-news-eb-finetuned-meetings-papers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ainize-kobart-news-eb-finetuned-meetings-papers
This model is a fine-tuned version of [ainize/kobart-news](https://huggingface.co/ainize/kobart-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3289
- Rouge1: 17.3988
- Rouge2: 7.0454
- Rougel: 17.3877
- Rougelsum: 17.42
- Gen Len: 19.9473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1402 | 1.0 | 7588 | 0.2930 | 17.1421 | 7.0141 | 17.1211 | 17.1473 | 19.9374 |
| 0.0997 | 2.0 | 15176 | 0.2842 | 17.1692 | 6.8824 | 17.1557 | 17.1985 | 19.9435 |
| 0.0692 | 3.0 | 22764 | 0.3052 | 17.4241 | 7.1083 | 17.4028 | 17.4472 | 19.9453 |
| 0.0556 | 4.0 | 30352 | 0.3289 | 17.3988 | 7.0454 | 17.3877 | 17.42 | 19.9473 |
| 0.0533 | 5.0 | 37940 | 0.3289 | 17.3988 | 7.0454 | 17.3877 | 17.42 | 19.9473 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.