modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
facebook/maskformer-swin-small-ade | 4b60066e3064ec1472883edfc4c6d2296359214d | 2022-04-04T16:02:03.000Z | [
"pytorch",
"maskformer",
"dataset:ade-20k",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-small-ade | 1 | null | transformers | 30,700 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- ade-20k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on ADE20k. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-ade")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-ade")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
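As a hedged follow-up (not part of the original card), the semantic output can be resized and turned into a per-pixel class map; the exact signature and return type of these post-processing helpers vary across `transformers` versions, so treat this as a sketch:
```python
>>> # recent transformers versions: pass the original image size(s) as (height, width)
>>> # and get back one (height, width) tensor of predicted class ids per image
>>> semantic_map = feature_extractor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )[0]
>>> print(semantic_map.shape)
```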
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
facebook/maskformer-swin-tiny-coco | a1e0c132b7da81eb1d33153b58f058694b89c324 | 2022-04-04T16:02:11.000Z | [
"pytorch",
"maskformer",
"dataset:coco",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-tiny-coco | 1 | null | transformers | 30,701 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- coco
---
# MaskFormer
MaskFormer model trained on COCO. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification.

## Intended uses & limitations
You can use the raw model for image segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-coco")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-coco")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
taesu/ts-test2 | 19938845c7f57033206e968057bb7670eb2b778c | 2022-03-02T11:18:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | taesu | null | taesu/ts-test2 | 1 | null | transformers | 30,702 | Entry not found |
spy24/autonlp-AUS-to-US2-606817121 | 6a6e7f8433b6003190fd8145244733decf2c6d42 | 2022-03-02T10:00:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-AUS-to-US2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-AUS-to-US2-606817121 | 1 | 1 | transformers | 30,703 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-AUS-to-US2
co2_eq_emissions: 1.1512164322839105
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 606817121
- CO2 Emissions (in grams): 1.1512164322839105
## Validation Metrics
- Loss: 2.0312094688415527
- Rouge1: 34.8844
- Rouge2: 5.2023
- RougeL: 34.6339
- RougeLsum: 34.8555
- Gen Len: 3.1792
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/spy24/autonlp-AUS-to-US2-606817121
``` |
spy24/autonlp-US-to-AUS3-606917136 | ddf0331dd86bfedf2a7dfb97591c6201405b8cf9 | 2022-03-02T10:03:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-US-to-AUS3",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-US-to-AUS3-606917136 | 1 | null | transformers | 30,704 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US-to-AUS3
co2_eq_emissions: 1.2956300881026077
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 606917136
- CO2 Emissions (in grams): 1.2956300881026077
## Validation Metrics
- Loss: 2.2489309310913086
- Rouge1: 31.0639
- Rouge2: 2.2447
- RougeL: 31.1492
- RougeLsum: 31.1753
- Gen Len: 3.4798
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/spy24/autonlp-US-to-AUS3-606917136
``` |
spy24/autonlp-US_to_AUS-607117159 | 33bc95bcba895d7dedc92c7d55b97f244a591aa3 | 2022-03-02T10:35:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-US_to_AUS",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-US_to_AUS-607117159 | 1 | 1 | transformers | 30,705 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US_to_AUS
co2_eq_emissions: 1.4276876566788055
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 607117159
- CO2 Emissions (in grams): 1.4276876566788055
## Validation Metrics
- Loss: 1.5177973508834839
- Rouge1: 46.134
- Rouge2: 10.578
- RougeL: 45.8856
- RougeLsum: 46.0088
- Gen Len: 3.7283
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/spy24/autonlp-US_to_AUS-607117159
``` |
creynier/wav2vec2-base-swbd-turn-small-4 | e277b089e479633479cfabdb5b9ebef91876874e | 2022-03-02T18:40:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-small-4 | 1 | null | transformers | 30,706 | Entry not found |
ncoop57/multi_prog_code_clippy | 9fc4aaac4bb5341573b512fa26cc0f1ca53381a3 | 2022-03-03T13:13:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | ncoop57 | null | ncoop57/multi_prog_code_clippy | 1 | null | transformers | 30,707 | Entry not found |
Kuray107/librispeech-100h-supervised | a199f9cbc19520b6e2c5a66a41b8cfa66def43cf | 2022-03-06T08:07:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/librispeech-100h-supervised | 1 | null | transformers | 30,708 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: librispeech-100h-supervised
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# librispeech-100h-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
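Expressed as Hugging Face `TrainingArguments`, the list above corresponds roughly to the following (a hedged sketch; the actual training script is not part of this card and the output directory is assumed):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="librispeech-100h-supervised",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision
)
```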
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.8277 | 0.42 | 500 | 2.9071 | 1.0 |
| 2.0261 | 0.84 | 1000 | 0.3060 | 0.2496 |
| 0.2181 | 1.26 | 1500 | 0.1172 | 0.0873 |
| 0.1255 | 1.68 | 2000 | 0.0894 | 0.0637 |
| 0.0971 | 2.1 | 2500 | 0.0821 | 0.0560 |
| 0.078 | 2.52 | 3000 | 0.0751 | 0.0500 |
| 0.0706 | 2.94 | 3500 | 0.0721 | 0.0456 |
| 0.0609 | 3.36 | 4000 | 0.0755 | 0.0464 |
| 0.0572 | 3.78 | 4500 | 0.0705 | 0.0431 |
| 0.0528 | 4.2 | 5000 | 0.0715 | 0.0423 |
| 0.0481 | 4.62 | 5500 | 0.0691 | 0.0403 |
| 0.0471 | 5.04 | 6000 | 0.0743 | 0.0401 |
| 0.0412 | 5.46 | 6500 | 0.0757 | 0.0399 |
| 0.0416 | 5.88 | 7000 | 0.0688 | 0.0378 |
| 0.0391 | 6.3 | 7500 | 0.0704 | 0.0383 |
| 0.0367 | 6.72 | 8000 | 0.0742 | 0.0387 |
| 0.0349 | 7.14 | 8500 | 0.0732 | 0.0388 |
| 0.033 | 7.56 | 9000 | 0.0719 | 0.0374 |
| 0.0327 | 7.98 | 9500 | 0.0750 | 0.0369 |
| 0.0292 | 8.4 | 10000 | 0.0734 | 0.0368 |
| 0.0303 | 8.82 | 10500 | 0.0733 | 0.0365 |
| 0.0283 | 9.24 | 11000 | 0.0766 | 0.0357 |
| 0.0269 | 9.66 | 11500 | 0.0761 | 0.0350 |
| 0.0268 | 10.08 | 12000 | 0.0802 | 0.0359 |
| 0.0245 | 10.42 | 12500 | 0.0758 | 0.0354 |
| 0.023 | 10.84 | 13000 | 0.0775 | 0.0349 |
| 0.0186 | 11.26 | 13500 | 0.0817 | 0.0355 |
| 0.0176 | 11.68 | 14000 | 0.0853 | 0.0354 |
| 0.0163 | 12.1 | 14500 | 0.0880 | 0.0347 |
| 0.0156 | 12.52 | 15000 | 0.0864 | 0.0357 |
| 0.0141 | 12.94 | 15500 | 0.0897 | 0.0355 |
| 0.0134 | 13.36 | 16000 | 0.0915 | 0.0349 |
| 0.013 | 13.78 | 16500 | 0.0928 | 0.0350 |
| 0.0097 | 13.42 | 17000 | 0.0955 | 0.0345 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
yoavgur/gpt2-bash-history-baseline | 2214af1f99cd46346e82f5e84d6539a9214cf6b5 | 2022-03-02T23:02:12.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | yoavgur | null | yoavgur/gpt2-bash-history-baseline | 1 | null | transformers | 30,709 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-bash-history-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-bash-history-baseline
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 158 | 2.1038 |
| No log | 2.0 | 316 | 2.0349 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
jiobiala24/wav2vec2-base-1 | 3282e19f8f8b8f4ffa2b0053819f0f6f79a46cf8 | 2022-03-03T10:47:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-1 | 1 | null | transformers | 30,710 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9254
- Wer: 0.3216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6597 | 2.2 | 1000 | 0.8904 | 0.5388 |
| 0.4751 | 4.41 | 2000 | 0.7009 | 0.3976 |
| 0.3307 | 6.61 | 3000 | 0.7068 | 0.3672 |
| 0.2574 | 8.81 | 4000 | 0.7320 | 0.3544 |
| 0.2096 | 11.01 | 5000 | 0.7803 | 0.3418 |
| 0.177 | 13.22 | 6000 | 0.7768 | 0.3423 |
| 0.1521 | 15.42 | 7000 | 0.8113 | 0.3375 |
| 0.1338 | 17.62 | 8000 | 0.8153 | 0.3325 |
| 0.1168 | 19.82 | 9000 | 0.8851 | 0.3306 |
| 0.104 | 22.03 | 10000 | 0.8811 | 0.3277 |
| 0.0916 | 24.23 | 11000 | 0.8722 | 0.3254 |
| 0.083 | 26.43 | 12000 | 0.9527 | 0.3265 |
| 0.0766 | 28.63 | 13000 | 0.9254 | 0.3216 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
StivenLancheros/Roberta-base-bne-NER-EN-ES | a9f30c54b9180834ea27b669167f6dc2213a2e69 | 2021-11-12T13:12:47.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:conll2002",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/Roberta-base-bne-NER-EN-ES | 1 | null | transformers | 30,711 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-bne-finetuned-ner-finetuned2-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2002
type: conll2002
args: es
metrics:
- name: Precision
type: precision
value: 0.8697727272727273
- name: Recall
type: recall
value: 0.8793658088235294
- name: F1
type: f1
value: 0.8745429616087752
- name: Accuracy
type: accuracy
value: 0.9808778791829639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-ner-finetuned2-ner
This model is a fine-tuned version of [StivenLancheros/roberta-base-bne-finetuned-ner](https://huggingface.co/StivenLancheros/roberta-base-bne-finetuned-ner) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1067
- Precision: 0.8698
- Recall: 0.8794
- F1: 0.8745
- Accuracy: 0.9809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0582 | 1.0 | 1665 | 0.0852 | 0.8697 | 0.8759 | 0.8728 | 0.9800 |
| 0.0297 | 2.0 | 3330 | 0.0919 | 0.8841 | 0.8867 | 0.8854 | 0.9817 |
| 0.0121 | 3.0 | 4995 | 0.0950 | 0.8751 | 0.8807 | 0.8779 | 0.9812 |
| 0.0056 | 4.0 | 6660 | 0.1067 | 0.8698 | 0.8794 | 0.8745 | 0.9809 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
cammy/bart-large-cnn-finetuned-weaksup-100-pad-early | e86bf1fadee55590a6141b6cf7fbf664137861d7 | 2022-03-03T06:29:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-100-pad-early | 1 | null | transformers | 30,712 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-100-pad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-100-pad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0714
- Rouge1: 26.6767
- Rouge2: 8.6321
- Rougel: 17.4235
- Rougelsum: 21.6089
- Gen Len: 66.1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.0405 | 26.8313 | 10.4295 | 19.1329 | 23.8101 | 64.6 |
| No log | 2.0 | 200 | 1.0714 | 26.6767 | 8.6321 | 17.4235 | 21.6089 | 66.1 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sadkat/technoai | 689d5df4b7333dd72343143b985de7965e2735a1 | 2022-03-03T09:20:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sadkat | null | sadkat/technoai | 1 | null | transformers | 30,713 | ---
tags:
- conversational
---
# technoai model |
kookyklavicle/sean-diaz-bot | 14558dcd7a5ec5c8d8c0eac86e17057a78f0d451 | 2022-03-03T11:20:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kookyklavicle | null | kookyklavicle/sean-diaz-bot | 1 | null | transformers | 30,714 | ---
tags:
- conversational
---
# Sean Diaz Model
|
kookyklavicle/sean-diaz | 7e8f1d51d67ee5718d6d6a6c4afece7a8e778ee1 | 2022-03-10T09:46:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kookyklavicle | null | kookyklavicle/sean-diaz | 1 | null | transformers | 30,715 | ---
tags:
- conversational
---
# Sean Diaz (Life is Strange 2) Chat Model
|
Bistolero/aka | 50093c06b7bf4293956f95212ee22f8cbedc1cd8 | 2022-03-03T18:59:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/aka | 1 | null | transformers | 30,716 | Entry not found |
batterydata/batteryonlybert-cased | 4cdf2ccc7f1e3c45b5843bd018eb9e766967ca33 | 2022-03-05T16:04:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:batterypapers",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | batterydata | null | batterydata/batteryonlybert-cased | 1 | null | transformers | 30,717 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- batterypapers
---
# BatteryOnlyBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is case-sensitive: it makes a difference
between english and English.
## Model description
BatteryOnlyBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
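For instance, a minimal sketch of such a feature-based classifier (the texts, labels, and the use of scikit-learn are illustrative assumptions, not part of the original card):
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-cased')
model = BertModel.from_pretrained('batterydata/batteryonlybert-cased')

texts = ["The electrolyte was LiPF6 in EC/DMC.", "The anode was natural graphite."]  # toy data
labels = [0, 1]  # toy labels

with torch.no_grad():
    # use the final hidden state of the [CLS] token as a sentence-level feature
    features = [
        model(**tokenizer(t, return_tensors='pt')).last_hidden_state[:, 0].squeeze(0).numpy()
        for t in texts
    ]

classifier = LogisticRegression(max_iter=1000).fit(features, labels)
```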
## Training data
The BatteryOnlyBERT model was pretrained on the full text of battery papers only. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
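For illustration (a hedged sketch using this checkpoint's tokenizer; the example sentences are arbitrary), the sentence-pair format can be inspected as follows:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-cased')
encoded = tokenizer("The anode is graphite.", "The cathode is a layered oxide.")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# -> ['[CLS]', <tokens of sentence A>, '[SEP]', <tokens of sentence B>, '[SEP]']
```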
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
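A minimal, self-contained sketch of this 80/10/10 rule (illustrative only; the actual pretraining code is not part of this card):
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Return (input_ids, labels) following the masking scheme described above."""
    input_ids = list(token_ids)
    labels = [-100] * len(token_ids)    # -100 = position ignored by the MLM loss
    for i, token_id in enumerate(token_ids):
        if random.random() >= mlm_probability:
            continue                    # 85% of tokens are not selected at all
        labels[i] = token_id            # the model must predict the original token here
        roll = random.random()
        if roll < 0.8:                  # 80% of selected tokens -> [MASK]
            input_ids[i] = mask_token_id
        elif roll < 0.9:                # 10% -> a random token from the vocabulary
            input_ids[i] = random.randrange(vocab_size)
        # remaining 10%: keep the token unchanged
    return input_ids, labels
```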
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,500,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batteryonlybert-cased')
>>> unmasker("Hello I'm a [MASK] model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-cased')
model = BertModel.from_pretrained('batterydata/batteryonlybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryonlybert-cased')
model = TFBertModel.from_pretrained('batterydata/batteryonlybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.1012.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
cammy/bart-large-cnn-finetuned-new-100-doc-pad-early | 9fffcf0aed233062afa3da579d3a5b1004d3430d | 2022-03-04T01:13:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-new-100-doc-pad-early | 1 | null | transformers | 30,718 | Entry not found |
jiobiala24/wav2vec2-base-2 | c69327d473cfd4025a27d29a863129a0e808856e | 2022-03-04T15:56:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-2 | 1 | null | transformers | 30,719 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-2
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-1](https://huggingface.co/jiobiala24/wav2vec2-base-1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Wer: 0.3076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4206 | 1.96 | 1000 | 0.6022 | 0.3435 |
| 0.3278 | 3.93 | 2000 | 0.6191 | 0.3344 |
| 0.2604 | 5.89 | 3000 | 0.6170 | 0.3288 |
| 0.2135 | 7.86 | 4000 | 0.6590 | 0.3239 |
| 0.1805 | 9.82 | 5000 | 0.7359 | 0.3289 |
| 0.1582 | 11.79 | 6000 | 0.7450 | 0.3276 |
| 0.1399 | 13.75 | 7000 | 0.7914 | 0.3218 |
| 0.1252 | 15.72 | 8000 | 0.8254 | 0.3185 |
| 0.1095 | 17.68 | 9000 | 0.8524 | 0.3184 |
| 0.1 | 19.65 | 10000 | 0.8340 | 0.3165 |
| 0.0905 | 21.61 | 11000 | 0.8846 | 0.3161 |
| 0.0819 | 23.58 | 12000 | 0.8994 | 0.3142 |
| 0.0763 | 25.54 | 13000 | 0.9018 | 0.3134 |
| 0.0726 | 27.5 | 14000 | 0.9552 | 0.3081 |
| 0.0668 | 29.47 | 15000 | 0.9415 | 0.3076 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
aaraki/opus-mt-en-ro-finetuned-en-to-ro | 091aa3cee98200f21905a0640879ab7ea9708eda | 2022-03-22T01:39:06.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | aaraki | null | aaraki/opus-mt-en-ro-finetuned-en-to-ro | 1 | null | transformers | 30,720 | Entry not found |
ssardorf/pegasus-summ | 56e0c06848953c25c4424c1fc19528ba481b1f06 | 2022-04-26T10:16:01.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ssardorf | null | ssardorf/pegasus-summ | 1 | null | transformers | 30,721 | Entry not found |
mmaguero/gn-bert-small-cased | 376acd914909c29fdb5751c1746b371f7609986a | 2022-03-06T08:02:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | mmaguero | null | mmaguero/gn-bert-small-cased | 1 | null | transformers | 30,722 | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteĩ táva oĩva [MASK] retãme "
---
# BERT-i-small-cased (gnBERT-small-cased)
A pre-trained BERT model for **Guarani** (6 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
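A hedged usage sketch (mirroring the widget example above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mmaguero/gn-bert-small-cased")
print(fill_mask("Paraguay ha'e peteĩ táva oĩva [MASK] retãme"))
```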
|
jish/distilgpt2-finetuned-wikitext2 | 590c63e1eddfcfc50e39547d13df7c457e5b1be6 | 2022-03-04T15:14:19.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | jish | null | jish/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 30,723 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.633 | 2.0 | 4668 | 3.6455 |
| 3.6078 | 3.0 | 7002 | 3.6423 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Ameer05/model-token-repo | e27616fca46d4737a9d4bee9504bda94f7a5342a | 2022-03-04T15:09:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ameer05 | null | Ameer05/model-token-repo | 1 | null | transformers | 30,724 | Entry not found |
Aktsvigun/bart-base-tapt-xsum | 4fc5ccb97196f21691ef21b494ff5581f172b5f8 | 2022-03-04T16:09:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base-tapt-xsum | 1 | null | transformers | 30,725 | Entry not found |
akadriu/wav2vec2-large-xlsr-53-Total_2e-4_2 | dd6bf8b07f28873467f7d6aef577b549c18678e4 | 2022-03-05T05:18:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-Total_2e-4_2 | 1 | null | transformers | 30,726 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-Total_2e-4_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total_2e-4_2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2733
- Wer: 0.2116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
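The effective batch size here is the per-device batch size times the accumulation steps (8 × 2 = 16); as a hedged `TrainingArguments` sketch of the list above (output directory assumed):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53-Total_2e-4_2",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,  # "Native AMP"
)
```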
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.2741 | 0.1 | 200 | 2.9070 | 0.9707 |
| 2.034 | 0.2 | 400 | 0.7240 | 0.6798 |
| 1.0037 | 0.3 | 600 | 0.5651 | 0.5368 |
| 0.8834 | 0.4 | 800 | 0.4709 | 0.4669 |
| 0.7973 | 0.5 | 1000 | 0.4305 | 0.4261 |
| 0.7489 | 0.6 | 1200 | 0.4017 | 0.3763 |
| 0.7507 | 0.7 | 1400 | 0.3662 | 0.3481 |
| 0.7108 | 0.8 | 1600 | 0.3604 | 0.3513 |
| 0.7151 | 0.9 | 1800 | 0.3563 | 0.3406 |
| 0.6755 | 1.0 | 2000 | 0.3365 | 0.3210 |
| 0.6038 | 1.1 | 2200 | 0.3394 | 0.3053 |
| 0.6109 | 1.2 | 2400 | 0.3179 | 0.2844 |
| 0.5999 | 1.3 | 2600 | 0.3166 | 0.2773 |
| 0.6291 | 1.4 | 2800 | 0.3134 | 0.2733 |
| 0.626 | 1.5 | 3000 | 0.3060 | 0.2690 |
| 0.6188 | 1.6 | 3200 | 0.3038 | 0.2644 |
| 0.5757 | 1.7 | 3400 | 0.3015 | 0.2566 |
| 0.5943 | 1.8 | 3600 | 0.2925 | 0.2494 |
| 0.6043 | 1.9 | 3800 | 0.2858 | 0.2491 |
| 0.5874 | 2.0 | 4000 | 0.2874 | 0.2452 |
| 0.5263 | 2.1 | 4200 | 0.2800 | 0.2364 |
| 0.5282 | 2.2 | 4400 | 0.2848 | 0.2387 |
| 0.4953 | 2.3 | 4600 | 0.2793 | 0.2360 |
| 0.5428 | 2.4 | 4800 | 0.2863 | 0.2414 |
| 0.5618 | 2.5 | 5000 | 0.2788 | 0.2350 |
| 0.5395 | 2.6 | 5200 | 0.2765 | 0.2325 |
| 0.5178 | 2.7 | 5400 | 0.2787 | 0.2351 |
| 0.5264 | 2.8 | 5600 | 0.2755 | 0.2312 |
| 0.5222 | 2.9 | 5800 | 0.2692 | 0.2258 |
| 0.5184 | 3.0 | 6000 | 0.2681 | 0.2242 |
| 0.4826 | 3.1 | 6200 | 0.2736 | 0.2224 |
| 0.479 | 3.2 | 6400 | 0.2896 | 0.2353 |
| 0.4938 | 3.3 | 6600 | 0.2744 | 0.2252 |
| 0.4772 | 3.4 | 6800 | 0.2735 | 0.2242 |
| 0.4831 | 3.5 | 7000 | 0.2721 | 0.2225 |
| 0.4869 | 3.6 | 7200 | 0.2710 | 0.2194 |
| 0.4515 | 3.7 | 7400 | 0.2692 | 0.2196 |
| 0.4732 | 3.8 | 7600 | 0.2729 | 0.2269 |
| 0.4683 | 3.9 | 7800 | 0.2713 | 0.2211 |
| 0.4674 | 4.0 | 8000 | 0.2642 | 0.2116 |
| 0.4239 | 4.1 | 8200 | 0.2773 | 0.2176 |
| 0.4306 | 4.2 | 8400 | 0.2779 | 0.2191 |
| 0.441 | 4.3 | 8600 | 0.2758 | 0.2136 |
| 0.4343 | 4.4 | 8800 | 0.2797 | 0.2203 |
| 0.4059 | 4.5 | 9000 | 0.2763 | 0.2159 |
| 0.4399 | 4.6 | 9200 | 0.2755 | 0.2123 |
| 0.4131 | 4.7 | 9400 | 0.2741 | 0.2124 |
| 0.4331 | 4.8 | 9600 | 0.2728 | 0.2101 |
| 0.4288 | 4.9 | 9800 | 0.2730 | 0.2110 |
| 0.4341 | 5.0 | 10000 | 0.2733 | 0.2116 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
voice/wav2vec2-large-xlsr-common1000asli-demo-colab | 4bfaac1362649e124937508f1fcc0ff2dfe00d6c | 2022-04-06T09:17:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | voice | null | voice/wav2vec2-large-xlsr-common1000asli-demo-colab | 1 | null | transformers | 30,727 | Entry not found |
Kuray107/swbd-5percent-supervised | cabf9cd145c2d78b339b721842afe22848c1533e | 2022-03-06T16:14:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/swbd-5percent-supervised | 1 | null | transformers | 30,728 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: swbd-5percent-supervised
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swbd-5percent-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6970
- Wer: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.8534 | 0.64 | 1000 | 2.9535 | 1.0 |
| 1.8605 | 1.28 | 2000 | 0.7878 | 0.3719 |
| 0.9862 | 1.92 | 3000 | 0.5906 | 0.2684 |
| 0.8405 | 2.56 | 4000 | 0.5555 | 0.2151 |
| 0.6972 | 3.2 | 5000 | 0.5905 | 0.1992 |
| 0.6033 | 3.84 | 6000 | 0.4867 | 0.1781 |
| 0.5393 | 4.48 | 7000 | 0.5447 | 0.1805 |
| 0.529 | 5.12 | 8000 | 0.5398 | 0.1746 |
| 0.5072 | 5.77 | 9000 | 0.5093 | 0.1706 |
| 0.4331 | 6.41 | 10000 | 0.4990 | 0.1627 |
| 0.4837 | 7.05 | 11000 | 0.5319 | 0.1634 |
| 0.3867 | 7.69 | 12000 | 0.4866 | 0.1595 |
| 0.345 | 8.33 | 13000 | 0.5202 | 0.1582 |
| 0.372 | 8.97 | 14000 | 0.5396 | 0.1547 |
| 0.355 | 9.61 | 15000 | 0.5992 | 0.1493 |
| 0.3258 | 10.25 | 16000 | 0.5247 | 0.1527 |
| 0.3327 | 10.89 | 17000 | 0.5664 | 0.1512 |
| 0.3422 | 11.53 | 18000 | 0.5819 | 0.1456 |
| 0.2815 | 12.17 | 19000 | 0.5692 | 0.1453 |
| 0.2719 | 12.81 | 20000 | 0.5012 | 0.1476 |
| 0.2838 | 13.45 | 21000 | 0.5286 | 0.1454 |
| 0.2418 | 14.09 | 22000 | 0.6238 | 0.1486 |
| 0.2412 | 14.73 | 23000 | 0.5889 | 0.1456 |
| 0.2227 | 15.37 | 24000 | 0.5901 | 0.1459 |
| 0.2129 | 16.02 | 25000 | 0.5959 | 0.1454 |
| 0.2071 | 16.66 | 26000 | 0.6259 | 0.1427 |
| 0.2185 | 17.3 | 27000 | 0.6581 | 0.1437 |
| 0.1982 | 17.94 | 28000 | 0.6194 | 0.1411 |
| 0.1928 | 18.58 | 29000 | 0.5940 | 0.1409 |
| 0.1885 | 19.22 | 30000 | 0.6733 | 0.1417 |
| 0.1835 | 19.86 | 31000 | 0.6363 | 0.1393 |
| 0.1756 | 20.5 | 32000 | 0.6675 | 0.1382 |
| 0.1776 | 21.14 | 33000 | 0.6147 | 0.1407 |
| 0.1758 | 21.78 | 34000 | 0.6405 | 0.1420 |
| 0.1645 | 22.42 | 35000 | 0.6999 | 0.1401 |
| 0.1631 | 23.06 | 36000 | 0.6224 | 0.1385 |
| 0.1494 | 23.7 | 37000 | 0.6639 | 0.1374 |
| 0.1472 | 24.34 | 38000 | 0.6471 | 0.1373 |
| 0.1514 | 24.98 | 39000 | 0.6570 | 0.1395 |
| 0.1527 | 25.62 | 40000 | 0.6876 | 0.1375 |
| 0.1514 | 26.27 | 41000 | 0.6835 | 0.1376 |
| 0.1344 | 26.91 | 42000 | 0.6987 | 0.1372 |
| 0.1267 | 27.55 | 43000 | 0.7026 | 0.1362 |
| 0.1384 | 28.19 | 44000 | 0.7021 | 0.1366 |
| 0.1264 | 28.83 | 45000 | 0.7016 | 0.1355 |
| 0.1227 | 29.47 | 46000 | 0.6970 | 0.1352 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
crabz/slovakbert-upos | 6e97f3283ef756771374064d469e7cac377e852a | 2022-03-06T12:31:41.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | crabz | null | crabz/slovakbert-upos | 1 | null | transformers | 30,729 | ---
license: mit
inference: false
---
|
crabz/distil-slovakbert-upos | 9156f9ef1d2a819eb746bb05ff45a8ab87f6bc6a | 2022-03-06T12:38:56.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:universal_dependencies",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | crabz | null | crabz/distil-slovakbert-upos | 1 | null | transformers | 30,730 | ---
tags:
- generated_from_trainer
datasets:
- universal_dependencies
metrics:
- precision
- recall
- f1
- accuracy
inference: false
model-index:
- name: distil-slovakbert-upos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: universal_dependencies sk_snk
type: universal_dependencies
args: sk_snk
metrics:
- name: Precision
type: precision
value: 0.9771104035797263
- name: Recall
type: recall
value: 0.9785418821096173
- name: F1
type: f1
value: 0.9778256189451022
- name: Accuracy
type: accuracy
value: 0.9800851200513933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-slovakbert-upos
This model is a fine-tuned version of [crabz/distil-slovakbert](https://huggingface.co/crabz/distil-slovakbert) on the universal_dependencies sk_snk dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1207
- Precision: 0.9771
- Recall: 0.9785
- F1: 0.9778
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 266 | 0.2168 | 0.9570 | 0.9554 | 0.9562 | 0.9610 |
| 0.3935 | 2.0 | 532 | 0.1416 | 0.9723 | 0.9736 | 0.9730 | 0.9740 |
| 0.3935 | 3.0 | 798 | 0.1236 | 0.9722 | 0.9735 | 0.9728 | 0.9747 |
| 0.0664 | 4.0 | 1064 | 0.1195 | 0.9722 | 0.9741 | 0.9732 | 0.9766 |
| 0.0664 | 5.0 | 1330 | 0.1160 | 0.9764 | 0.9772 | 0.9768 | 0.9789 |
| 0.0377 | 6.0 | 1596 | 0.1194 | 0.9763 | 0.9776 | 0.9770 | 0.9790 |
| 0.0377 | 7.0 | 1862 | 0.1188 | 0.9740 | 0.9755 | 0.9748 | 0.9777 |
| 0.024 | 8.0 | 2128 | 0.1188 | 0.9762 | 0.9777 | 0.9769 | 0.9793 |
| 0.024 | 9.0 | 2394 | 0.1207 | 0.9774 | 0.9789 | 0.9781 | 0.9802 |
| 0.0184 | 10.0 | 2660 | 0.1207 | 0.9771 | 0.9785 | 0.9778 | 0.9801 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.11.0
|
willcai/wav2vec2-large-xls-r-300m-tr-colab | 880c495acdf1823b7cc5fc76bf6f1a496c4fdf9a | 2022-03-08T03:06:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | willcai | null | willcai/wav2vec2-large-xls-r-300m-tr-colab | 1 | null | transformers | 30,731 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4121
- Wer: 0.3112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1868 | 1.83 | 400 | 0.9812 | 0.8398 |
| 0.691 | 3.67 | 800 | 0.5571 | 0.6298 |
| 0.3555 | 5.5 | 1200 | 0.4676 | 0.4779 |
| 0.2451 | 7.34 | 1600 | 0.4572 | 0.4541 |
| 0.1844 | 9.17 | 2000 | 0.4743 | 0.4389 |
| 0.1541 | 11.01 | 2400 | 0.4583 | 0.4300 |
| 0.1277 | 12.84 | 2800 | 0.4565 | 0.3950 |
| 0.1122 | 14.68 | 3200 | 0.4761 | 0.4087 |
| 0.0975 | 16.51 | 3600 | 0.4654 | 0.3786 |
| 0.0861 | 18.35 | 4000 | 0.4503 | 0.3667 |
| 0.0775 | 20.18 | 4400 | 0.4600 | 0.3581 |
| 0.0666 | 22.02 | 4800 | 0.4350 | 0.3504 |
| 0.0627 | 23.85 | 5200 | 0.4211 | 0.3349 |
| 0.0558 | 25.69 | 5600 | 0.4390 | 0.3333 |
| 0.0459 | 27.52 | 6000 | 0.4218 | 0.3185 |
| 0.0439 | 29.36 | 6400 | 0.4121 | 0.3112 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sunitha/CV_Custom_DS | 1a0a98bb254338bb58c9454045723b27ba8ad9cb | 2022-03-06T06:26:13.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/CV_Custom_DS | 1 | null | transformers | 30,732 | Entry not found |
Freak55/DialoGPT-small-Phoenix-Wright | 7e947c22c67f82200d26fb20e0096048710e2de9 | 2022-03-06T06:59:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Freak55 | null | Freak55/DialoGPT-small-Phoenix-Wright | 1 | null | transformers | 30,733 | ---
tags:
- conversational
--- |
P4RZ1V4L/DialoGPT-medium-tonystark | 80727bbdeee4f10643fa6ee783ebef8dc88b32cf | 2022-03-06T10:22:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | P4RZ1V4L | null | P4RZ1V4L/DialoGPT-medium-tonystark | 1 | null | transformers | 30,734 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
akadriu/wav2vec2-large-xlsr-53-Total2e-4_4 | f4432a42bfdc0c2df2961aa5c92d925a467e6845 | 2022-03-06T19:58:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-Total2e-4_4 | 1 | null | transformers | 30,735 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-Total2e-4_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total2e-4_4
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2474
- Wer: 0.1951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.5015 | 0.1 | 200 | 2.9261 | 0.9707 |
| 2.9197 | 0.2 | 400 | 2.7757 | 0.9707 |
| 1.7594 | 0.3 | 600 | 0.6117 | 0.5746 |
| 1.0908 | 0.4 | 800 | 0.4673 | 0.4530 |
| 0.9441 | 0.5 | 1000 | 0.4142 | 0.4010 |
| 0.8688 | 0.6 | 1200 | 0.3909 | 0.3675 |
| 0.849 | 0.7 | 1400 | 0.3649 | 0.3360 |
| 0.8223 | 0.8 | 1600 | 0.3532 | 0.3334 |
| 0.821 | 0.9 | 1800 | 0.3513 | 0.3185 |
| 0.7839 | 1.0 | 2000 | 0.3373 | 0.3039 |
| 0.714 | 1.1 | 2200 | 0.3210 | 0.2922 |
| 0.7129 | 1.2 | 2400 | 0.3216 | 0.2860 |
| 0.7076 | 1.3 | 2600 | 0.3279 | 0.2843 |
| 0.73 | 1.4 | 2800 | 0.3111 | 0.2662 |
| 0.7256 | 1.5 | 3000 | 0.3032 | 0.2625 |
| 0.72 | 1.6 | 3200 | 0.3066 | 0.2571 |
| 0.6754 | 1.7 | 3400 | 0.2999 | 0.2581 |
| 0.6859 | 1.8 | 3600 | 0.2935 | 0.2562 |
| 0.6966 | 1.9 | 3800 | 0.2858 | 0.2469 |
| 0.6791 | 2.0 | 4000 | 0.2857 | 0.2393 |
| 0.6412 | 2.1 | 4200 | 0.2815 | 0.2392 |
| 0.6356 | 2.2 | 4400 | 0.2836 | 0.2343 |
| 0.6048 | 2.3 | 4600 | 0.2824 | 0.2422 |
| 0.6473 | 2.4 | 4800 | 0.2805 | 0.2316 |
| 0.659 | 2.5 | 5000 | 0.2775 | 0.2262 |
| 0.6412 | 2.6 | 5200 | 0.2729 | 0.2249 |
| 0.6167 | 2.7 | 5400 | 0.2719 | 0.2227 |
| 0.6226 | 2.8 | 5600 | 0.2661 | 0.2193 |
| 0.6168 | 2.9 | 5800 | 0.2615 | 0.2172 |
| 0.6145 | 3.0 | 6000 | 0.2608 | 0.2148 |
| 0.593 | 3.1 | 6200 | 0.2643 | 0.2123 |
| 0.5919 | 3.2 | 6400 | 0.2617 | 0.2131 |
| 0.6115 | 3.3 | 6600 | 0.2589 | 0.2114 |
| 0.5859 | 3.4 | 6800 | 0.2591 | 0.2100 |
| 0.5919 | 3.5 | 7000 | 0.2564 | 0.2103 |
| 0.5873 | 3.6 | 7200 | 0.2572 | 0.2074 |
| 0.561 | 3.7 | 7400 | 0.2561 | 0.2056 |
| 0.5808 | 3.8 | 7600 | 0.2538 | 0.2062 |
| 0.5701 | 3.9 | 7800 | 0.2517 | 0.2029 |
| 0.5722 | 4.0 | 8000 | 0.2523 | 0.2007 |
| 0.5508 | 4.1 | 8200 | 0.2570 | 0.2023 |
| 0.5591 | 4.2 | 8400 | 0.2502 | 0.2029 |
| 0.5697 | 4.3 | 8600 | 0.2478 | 0.1991 |
| 0.5689 | 4.4 | 8800 | 0.2492 | 0.2021 |
| 0.5345 | 4.5 | 9000 | 0.2498 | 0.2005 |
| 0.5726 | 4.6 | 9200 | 0.2492 | 0.1983 |
| 0.5382 | 4.7 | 9400 | 0.2487 | 0.1974 |
| 0.5614 | 4.8 | 9600 | 0.2481 | 0.1957 |
| 0.5568 | 4.9 | 9800 | 0.2477 | 0.1955 |
| 0.5631 | 5.0 | 10000 | 0.2474 | 0.1951 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
adalbertojunior/test-256-uncased-3 | c3160aa02e4da52d5c1fa8f7327dc3218c1fe877 | 2022-03-06T13:38:25.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adalbertojunior | null | adalbertojunior/test-256-uncased-3 | 1 | null | transformers | 30,736 | Entry not found |
princeton-nlp/datamux-ner-2 | d291b4d93253607a578d4d6de39192c6d2bc2c29 | 2022-03-06T17:06:14.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-ner-2 | 1 | null | transformers | 30,737 | Entry not found |
princeton-nlp/datamux-ner-5 | 2d7be9d3bf0126fe7feec8b392ed9a1f546d7342 | 2022-03-06T17:08:02.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-ner-5 | 1 | null | transformers | 30,738 | Entry not found |
princeton-nlp/datamux-ner-20 | 07ae7c4c793789af6bf7d283b7924a4b6fb1884f | 2022-03-06T17:12:26.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-ner-20 | 1 | null | transformers | 30,739 | Entry not found |
princeton-nlp/datamux-ner-40 | d00597575e3ba5072b1bcf4c0ec77fb37beed7db | 2022-03-06T17:13:45.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-ner-40 | 1 | null | transformers | 30,740 | Entry not found |
phosseini/atomic-bert-large | 35da6e6c896b3b7218e0b6b1137d915e55d3f581 | 2022-04-13T05:15:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | phosseini | null | phosseini/atomic-bert-large | 1 | null | transformers | 30,741 | Entry not found |
nairaxo/dev-darija | 0778688a465028635a95776bdbfce93b55282fb6 | 2022-03-20T05:58:06.000Z | [
"wav2vec2",
"feature-extraction",
"ar",
"dataset:commonvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | nairaxo | null | nairaxo/dev-darija | 1 | null | speechbrain | 30,742 | ---
language: "ar"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# About DVoice
DVoice is a community initiative that aims to provide African languages and dialects with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each language. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling the recordings. The DVoice platform currently manages 7 languages: Darija (the Moroccan Arabic dialect), whose dataset appears in this version, as well as Wolof, Mandingo, Serere, Pular, Diola and Soninke.
This Darija ASR model is the first result that we obtained with the constructed dataset.
# wav2vec 2.0 with CTC/Attention trained on DVoice Darija (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on a [DVoice](https://zenodo.org/record/6342622) Darija dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 5.51 | 18.46 | 5.85 | 18.28 |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Darija dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Darija)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="nairaxo/dvoice-darija", savedir="pretrained_models/asr-wav2vec2-dvoice-dar")
asr_model.transcribe_file('./the_path_to_your_audio_file')
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
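For example, the loading call above would then look like this (the only change is the added `run_opts` argument):
```python
from speechbrain.pretrained import EncoderASR
# Same call as in the transcription example, but model and decoding run on the GPU.
asr_model = EncoderASR.from_hparams(source="nairaxo/dvoice-darija", savedir="pretrained_models/asr-wav2vec2-dvoice-dar", run_opts={"device":"cuda"})
asr_model.transcribe_file('./the_path_to_your_audio_file')
```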
### Training
To train the model from scratch, please see our GitHub tutorial [here](https://github.com/AIOXLABS/DVoice).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain |
Splend1dchan/byt5small-squad-5000 | 0b6bf2a8bf75bb12b3ee6e1422a8b6d0c85956cd | 2022-03-07T04:39:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-squad-5000 | 1 | null | transformers | 30,743 | Byt5 trained on squad, input = 512, output = 256, 5000 steps
Tokenizer is Byt5 |
akshaychaudhary/distilbert-base-uncased-finetuned-devops-ner | 2c66a0bbb09196ba020939cb5c3be2046c8522ba | 2022-03-07T06:58:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | akshaychaudhary | null | akshaychaudhary/distilbert-base-uncased-finetuned-devops-ner | 1 | null | transformers | 30,744 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-devops-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-devops-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6065
- Precision: 0.0254
- Recall: 0.1371
- F1: 0.0428
- Accuracy: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 144 | 0.8566 | 0.0300 | 0.1573 | 0.0503 | 0.7742 |
| No log | 2.0 | 288 | 1.3542 | 0.0283 | 0.1532 | 0.0477 | 0.7641 |
| No log | 3.0 | 432 | 1.6065 | 0.0254 | 0.1371 | 0.0428 | 0.7637 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Splend1dchan/byt5small-squad | 9423c0514b6b4007f950344694fa70c5cfa2aa34 | 2022-03-07T15:36:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-squad | 1 | null | transformers | 30,745 | Entry not found |
Kevincp560/distilbart-cnn-12-3-finetuned-pubmed | 6f187736946ff2604fb4bda678c2a2e057b3ab03 | 2022-03-07T15:55:27.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/distilbart-cnn-12-3-finetuned-pubmed | 1 | null | transformers | 30,746 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-3-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 40.5642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-3-finetuned-pubmed
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-3](https://huggingface.co/sshleifer/distilbart-cnn-12-3) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1743
- Rouge1: 40.5642
- Rouge2: 16.9812
- Rougel: 25.3449
- Rougelsum: 36.46
- Gen Len: 141.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.469 | 1.0 | 4000 | 2.2956 | 38.3713 | 15.2594 | 23.6734 | 34.1634 | 141.707 |
| 2.2527 | 2.0 | 8000 | 2.1994 | 39.5939 | 16.2376 | 24.6363 | 35.5106 | 141.831 |
| 2.0669 | 3.0 | 12000 | 2.1780 | 40.078 | 16.6705 | 25.1119 | 35.9605 | 141.8475 |
| 1.9275 | 4.0 | 16000 | 2.1669 | 40.0825 | 16.6169 | 24.9702 | 36.0191 | 141.928 |
| 1.8102 | 5.0 | 20000 | 2.1743 | 40.5642 | 16.9812 | 25.3449 | 36.46 | 141.95 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
sunitha/AQG_CV_Squad | 3c26561d0335136dc6b0f4ba64a7b6ce7d9f56ec | 2022-03-07T10:39:11.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/AQG_CV_Squad | 1 | null | transformers | 30,747 | Entry not found |
Splend1dchan/byt5small-glue-mprc2 | 3bd79d32b9f531382dc1d168207162039321a126 | 2022-03-07T12:47:22.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-glue-mprc2 | 1 | null | transformers | 30,748 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: byt5small-glue-mprc2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5small-glue-mprc2
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0a0+bf2bbd9
- Datasets 1.12.1
- Tokenizers 0.11.6
|
kenjis2542/mt5-small-finetuned-5k-th-to-en | 08dfb7e5d9ca2d2e6db75c1d162ed0d962c7b987 | 2022-03-07T14:11:40.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kenjis2542 | null | kenjis2542/mt5-small-finetuned-5k-th-to-en | 1 | null | transformers | 30,749 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-5k-th-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-5k-th-to-en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
pki/wav2vec2-large-xlsr-53 | 58efcbc8e00a47cdb595097005612239642721e7 | 2022-03-07T18:37:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | pki | null | pki/wav2vec2-large-xlsr-53 | 1 | null | transformers | 30,750 | Entry not found |
huggingtweets/lilbratmia-littlehorney-plusbibi1 | 214916780a0f611bf2146dea159df8bba9cf30a6 | 2022-03-07T21:45:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lilbratmia-littlehorney-plusbibi1 | 1 | null | transformers | 30,751 | ---
language: en
thumbnail: http://www.huggingtweets.com/lilbratmia-littlehorney-plusbibi1/1646689525715/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1386970823681052680/oA_4HBKl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500892464772751365/6uhqt-Jx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1483439308166123530/vKFDbs48_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bibi und Anna & Vanny_Bunny™ & 💞 Mia 💞</div>
<div style="text-align: center; font-size: 14px;">@lilbratmia-littlehorney-plusbibi1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bibi und Anna & Vanny_Bunny™ & 💞 Mia 💞.
| Data | Bibi und Anna | Vanny_Bunny™ | 💞 Mia 💞 |
| --- | --- | --- | --- |
| Tweets downloaded | 1818 | 3230 | 3247 |
| Retweets | 9 | 503 | 134 |
| Short tweets | 341 | 343 | 1189 |
| Tweets kept | 1468 | 2384 | 1924 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hm55g9hx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lilbratmia-littlehorney-plusbibi1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dezdv7k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dezdv7k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lilbratmia-littlehorney-plusbibi1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jiobiala24/wav2vec2-base-cv | c519af7b9d11f5bfa06346391772b4c0f57c392c | 2022-03-08T05:42:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-cv | 1 | null | transformers | 30,752 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-cv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cv
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1562
- Wer: 0.3804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.563 | 3.18 | 500 | 2.9826 | 1.0 |
| 2.0012 | 6.37 | 1000 | 0.9528 | 0.5354 |
| 0.4841 | 9.55 | 1500 | 0.8838 | 0.4325 |
| 0.2748 | 12.74 | 2000 | 0.9437 | 0.4130 |
| 0.1881 | 15.92 | 2500 | 0.9603 | 0.4005 |
| 0.1426 | 19.11 | 3000 | 1.0605 | 0.3955 |
| 0.1134 | 22.29 | 3500 | 1.0733 | 0.3897 |
| 0.0963 | 25.48 | 4000 | 1.1387 | 0.3835 |
| 0.0829 | 28.66 | 4500 | 1.1562 | 0.3804 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
cammy/bart-large-cnn-10k-pad-early-lit | 2a73127266509fae59242accbb4c678184380913 | 2022-03-08T08:20:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-10k-pad-early-lit | 1 | null | transformers | 30,753 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-10k-pad-early-lit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-10k-pad-early-lit
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3758
- Rouge1: 27.7351
- Rouge2: 13.1664
- Rougel: 21.6559
- Rougelsum: 24.648
- Gen Len: 69.343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2516 | 1.0 | 9998 | 0.3540 | 28.1151 | 13.3875 | 22.1496 | 25.1745 | 66.578 |
| 0.1747 | 2.0 | 19996 | 0.3758 | 27.7351 | 13.1664 | 21.6559 | 24.648 | 69.343 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MrAnderson/bert-base-1024-full-trivia | 3850926bcef3c06421a5610bd45f8c24ab25647b | 2022-03-08T10:31:37.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | MrAnderson | null | MrAnderson/bert-base-1024-full-trivia | 1 | null | transformers | 30,754 | Entry not found |
akshaychaudhary/distilbert-base-uncased-finetuned-devops1-ner | 0f307ac0ec4e1db6ff274db2e20dc7bbd60e0bfa | 2022-03-08T09:58:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | akshaychaudhary | null | akshaychaudhary/distilbert-base-uncased-finetuned-devops1-ner | 1 | null | transformers | 30,755 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-devops1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-devops1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9870
- Precision: 0.0572
- Recall: 0.2689
- F1: 0.0944
- Accuracy: 0.7842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.6027 | 0.0484 | 0.2269 | 0.0798 | 0.7861 |
| No log | 2.0 | 144 | 0.8631 | 0.0573 | 0.2857 | 0.0955 | 0.7771 |
| No log | 3.0 | 216 | 0.9870 | 0.0572 | 0.2689 | 0.0944 | 0.7842 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Ariana2022/tape_katy | e20847414d0383946775de43b348ae763eec2780 | 2022-03-08T23:02:16.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | Ariana2022 | null | Ariana2022/tape_katy | 1 | null | transformers | 30,756 | Entry not found |
sanchit-gandhi/wav2vec2-2-rnd-2-layer | aa2b68fecaa314329ca9ecbea1652afd526ae13e | 2022-03-09T09:50:11.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd-2-layer | 1 | null | transformers | 30,757 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2188
- Wer: 0.9238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7093 | 6.73 | 1500 | 5.7514 | 1.2104 |
| 5.642 | 13.45 | 3000 | 5.2188 | 0.9238 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/xlm-roberta-base-finetuned-panx-de | 649d7900cd399ff0620fb3bd66a3cb9dff7f3d3f | 2022-03-09T10:06:47.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | DrishtiSharma | null | DrishtiSharma/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 30,758 | Entry not found |
AlekseyKorshuk/roberta-base-finetuned-ner | 0798cc9cb690af53a42c89639a7ebaf78df94bdf | 2022-03-08T12:33:32.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | AlekseyKorshuk | null | AlekseyKorshuk/roberta-base-finetuned-ner | 1 | null | transformers | 30,759 | Entry not found |
Ramil/wav2vec2-large-xlsr-300m-turkish | 2a06bf2eabdc05c1651bbb1d8e8f8e629d231a0e | 2022-04-05T11:45:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Ramil | null | Ramil/wav2vec2-large-xlsr-300m-turkish | 1 | null | transformers | 30,760 | Entry not found |
ctoraman/RoBERTa-TR-medium-morph-16k | f4f48d530f902825dd14da96e8fc64c7bb047d9d | 2022-04-20T06:57:10.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-morph-16k | 1 | null | transformers | 30,761 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Morph-level 16k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Morph-level, which means that text is split according to a Turkish morphological analyzer (Zemberek). Vocabulary size is 16.7k.
## Note that this model needs a preprocessing step before running, because the tokenizer file is not a morphological analyzer. That is, the test dataset cannot be split into morphemes with the tokenizer file. The user needs to process any test dataset with a Turkish morphological analyzer (Zemberek in this case) before running evaluation.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, AutoModelForSequenceClassification, PreTrainedTokenizerFast

model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
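As an illustration only, a minimal encoding sketch follows; `morph_segmented_text` is a placeholder for the output of a Zemberek analysis step, which is assumed to happen outside this snippet.
```
# Minimal sketch: `model` and `tokenizer` are assumed to be loaded as above.
# `morph_segmented_text` must already be split into morphemes by a Turkish
# morphological analyzer (e.g. Zemberek); the tokenizer does NOT do this.
morph_segmented_text = "..."  # placeholder for analyzer output
inputs = tokenizer(morph_segmented_text, return_tensors="pt", truncation=True, max_length=514)
outputs = model(**inputs)
hidden_states = outputs.last_hidden_state  # contextual morpheme representations
```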
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
OrfeasTsk/bert-base-uncased-finetuned-triviaqa-large-batch | 631b709b588913a6284eed31cba1613ec1bc9f87 | 2022-03-08T18:35:17.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | OrfeasTsk | null | OrfeasTsk/bert-base-uncased-finetuned-triviaqa-large-batch | 1 | null | transformers | 30,762 | { 'max_seq_length': 384,
'batch_size': 24,
'learning_rate': {'val': 3e-5, 'scheduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
} |
OrfeasTsk/bert-base-uncased-finetuned-squadv2-large-batch | 64fa5226d891c120383402983d27e03097d778da | 2022-03-08T18:34:54.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | OrfeasTsk | null | OrfeasTsk/bert-base-uncased-finetuned-squadv2-large-batch | 1 | null | transformers | 30,763 | { 'max_seq_length': 384,
'batch_size': 24,
'learning_rate': {'val': 3e-5, 'scheduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
} |
anton-l/xtreme_s_xlsr_minds14_fr | 6e9d9f91f4770fc564f93c63282e9d1836ce0865 | 2022-03-11T13:39:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:xtreme_s",
"transformers",
"automatic-speech-recognition",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_minds14_fr | 1 | 1 | transformers | 30,764 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- accuracy
model-index:
- name: xtreme_s_xlsr_minds14_fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14_fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3922
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9751 | 10.0 | 50 | 2.0203 | 0.3462 |
| 0.4275 | 20.0 | 100 | 0.7434 | 0.7981 |
| 0.2484 | 30.0 | 150 | 0.7686 | 0.8462 |
| 0.0263 | 40.0 | 200 | 0.3922 | 0.9135 |
| 0.0118 | 50.0 | 250 | 0.4859 | 0.9038 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln26 | 1aa1cf715a5c2dd1708c97d91d98bbbe903559a6 | 2022-03-18T02:37:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln26 | 1 | null | transformers | 30,765 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln26")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln26")
```
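As a rough usage sketch (the sampling settings below are illustrative assumptions, not the author's recommendations), a prompt in one of the formats shown further down can be passed to `model.generate`:
```
# Prompt taken from the examples below; the model completes the Lincoln-style line.
prompt = "informal english: space is huge and needs to be explored.\nTranslated into the Style of Abraham Lincoln:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.92, temperature=0.85)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```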
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
M-Quan/wav2vec2-demo | bbbd238e2f6da3ccdf65a783e3473b82540f0c8e | 2022-03-09T06:20:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | M-Quan | null | M-Quan/wav2vec2-demo | 1 | null | transformers | 30,766 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-demo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4239
- Wer: 0.3508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4093 | 4.0 | 500 | 1.2405 | 0.8685 |
| 0.5597 | 8.0 | 1000 | 0.4538 | 0.4437 |
| 0.2113 | 12.0 | 1500 | 0.4106 | 0.3749 |
| 0.1188 | 16.0 | 2000 | 0.4609 | 0.3775 |
| 0.0776 | 20.0 | 2500 | 0.4239 | 0.3508 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
YoungDeuk/t5-small-finetuned-xsum | 7e2caa442b26c743a6c6e087c44d6e8492c49821 | 2022-03-09T01:51:02.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | YoungDeuk | null | YoungDeuk/t5-small-finetuned-xsum | 1 | null | transformers | 30,767 | Entry not found |
Splend1dchan/byt5small-squad1024-from6000steps | c72adc6af5876908a3cecad9ea72cecab7513a94 | 2022-03-10T18:47:29.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5small-squad1024-from6000steps | 1 | null | transformers | 30,768 | Entry not found |
anton-l/xtreme_s_xlsr_covost2_ru_en | 8160776e83cb2318ee2f2c8a8f81dada2b74756f | 2022-03-10T16:04:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_covost2_ru_en | 1 | null | transformers | 30,769 | Entry not found |
akshaychaudhary/distilbert-base-uncased-finetuned-combinedmodel1-ner | 88ec7ebbf7660bcc9db7b9b5485ba48466266d81 | 2022-03-09T12:59:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | akshaychaudhary | null | akshaychaudhary/distilbert-base-uncased-finetuned-combinedmodel1-ner | 1 | null | transformers | 30,770 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-combinedmodel1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-combinedmodel1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3126
- Precision: 0.0289
- Recall: 0.1443
- F1: 0.0481
- Accuracy: 0.7058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 312 | 1.5290 | 0.0431 | 0.2278 | 0.0725 | 0.6990 |
| 0.1106 | 2.0 | 624 | 2.0923 | 0.0341 | 0.1722 | 0.0569 | 0.7041 |
| 0.1106 | 3.0 | 936 | 2.3126 | 0.0289 | 0.1443 | 0.0481 | 0.7058 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
jfealko/wav2vec2-large-xls-r-300m-irish-local | ad5ea3d78844c3705614056273e72de5052fd567 | 2022-03-09T15:01:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jfealko | null | jfealko/wav2vec2-large-xls-r-300m-irish-local | 1 | null | transformers | 30,771 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-irish-local
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-irish-local
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0788
- Wer: 0.7527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 90
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.3839 | 2.94 | 50 | 3.3021 | 1.0 |
| 3.0703 | 5.88 | 100 | 3.1749 | 1.0 |
| 3.1744 | 8.82 | 150 | 3.0452 | 1.0 |
| 2.9719 | 11.76 | 200 | 2.9767 | 1.0 |
| 2.9539 | 14.71 | 250 | 2.9992 | 1.0 |
| 2.9438 | 17.65 | 300 | 2.9767 | 1.0 |
| 2.9296 | 20.59 | 350 | 2.9475 | 1.0 |
| 2.9269 | 23.53 | 400 | 2.9402 | 1.0 |
| 2.9116 | 26.47 | 450 | 2.9255 | 1.0 |
| 2.8326 | 29.41 | 500 | 2.7238 | 1.0 |
| 2.5758 | 32.35 | 550 | 2.3599 | 0.9900 |
| 2.1242 | 35.29 | 600 | 1.8478 | 0.9491 |
| 1.4603 | 38.24 | 650 | 1.5991 | 0.9002 |
| 1.0287 | 41.18 | 700 | 1.5931 | 0.8434 |
| 0.7687 | 44.12 | 750 | 1.6493 | 0.8253 |
| 0.571 | 47.06 | 800 | 1.6889 | 0.8057 |
| 0.4598 | 50.0 | 850 | 1.7521 | 0.7978 |
| 0.3902 | 52.94 | 900 | 1.9074 | 0.7975 |
| 0.318 | 55.88 | 950 | 1.9352 | 0.8133 |
| 0.3026 | 58.82 | 1000 | 2.0157 | 0.8028 |
| 0.2862 | 61.76 | 1050 | 1.9231 | 0.7720 |
| 0.2696 | 64.71 | 1100 | 1.9256 | 0.7644 |
| 0.2528 | 67.65 | 1150 | 2.0277 | 0.7741 |
| 0.2051 | 70.59 | 1200 | 1.9921 | 0.7550 |
| 0.2018 | 73.53 | 1250 | 2.0416 | 0.7615 |
| 0.187 | 76.47 | 1300 | 2.0861 | 0.7635 |
| 0.1749 | 79.41 | 1350 | 2.0926 | 0.7577 |
| 0.1713 | 82.35 | 1400 | 2.0632 | 0.7533 |
| 0.1518 | 85.29 | 1450 | 2.0903 | 0.7542 |
| 0.16 | 88.24 | 1500 | 2.0788 | 0.7527 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ra1/t5-small-finetuned-xsum | 40e3421948fa2fb25131048992d8abfaaf0c7f22 | 2022-03-16T16:50:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ra1 | null | ra1/t5-small-finetuned-xsum | 1 | null | transformers | 30,772 | Entry not found |
vymn/Brain | 9b6a77b3dc080104a4c5eee290698d1b21163191 | 2022-03-11T23:58:10.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vymn | null | vymn/Brain | 1 | null | transformers | 30,773 | Entry not found |
negfir/squeezebert-uncased-finetuned-squad | a6f42c1088851bbc2488e5a4bc903d7e7d754faa | 2022-03-09T18:39:58.000Z | [
"pytorch",
"tensorboard",
"squeezebert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | negfir | null | negfir/squeezebert-uncased-finetuned-squad | 1 | null | transformers | 30,774 | Entry not found |
negfir/Distill_SQuAD | b852ec1545fe092017fdb9665ce8dcde9e33c7e8 | 2022-03-30T16:08:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/Distill_SQuAD | 1 | null | transformers | 30,775 | Entry not found |
paopow/t5_base2 | deb3cc8b206fc31827734ca4c6036843d5aaece5 | 2022-03-10T01:26:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | paopow | null | paopow/t5_base2 | 1 | null | transformers | 30,776 | Entry not found |
BeanBoi50404/DialoGPT-small-PeppaPigButBetter | d46dd8f1cd9fda62c7f32add6468682d0073e9c6 | 2022-03-10T03:27:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BeanBoi50404 | null | BeanBoi50404/DialoGPT-small-PeppaPigButBetter | 1 | null | transformers | 30,777 | ---
tags:
- conversational
---
# Peppa Pig DialoGPT Model |
cammy/bart-large-cnn-100k-lit-evalMA | cbb495b33c2328ec49e4c224a84de245156877b3 | 2022-03-11T10:34:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-100k-lit-evalMA | 1 | null | transformers | 30,778 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-100k-lit-evalMA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100k-lit-evalMA
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7715
- eval_rouge1: 29.7037
- eval_rouge2: 15.0234
- eval_rougeL: 23.5169
- eval_rougeLsum: 26.8682
- eval_gen_len: 68.1209
- eval_runtime: 28898.0987
- eval_samples_per_second: 0.346
- eval_steps_per_second: 0.346
- epoch: 1.0
- step: 100000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/bart-large-cnn-100-lit-evalMA | c2fbdf1744fd39a8f0a7fa4110b084eab568d075 | 2022-03-10T07:49:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-100-lit-evalMA | 1 | null | transformers | 30,779 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-100-lit-evalMA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-lit-evalMA
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1514
- eval_rouge1: 27.8026
- eval_rouge2: 11.2998
- eval_rougeL: 21.4708
- eval_rougeLsum: 24.6333
- eval_gen_len: 62.5
- eval_runtime: 25.6587
- eval_samples_per_second: 0.39
- eval_steps_per_second: 0.39
- epoch: 2.0
- step: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ArnavL/yelp-pretrained | 1a3014cb14dbdb15ecd961dbe0bb2f7d2a8bea2e | 2022-03-10T06:45:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ArnavL | null | ArnavL/yelp-pretrained | 1 | null | transformers | 30,780 | ---
license: mit
---
|
M-Quan/wav2vec2-E | 00b1800c738ed7bead2cfb721a4848d70f60e782 | 2022-03-10T13:41:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | M-Quan | null | M-Quan/wav2vec2-E | 1 | null | transformers | 30,781 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4832
- Wer: 0.3432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5034 | 4.0 | 500 | 1.1620 | 0.8995 |
| 0.5738 | 8.0 | 1000 | 0.4625 | 0.4396 |
| 0.2142 | 12.0 | 1500 | 0.4791 | 0.3965 |
| 0.1219 | 16.0 | 2000 | 0.4677 | 0.3703 |
| 0.0854 | 20.0 | 2500 | 0.4782 | 0.3544 |
| 0.0587 | 24.0 | 3000 | 0.4680 | 0.3516 |
| 0.044 | 28.0 | 3500 | 0.4832 | 0.3432 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
huak95/LST_classic-th-to-en-pt2.2 | 10446ea17c3b36c29cb3101cb6d0bff541ed19f4 | 2022-03-10T10:30:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/LST_classic-th-to-en-pt2.2 | 1 | null | transformers | 30,782 | Entry not found |
huak95/tmp_trainer | 8add6db14eace5786f2784aadac6fec97310fa24 | 2022-03-10T10:53:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/tmp_trainer | 1 | null | transformers | 30,783 | ---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [pong/opus-mt-en-mul-finetuned-en-to-th](https://huggingface.co/pong/opus-mt-en-mul-finetuned-en-to-th) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huak95/TNANA-attacut-th-to-en-pt2 | 854edb03f1e987270149f99b967a3174a7745dc4 | 2022-03-11T03:17:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | huak95 | null | huak95/TNANA-attacut-th-to-en-pt2 | 1 | null | transformers | 30,784 | Entry not found |
Prime2911/DialoGPT-small-handsomejack | 222b45605c57d9aaaffa09e4a248a8ea6129f5c2 | 2022-03-10T18:28:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Prime2911 | null | Prime2911/DialoGPT-small-handsomejack | 1 | null | transformers | 30,785 | ---
tags:
- conversational
---
# Handsome Jack DialoGPT Model |
mfleck/wav2vec2-large-xls-r-300m-german-with-lm | 33d2fb075f558104f2bc172bafd75efe8b31f42f | 2022-03-18T16:48:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mfleck | null | mfleck/wav2vec2-large-xls-r-300m-german-with-lm | 1 | 0 | transformers | 30,786 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-german-with-lm
results: []
---
# wav2vec2-large-xls-r-300m-german-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the German set of the Common Voice dataset.
It achieves a word error rate (WER) of 8.8% on the evaluation set.
## Model description
German wav2vec2-xls-r-300m trained on the full train split of the Common Voice dataset, combined with an n-gram language model for decoding.
Full code available in [my Github repository](https://github.com/MichaelFleck92/asr-wav2vec)
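The n-gram language model itself is not built in the training script further down; a rough sketch of how such a decoder is typically attached, assuming a KenLM ARPA file (the path `5gram.arpa` is a placeholder for a model trained separately on German text) and the `pyctcdecode`/`Wav2Vec2ProcessorWithLM` APIs:
```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2ProcessorWithLM

# Tokenizer and feature extractor of the fine-tuned acoustic model (LM-free parts).
repo = "mfleck/wav2vec2-large-xls-r-300m-german-with-lm"
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(repo)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(repo)

# Sort the vocabulary by token id so the decoder labels line up with the CTC head.
vocab = tokenizer.get_vocab()
sorted_labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# "5gram.arpa" is a placeholder for a KenLM n-gram model trained on German text.
decoder = build_ctcdecoder(labels=sorted_labels, kenlm_model_path="5gram.arpa")

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=feature_extractor,
    tokenizer=tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-large-xls-r-300m-german-with-lm")
```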
## Citation
Feel free to cite this work as follows:
```
@misc{mfleck/wav2vec2-large-xls-r-300m-german-with-lm,
title={XLS-R-300 Wav2Vec2 German with language model},
author={Fleck, Michael},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mfleck/wav2vec2-large-xls-r-300m-german-with-lm}},
year={2022}
}
```
## Intended uses & limitations
Inference Usage
```python
from transformers import pipeline
pipe = pipeline(model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
output = pipe("/path/to/file.wav",chunk_length_s=5, stride_length_s=1)
print(output["text"])
```
## Training and evaluation data
Script used for training (takes about 80 hours on a single A100 40GB)
```python
import random
import re
import json
from typing import Any, Dict, List, Optional, Union
import pandas as pd
import numpy as np
import torch
# import soundfile
from datasets import load_dataset, load_metric, Audio
from dataclasses import dataclass, field
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor, TrainingArguments, Trainer, Wav2Vec2ForCTC
'''
Most parts of this script follow the tutorial: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
'''
common_voice_train = load_dataset("common_voice", "de", split="train+validation")
# Use train dataset with less training data
#common_voice_train = load_dataset("common_voice", "de", split="train[:3%]")
common_voice_test = load_dataset("common_voice", "de", split="test")
# Remove unused columns
common_voice_train = common_voice_train.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
# Remove examples containing characters that do not occur in German
print(len(common_voice_train))
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
common_voice_train = common_voice_train.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
common_voice_test = common_voice_test.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
print(len(common_voice_train))
# Remove special chars from transcripts
chars_to_remove_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
def remove_special_characters(batch):
batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower()
return batch
common_voice_train = common_voice_train.map(remove_special_characters, num_proc=10)
common_voice_test = common_voice_test.map(remove_special_characters, num_proc=10)
# Show some random transcripts to verify that preprocessing worked as expected
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
print(str(dataset[picks]))
show_random_elements(common_voice_train.remove_columns(["path","audio"]))
# Extract all characters that occur in the datasets and add the wav2vec special tokens
def extract_all_chars(batch):
all_text = " ".join(batch["sentence"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
vocab_train = common_voice_train.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_train.column_names)
vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names)
vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(sorted(vocab_list))}
vocab_dict
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
len(vocab_dict)
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
# Create tokenizer and repo at Huggingface
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
repo_name = "wav2vec2-large-xls-r-300m-german-with-lm"
tokenizer.push_to_hub(repo_name)
print("pushed to hub")
# Create feature extractor and processor
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# Cast audio column
common_voice_train = common_voice_train.cast_column("audio", Audio(sampling_rate=16_000))
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))
# Convert audio signal to array and 16khz sampling rate
def prepare_dataset(batch):
audio = batch["audio"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
# Save an audio file to check if it gets loaded correctly
# soundfile.write("/home/debian/trainnew/test.wav",batch["input_values"],audio["sampling_rate"])
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["sentence"]).input_ids
return batch
common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)
print("dataset prepared")
@dataclass
class DataCollatorCTCWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
processor (:class:`~transformers.Wav2Vec2Processor`)
The processor used for processing the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
"""
processor: Wav2Vec2Processor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need
# different padding methods
input_features = [{"input_values": feature["input_values"]} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
with self.processor.as_target_processor():
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
# Use word error rate as metric
wer_metric = load_metric("wer")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
# Model and training parameters
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-xls-r-300m",
attention_dropout=0.094,
hidden_dropout=0.01,
feat_proj_dropout=0.04,
mask_time_prob=0.08,
layerdrop=0.04,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir=repo_name,
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=20,
gradient_checkpointing=True,
fp16=True,
save_steps=5000,
eval_steps=5000,
logging_steps=100,
learning_rate=1e-4,
warmup_steps=500,
save_total_limit=3,
push_to_hub=True,
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=common_voice_train,
eval_dataset=common_voice_test,
tokenizer=processor.feature_extractor,
)
# Start fine tuning
trainer.train()
# When done push final model to Huggingface hub
trainer.push_to_hub()
```
The model achieves a word error rate of 8.8%, measured with the following script:
```python
import argparse
import re
from typing import Dict
import torch
from datasets import Audio, Dataset, load_dataset, load_metric
from transformers import AutoFeatureExtractor, pipeline
# load dataset
dataset = load_dataset("common_voice", "de", split="test")
# use only 1% of data
#dataset = load_dataset("common_voice", "de", split="test[:1%]")
# load processor
feature_extractor = AutoFeatureExtractor.from_pretrained("mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
sampling_rate = feature_extractor.sampling_rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
# load eval pipeline
# device=0 means GPU, use device=-1 for CPU
asr = pipeline("automatic-speech-recognition", model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm", device=0)
# Remove examples containing characters that do not occur in German
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
dataset = dataset.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
# map function to decode audio
def map_to_pred(batch):
prediction = asr(batch["audio"]["array"], chunk_length_s=5, stride_length_s=1)
# Print the automatically generated transcript
#print(str(prediction))
batch["prediction"] = prediction["text"]
text = batch["sentence"]
batch["target"] = re.sub(chars_to_ignore_regex, "", text.lower()) + " "
return batch
# run inference on all examples
result = dataset.map(map_to_pred, remove_columns=dataset.column_names)
# load metric
wer = load_metric("wer")
cer = load_metric("cer")
# compute metrics
wer_result = wer.compute(references=result["target"], predictions=result["prediction"])
cer_result = cer.compute(references=result["target"], predictions=result["prediction"])
# print results
result_str = f"WER: {wer_result}\n" f"CER: {cer_result}"
print(result_str)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1396 | 1.42 | 5000 | 0.1449 | 0.1479 |
| 0.1169 | 2.83 | 10000 | 0.1285 | 0.1286 |
| 0.0938 | 4.25 | 15000 | 0.1277 | 0.1230 |
| 0.0924 | 5.67 | 20000 | 0.1305 | 0.1191 |
| 0.0765 | 7.09 | 25000 | 0.1256 | 0.1158 |
| 0.0749 | 8.5 | 30000 | 0.1186 | 0.1092 |
| 0.066 | 9.92 | 35000 | 0.1173 | 0.1068 |
| 0.0581 | 11.34 | 40000 | 0.1225 | 0.1030 |
| 0.0582 | 12.75 | 45000 | 0.1153 | 0.0999 |
| 0.0507 | 14.17 | 50000 | 0.1182 | 0.0971 |
| 0.0491 | 15.59 | 55000 | 0.1136 | 0.0939 |
| 0.045 | 17.01 | 60000 | 0.1140 | 0.0914 |
| 0.0395 | 18.42 | 65000 | 0.1160 | 0.0902 |
| 0.037 | 19.84 | 70000 | 0.1148 | 0.0882 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
OrfeasTsk/bert-base-uncased-finetuned-quac-large-batch | 6a5357f24e19e5a4261b3d11f2c08d1868934704 | 2022-03-10T17:29:10.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | OrfeasTsk | null | OrfeasTsk/bert-base-uncased-finetuned-quac-large-batch | 1 | null | transformers | 30,787 | { 'max_seq_length': 384,
'batch_size': 24,
'learning_rate': {'val': 3e-5, 'schelduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
} |
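Beyond the hyperparameters above, no usage snippet is given; a minimal sketch, assuming the standard `question-answering` pipeline works with this checkpoint (question and context are illustrative):
```python
from transformers import pipeline

# Extractive QA over a passage; max_seq_length=384 matches the fine-tuning setup above.
qa = pipeline("question-answering", model="OrfeasTsk/bert-base-uncased-finetuned-quac-large-batch")
answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This BERT base checkpoint was fine-tuned on the QuAC conversational question answering dataset.",
)
print(answer)
```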
OrfeasTsk/bert-base-uncased-finetuned-newsqa-large-batch | a9fd527d78f7bf6b23f49d21045d04546f97a765 | 2022-03-10T21:28:01.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | OrfeasTsk | null | OrfeasTsk/bert-base-uncased-finetuned-newsqa-large-batch | 1 | null | transformers | 30,788 | { 'max_seq_length': 384,
'batch_size': 24,
'learning_rate': {'val': 3e-5, 'schelduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
} |
atlantis/xlm-roberta-base-finetuned-panx-de | d1010e2c35999f5d0a7fbd985418f702fd8d8b9d | 2022-03-11T01:17:07.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | atlantis | null | atlantis/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 30,789 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8550872422388397
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1333
- F1: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
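A minimal sketch for trying the checkpoint on German NER, assuming the standard `token-classification` pipeline applies (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="atlantis/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```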
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1573 | 0.8137 |
| 0.2142 | 2.0 | 526 | 0.1386 | 0.8466 |
| 0.2142 | 3.0 | 789 | 0.1333 | 0.8551 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lijingxin/xlm-roberta-base-finetuned-panx-it | b7821118c4ad48a7a9f0951e1f1449486d86642d | 2022-03-11T02:22:47.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | lijingxin | null | lijingxin/xlm-roberta-base-finetuned-panx-it | 1 | null | transformers | 30,790 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.830592105263158
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2400
- F1: 0.8306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8118 | 1.0 | 70 | 0.3471 | 0.7047 |
| 0.2869 | 2.0 | 140 | 0.2679 | 0.8043 |
| 0.1762 | 3.0 | 210 | 0.2400 | 0.8306 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
lijingxin/xlm-roberta-base-finetuned-panx-en | 56ebad957c94f2472d305b22f472a6bfac9e773e | 2022-03-11T02:25:33.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | lijingxin | null | lijingxin/xlm-roberta-base-finetuned-panx-en | 1 | null | transformers | 30,791 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7043040804918949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3814
- F1: 0.7043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1472 | 1.0 | 50 | 0.5820 | 0.4600 |
| 0.5186 | 2.0 | 100 | 0.4105 | 0.6645 |
| 0.3599 | 3.0 | 150 | 0.3814 | 0.7043 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
lijingxin/xlm-roberta-base-finetuned-panx-all | 40f4214673ceaa946bd53fb536cd50d456ee2244 | 2022-03-11T02:47:18.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | lijingxin | null | lijingxin/xlm-roberta-base-finetuned-panx-all | 1 | null | transformers | 30,792 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- F1: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3036 | 1.0 | 835 | 0.1888 | 0.8068 |
| 0.1585 | 2.0 | 1670 | 0.1763 | 0.8415 |
| 0.1027 | 3.0 | 2505 | 0.1748 | 0.8555 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SAI2-EXP/TNANA-en-th-align-finetuned | d7c0f8504a14b794ca3e6ee43a84625f5f936aa5 | 2022-03-10T10:52:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SAI2-EXP | null | SAI2-EXP/TNANA-en-th-align-finetuned | 1 | null | transformers | 30,793 | Entry not found |
momo/MOTOD_fine-tuning | ed8a52a62e27a1514355f678c16a8acb2231ec20 | 2022-03-11T06:48:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | momo | null | momo/MOTOD_fine-tuning | 1 | null | transformers | 30,794 | ---
license: apache-2.0
---
|
ChanP/finetuned-th-to-en | 0e83377c162255bb9f74fef3fee8a8e839df4f0a | 2022-03-11T08:04:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ChanP | null | ChanP/finetuned-th-to-en | 1 | null | transformers | 30,795 | Hi |
zuppif/maskformer-swin-large-coco | 34622e04b9f0fafd212809f2e665735fd9f757a3 | 2022-03-11T14:21:44.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-large-coco | 1 | null | transformers | 30,796 | Entry not found |
zuppif/maskformer-swin-tiny-coco | ab304130d9a84af0042a9d38e1af58e38714c224 | 2022-03-11T14:24:53.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-tiny-coco | 1 | null | transformers | 30,797 | Entry not found |
zuppif/maskformer-swin-base-coco | 41a395d4677c6721a52b353746e731e8d91d234d | 2022-03-11T14:25:46.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-base-coco | 1 | null | transformers | 30,798 | Entry not found |
zuppif/maskformer-swin-base-ade | 2414a16c60f1463efcba694a0b4ff6b8a764a4cf | 2022-03-11T14:27:01.000Z | [
"pytorch",
"maskformer",
"transformers"
] | null | false | zuppif | null | zuppif/maskformer-swin-base-ade | 1 | null | transformers | 30,799 | Entry not found |