| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 00:44:55 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (519 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 00:44:41 |
| card | string (length) | 11 | 1.01M |

Each record below lists these columns in order, separated by `|`.
huggingtweets/sporeball | huggingtweets | 2022-01-05T08:02:01Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/sporeball/1641369716297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365405536401776642/Z17NbuYy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lux</div>
<div style="text-align: center; font-size: 14px;">@sporeball</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lux.
| Data | lux |
| --- | --- |
| Tweets downloaded | 1150 |
| Retweets | 171 |
| Short tweets | 120 |
| Tweets kept | 859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w9y6gn1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sporeball's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tg3n5a5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tg3n5a5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sporeball')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
MingZhong/DialogLED-large-5120 | MingZhong | 2022-01-05T07:36:41Z | 67 | 7 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"arxiv:2109.02492",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | [DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492).
## Introduction
DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and is further pre-trained on a large amount of long dialogue data, with window-based denoising as the pre-training task. This is the large version of DialogLED; its input length was limited to 5,120 tokens in the pre-training phase.
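The card does not include an inference snippet. Below is a minimal sketch, assuming the checkpoint loads with the standard Transformers seq2seq auto classes (the record's tags list `led` and `text2text-generation`); the sample dialogue and generation settings are illustrative only.
```python
# Minimal inference sketch (assumption: the checkpoint works with the standard
# seq2seq auto classes); the dialogue text and generation settings are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "MingZhong/DialogLED-large-5120"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = "#Person1#: How did the meeting go? #Person2#: It ran long, but we agreed on the budget."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=5120)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```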
## Finetuning for Downstream Tasks
Please refer to [our GitHub page](https://github.com/microsoft/DialogLM). |
rdpatilds/con-nlu | rdpatilds | 2022-01-05T05:31:42Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: con-nlu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# con-nlu
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
abdelkader/distilbert-base-uncased-finetuned-emotion | abdelkader | 2022-01-04T23:18:05Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215604730468001
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9215
- F1: 0.9216
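Since the card omits a usage example, here is a minimal sketch assuming the standard `text-classification` pipeline; the example sentence is illustrative only.
```python
# Minimal usage sketch (assumption: standard text-classification pipeline);
# the example sentence is illustrative only.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abdelkader/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': ...}] -- label names depend on the emotion dataset config
```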
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8007 | 1.0 | 250 | 0.3082 | 0.907 | 0.9045 |
| 0.2438 | 2.0 | 500 | 0.2162 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huawei-noah/JABER | huawei-noah | 2022-01-04T20:19:57Z | 1 | 0 | null | [
"pytorch",
"arxiv:2112.04329",
"region:us"
]
| null | 2022-03-02T23:29:05Z | # Overview
<p align="center">
<img src="https://avatars.githubusercontent.com/u/12619994?s=200&v=4" width="150">
</p>
<!-- -------------------------------------------------------------------------------- -->
JABER (Junior Arabic BERt) is a 12-layer Arabic pretrained Language Model.
JABER obtained rank one on the [ALUE leaderboard](https://www.alue.org/leaderboard) as of `01/09/2021`.
This model is **only compatible** with the code in [this GitHub repo](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/JABER-PyTorch); it is not supported by the [Transformers](https://github.com/huggingface/transformers) library.
## Citation
Please cite the following [paper](https://arxiv.org/abs/2112.04329) when using our code and model:
``` bibtex
@misc{ghaddar2021jaber,
title={JABER: Junior Arabic BERt},
author={Abbas Ghaddar and Yimeng Wu and Ahmad Rashid and Khalil Bibi and Mehdi Rezagholizadeh and Chao Xing and Yasheng Wang and Duan Xinyu and Zhefeng Wang and Baoxing Huai and Xin Jiang and Qun Liu and Philippe Langlais},
year={2021},
eprint={2112.04329},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
huggingtweets/funnyordie | huggingtweets | 2022-01-04T19:39:10Z | 104 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/894956741573525504/YFg6jiNP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Funny Or Die</div>
<div style="text-align: center; font-size: 14px;">@funnyordie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Funny Or Die.
| Data | Funny Or Die |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 237 |
| Short tweets | 190 |
| Tweets kept | 2823 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zjkuy05u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @funnyordie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jaeb619) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jaeb619/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/funnyordie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bhan/distilbert-base-uncased-finetuned-squad | bhan | 2022-01-04T19:20:26Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
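As a rough illustration, the hyperparameters above map onto `TrainingArguments` as sketched below; `output_dir` is an assumption, and fields not listed in the card (including the Adam betas and epsilon, which match the defaults) are left unset.
```python
# Rough mapping of the listed hyperparameters onto TrainingArguments;
# output_dir is assumed, unlisted fields keep their defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```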
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 5.8757 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
Khanh/bert-base-multilingual-cased-finetuned-viquad | Khanh | 2022-01-04T19:07:54Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-viquad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 2.5534 |
| No log | 2.0 | 130 | 2.1165 |
| No log | 3.0 | 195 | 1.9815 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Khanh/xlm-roberta-base-finetuned-squad | Khanh | 2022-01-04T17:49:35Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7665 | 1.0 | 2295 | 0.5231 |
| 0.5236 | 2.0 | 4590 | 0.5539 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
NbAiLab/roberta_des_512_4e4 | NbAiLab | 2022-01-04T16:46:20Z | 3 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | Just for performing some experiments. Do not use.
|
Khanh/distilbert-base-multilingual-cased-finetuned-squad | Khanh | 2022-01-04T15:53:15Z | 82 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.923 | 1.0 | 579 | 0.8439 |
| 0.8479 | 2.0 | 1158 | 0.6784 |
| 0.6148 | 3.0 | 1737 | 0.6587 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nvidia/megatron-bert-uncased-345m | nvidia | 2022-01-04T15:16:39Z | 0 | 7 | null | [
"arxiv:1909.08053",
"region:us"
]
| null | 2022-03-02T23:29:05Z | <!---
# ##############################################################################################
#
# Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ##############################################################################################
-->
[Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model is a bidirectional transformer in the style of BERT, trained on text sourced from Wikipedia, RealNews, OpenWebText, and CC-Stories. It contains 345 million parameters and is made up of 24 layers and 16 attention heads with a hidden size of 1024.
Find more information at [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
# How to run Megatron BERT using Transformers
## Prerequisites
In this guide, we run all the commands from a folder called `$MYDIR`, defined as follows (in `bash`):
```
export MYDIR=$HOME
```
Feel free to change the location at your convenience.
To run some of the commands below, you'll have to clone `Transformers`.
```
git clone https://github.com/huggingface/transformers.git $MYDIR/transformers
```
## Get the checkpoint from the NVIDIA GPU Cloud
You must create a directory called `nvidia/megatron-bert-uncased-345m`.
```
mkdir -p $MYDIR/nvidia/megatron-bert-uncased-345m
```
You can download the checkpoint from the [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m). For that, you have to [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation on downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
Alternatively, you can directly download the checkpoint using:
```
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O $MYDIR/nvidia/megatron-bert-uncased-345m/checkpoint.zip
```
## Converting the checkpoint
In order to be loaded into `Transformers`, the checkpoint has to be converted. You should run the following commands for that purpose.
Those commands will create `config.json` and `pytorch_model.bin` in `$MYDIR/nvidia/megatron-bert-{cased,uncased}-345m`.
You can move those files to different directories if needed.
```
python3 $MYDIR/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py $MYDIR/nvidia/megatron-bert-uncased-345m/checkpoint.zip
```
As explained in [PR #14956](https://github.com/huggingface/transformers/pull/14956), if you get the following exception when running this conversion script:
```
ModuleNotFoundError: No module named 'megatron.model.enums'
```
you need to tell python where to find the clone of Megatron-LM, e.g.:
```
cd /tmp
git clone https://github.com/NVIDIA/Megatron-LM
PYTHONPATH=/tmp/Megatron-LM python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py ...
```
Or, if you already have it cloned elsewhere, simply adjust the path accordingly.
If the training was done using a Megatron-LM fork, e.g. [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), then you may need to have that fork on your path instead, i.e., /path/to/Megatron-DeepSpeed.
## Masked LM
The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform a `Masked LM` task.
```
import os
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM
# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-uncased-345m')
# The path to the config/checkpoint (see the conversion step above).
directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-uncased-345m')
# Load the model from $MYDIR/nvidia/megatron-bert-uncased-345m.
model = MegatronBertForMaskedLM.from_pretrained(directory)
# Copy to the device and use FP16.
assert torch.cuda.is_available()
device = torch.device("cuda")
model.to(device)
model.eval()
model.half()
# Create inputs (from the BERT example page).
input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device)
label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device)
# Run the model.
with torch.no_grad():
output = model(**input, labels=label)
print(output)
```
## Next sentence prediction
The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform next
sentence prediction.
```
import os
import torch
from transformers import BertTokenizer, MegatronBertForNextSentencePrediction
# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-uncased-345m')
# The path to the config/checkpoint (see the conversion step above).
directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-uncased-345m')
# Load the model from $MYDIR/nvidia/megatron-bert-uncased-345m.
model = MegatronBertForNextSentencePrediction.from_pretrained(directory)
# Copy to the device and use FP16.
assert torch.cuda.is_available()
device = torch.device("cuda")
model.to(device)
model.eval()
model.half()
# Create inputs (from the BERT example page).
input = tokenizer('In Italy, pizza served in formal settings is presented unsliced.',
'The sky is blue due to the shorter wavelength of blue light.',
return_tensors='pt').to(device)
label = torch.LongTensor([1]).to(device)
# Run the model.
with torch.no_grad():
output = model(**input, labels=label)
print(output)
```
# Original code
The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
|
scasutt/Prototype_training | scasutt | 2022-01-04T14:59:34Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Prototype_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training
This model is a fine-tuned version of [scasutt/Prototype_training](https://huggingface.co/scasutt/Prototype_training) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3719
- Wer: 0.4626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3853 | 1.47 | 100 | 0.3719 | 0.4626 |
| 0.3867 | 2.94 | 200 | 0.3719 | 0.4626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Khanh/bert-base-multilingual-cased-finetuned-squad | Khanh | 2022-01-04T14:51:33Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1782 | 1.0 | 579 | 0.5258 |
| 0.4938 | 2.0 | 1158 | 0.4639 |
| 0.32 | 3.0 | 1737 | 0.4919 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
sshasnain/wav2vec2-xls-r-timit-trainer | sshasnain | 2022-01-04T14:49:41Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-timit-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-timit-trainer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1064
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5537 | 4.03 | 500 | 0.6078 | 1.0 |
| 0.5444 | 8.06 | 1000 | 0.4990 | 0.9994 |
| 0.3744 | 12.1 | 1500 | 0.5530 | 1.0 |
| 0.2863 | 16.13 | 2000 | 0.6401 | 1.0 |
| 0.2357 | 20.16 | 2500 | 0.6485 | 1.0 |
| 0.1933 | 24.19 | 3000 | 0.7448 | 0.9994 |
| 0.162 | 28.22 | 3500 | 0.7502 | 1.0 |
| 0.1325 | 32.26 | 4000 | 0.7801 | 1.0 |
| 0.1169 | 36.29 | 4500 | 0.8334 | 1.0 |
| 0.1031 | 40.32 | 5000 | 0.8269 | 1.0 |
| 0.0913 | 44.35 | 5500 | 0.8432 | 1.0 |
| 0.0793 | 48.39 | 6000 | 0.8738 | 1.0 |
| 0.0694 | 52.42 | 6500 | 0.8897 | 1.0 |
| 0.0613 | 56.45 | 7000 | 0.8966 | 1.0 |
| 0.0548 | 60.48 | 7500 | 0.9398 | 1.0 |
| 0.0444 | 64.51 | 8000 | 0.9548 | 1.0 |
| 0.0386 | 68.55 | 8500 | 0.9647 | 1.0 |
| 0.0359 | 72.58 | 9000 | 0.9901 | 1.0 |
| 0.0299 | 76.61 | 9500 | 1.0151 | 1.0 |
| 0.0259 | 80.64 | 10000 | 1.0526 | 1.0 |
| 0.022 | 84.67 | 10500 | 1.0754 | 1.0 |
| 0.0189 | 88.71 | 11000 | 1.0688 | 1.0 |
| 0.0161 | 92.74 | 11500 | 1.0914 | 1.0 |
| 0.0138 | 96.77 | 12000 | 1.1064 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
NikolajMunch/danish-emotion-classification | NikolajMunch | 2022-01-04T12:14:46Z | 28 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment",
"emotion",
"danish",
"da",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
widget:
- text: "Hold da op! Kan det virkelig passe?"
language:
- "da"
tags:
- sentiment
- emotion
- danish
---
# **-- EMODa --**
## BERT model for Danish multi-class emotion classification
Classifies a Danish sentence into one of 6 different emotions:
| Danish emotion | Ekman's emotion |
| ----- | ----- |
| 😞 **Afsky** | Disgust |
| 😨 **Frygt** | Fear |
| 😄 **Glæde** | Joy |
| 😱 **Overraskelse** | Surprise |
| 😢 **Tristhed** | Sadness |
| 😠 **Vrede** | Anger |
# How to use
```python
from transformers import pipeline
model_path = "NikolajMunch/danish-emotion-classification"
classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
prediction = classifier("Jeg er godt nok ked af at mine SMS'er er slettet")
print(prediction)
# [{'label': 'Tristhed', 'score': 0.9725030660629272}]
```
or
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("NikolajMunch/danish-emotion-classification")
model = AutoModelForSequenceClassification.from_pretrained("NikolajMunch/danish-emotion-classification")
```
|
pierreguillou/bert-base-cased-squad-v1.1-portuguese | pierreguillou | 2022-01-04T09:57:53Z | 2,742 | 35 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"bert-base",
"pt",
"dataset:brWaC",
"dataset:squad",
"dataset:squad_v1_pt",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: pt
license: mit
tags:
- question-answering
- bert
- bert-base
- pytorch
datasets:
- brWaC
- squad
- squad_v1_pt
metrics:
- squad
widget:
- text: "Quando começou a pandemia de Covid-19 no mundo?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
- text: "Onde foi descoberta a Covid-19?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
---
# Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1

## Introduction
The model was trained on the SQUAD v1.1 dataset in Portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) on Google Colab.
The language model used is the [BERTimbau Base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (aka "bert-base-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
## Information on the method used
All the information is in the blog post: [NLP | Modelo de Question Answering em qualquer idioma baseado no BERT base (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-modelo-de-question-answering-em-qualquer-idioma-baseado-no-bert-base-estudo-de-caso-em-12093d385e78)
## Notebooks in Google Colab & GitHub
- Google Colab: [colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb](https://colab.research.google.com/drive/18ueLdi_V321Gz37x4gHq8mb4XZSGWfZx?usp=sharing)
- GitHub: [colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb)
## Performance
The results obtained are the following:
```
f1 = 82.50
exact match = 70.49
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
context = r"""
A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19,
uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2).
A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China,
em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano.
Acredita-se que o vírus tenha uma origem zoonótica, porque os primeiros casos confirmados
tinham principalmente ligações ao Mercado Atacadista de Frutos do Mar de Huanan, que também vendia animais vivos.
Em 11 de março de 2020, a Organização Mundial da Saúde declarou o surto uma pandemia. Até 8 de fevereiro de 2021,
pelo menos 105 743 102 casos da doença foram confirmados em pelo menos 191 países e territórios,
com cerca de 2 308 943 mortes e 58 851 440 pessoas curadas.
"""
model_name = 'pierreguillou/bert-base-cased-squad-v1.1-portuguese'
nlp = pipeline("question-answering", model=model_name)
question = "Quando começou a pandemia de Covid-19 no mundo?"
result = nlp(question=question, context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
# Answer: '1 de dezembro de 2019', score: 0.713, start: 328, end: 349
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-base-cased-squad-v1.1-portuguese")
model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-base-cased-squad-v1.1-portuguese")
```
Or just clone the model repo:
```bash
git lfs install
git clone https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## Limitations and bias
The training data used for this model comes from the Portuguese SQuAD dataset. It could contain a lot of unfiltered content, which is far from neutral, as well as biases.
## Author
Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-modelo-de-question-answering-em-qualquer-idioma-baseado-no-bert-base-estudo-de-caso-em-12093d385e78#c572)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/), [Google Colab](https://colab.research.google.com/) and [AI Lab](https://ailab.unb.br/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021bertbasecasedsquadv11portuguese,
title={Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
``` |
pierreguillou/bert-large-cased-pt-lenerbr | pierreguillou | 2022-01-04T08:52:43Z | 57 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"pt",
"dataset:pierreguillou/lener_br_finetuning_language_model",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language:
- pt
tags:
- generated_from_trainer
datasets:
- pierreguillou/lener_br_finetuning_language_model
model-index:
- name: checkpoints
results:
- task:
name: Fill Mask
type: fill-mask
dataset:
name: pierreguillou/lener_br_finetuning_language_model
type: pierreguillou/lener_br_finetuning_language_model
metrics:
- name: Loss
type: loss
value: 1.127950
widget:
- text: "Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, competência que não lhe pertence, com evidente transgressão ao princípio constitucional da separação de poderes."
---
## (BERT large) Language modeling in the legal domain in Portuguese (LeNER-Br)
**bert-large-cased-pt-lenerbr** is a Language Model in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [BERTimbau large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective.
You can check as well the [version base of this model](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr).
## Widget & APP
You can test this model in the widget on this page.
## Blog post
This language model is used to get a NER model on the Portuguese judicial domain. You can check the fine-tuned NER model at [pierreguillou/ner-bert-large-cased-pt-lenerbr](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr).
All information and links are in this blog post: [NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Using the model for inference in production
````
# install pytorch: check https://pytorch.org/
# !pip install transformers
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-pt-lenerbr")
model = AutoModelForMaskedLM.from_pretrained("pierreguillou/bert-large-cased-pt-lenerbr")
````
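On top of the loading code above, a fill-mask call is sketched below; this is an assumption based on the record's `fill-mask` pipeline tag, and the example sentence reuses (a shortened form of) the widget text from this card.
````
# Minimal fill-mask sketch (assumption: standard fill-mask pipeline);
# the example sentence comes from this card's widget.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pierreguillou/bert-large-cased-pt-lenerbr")
preds = fill_mask("Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha.")
print([p["token_str"] for p in preds])
````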
## Training procedure
## Notebook
The notebook of finetuning ([Finetuning_language_model_BERtimbau_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/Finetuning_language_model_BERtimbau_LeNER_Br.ipynb)) is in github.
### Training results
````
Num examples = 3227
Num Epochs = 5
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 4
Total optimization steps = 2015
Step Training Loss Validation Loss
100 1.616700 1.366015
200 1.452000 1.312473
300 1.431100 1.253055
400 1.407500 1.264705
500 1.301900 1.243277
600 1.317800 1.233684
700 1.319100 1.211826
800 1.303800 1.190818
900 1.262800 1.171898
1000 1.235900 1.146275
1100 1.221900 1.149027
1200 1.226200 1.127950
1300 1.201700 1.172729
1400 1.198200 1.145363
```` |
Ayham/albert_gpt2_Full_summarization_cnndm | Ayham | 2022-01-03T23:42:44Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_gpt2_Full_summarization_cnndm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_gpt2_Full_summarization_cnndm
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
junnyu/roformer_chinese_small | junnyu | 2022-01-03T15:44:37Z | 493 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"tf2.0",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TensorFlow 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_small")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_small")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天气||心情||感觉||环境||下午]很好,我[要||想||就||可以||去]去公园玩。
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_small")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_small")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0 今天[天气||心情||感觉||环境||下午]很好,我[要||想||就||可以||去]去公园玩。
```
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
hogger32/distilbert-base-uncased-finetuned-squad | hogger32 | 2022-01-03T15:39:48Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7004
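No usage example is given in the card; the sketch below assumes the standard `question-answering` pipeline, with an illustrative question/context pair.
```python
# Minimal usage sketch (assumption: standard question-answering pipeline);
# the question and context are illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="hogger32/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```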
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.316 | 1.0 | 2363 | 2.0234 |
| 2.0437 | 2.0 | 4726 | 1.7881 |
| 1.9058 | 3.0 | 7089 | 1.7004 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
deepdml/wav2vec2-base-timit-demo-colab | deepdml | 2022-01-03T15:04:23Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4798
- Wer: 0.3474
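The card has no inference example; below is a minimal sketch assuming the standard `automatic-speech-recognition` pipeline, where the audio file path is illustrative and should point to 16 kHz speech.
```python
# Minimal transcription sketch (assumption: standard ASR pipeline);
# "sample.wav" is an illustrative path to a 16 kHz mono recording.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="deepdml/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav")["text"])
```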
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5229 | 4.0 | 500 | 1.6557 | 1.0422 |
| 0.6618 | 8.0 | 1000 | 0.4420 | 0.4469 |
| 0.2211 | 12.0 | 1500 | 0.4705 | 0.4002 |
| 0.1281 | 16.0 | 2000 | 0.4347 | 0.3688 |
| 0.0868 | 20.0 | 2500 | 0.4653 | 0.3590 |
| 0.062 | 24.0 | 3000 | 0.4747 | 0.3519 |
| 0.0472 | 28.0 | 3500 | 0.4798 | 0.3474 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ronanki/xlmr_02-02-2022 | ronanki | 2022-01-03T13:48:37Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/xlmr_02-02-2022
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/xlmr_02-02-2022')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_02-02-2022')
model = AutoModel.from_pretrained('ronanki/xlmr_02-02-2022')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_02-02-2022)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
impyadav/GPT2-FineTuned-Hinglish-Song-Generation | impyadav | 2022-01-03T11:33:54Z | 51 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | GPT-2 model fine-tuned on Custom old Hindi songs (Hinglish) for text-generation task (AI Lyricist)
language:
- Hindi
- Hinglish
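A minimal generation sketch is given below, assuming the standard `text-generation` pipeline; the prompt and generation settings are illustrative only.
```python
# Minimal generation sketch (assumption: standard text-generation pipeline);
# the prompt and settings are illustrative only.
from transformers import pipeline

lyricist = pipeline("text-generation", model="impyadav/GPT2-FineTuned-Hinglish-Song-Generation")
print(lyricist("tere bina", max_length=50, num_return_sequences=1)[0]["generated_text"])
```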
|
hiiamsid/sentence_similarity_hindi | hiiamsid | 2022-01-03T11:25:33Z | 236 | 6 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"hi",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
language:
- hi
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hiiamsid/sentence_similarity_hindi
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hiiamsid/sentence_similarity_hindi')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hiiamsid/sentence_similarity_hindi')
model = AutoModel.from_pretrained('hiiamsid/sentence_similarity_hindi')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
```
cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
0.825825032,0.8227195932,0.8127990959,0.8214681478,0.8111641963,0.8194870279,0.8096042841,0.8061808483
```
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hiiamsid/sentence_similarity_hindi)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 341 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 137,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Model: [setu4993/LaBSE](https://huggingface.co/setu4993/LaBSE)
- Sentence Transformers: [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
|
vdivya/wav2vec2-base-timit-demo-colab | vdivya | 2022-01-03T09:51:04Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4630
- Wer: 0.3399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4454 | 4.0 | 500 | 1.2920 | 0.9381 |
| 0.5869 | 8.0 | 1000 | 0.4634 | 0.4297 |
| 0.2216 | 12.0 | 1500 | 0.4481 | 0.3778 |
| 0.1283 | 16.0 | 2000 | 0.4651 | 0.3741 |
| 0.0872 | 20.0 | 2500 | 0.4762 | 0.3548 |
| 0.0635 | 24.0 | 3000 | 0.4495 | 0.3513 |
| 0.0482 | 28.0 | 3500 | 0.4630 | 0.3399 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
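The card does not include inference code. A minimal transcription sketch is given below; it assumes the repository ships a `Wav2Vec2Processor` alongside the fine-tuned model, that `librosa` is installed, and that the input audio is 16 kHz mono (the file path is illustrative).
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("vdivya/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("vdivya/wav2vec2-base-timit-demo-colab")

# Load a 16 kHz mono waveform and run CTC decoding.
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```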
|
huggingtweets/chheplo | huggingtweets | 2022-01-03T05:23:33Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/chheplo/1641187409438/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477561163961438208/7HnhxOo__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratik Desai</div>
<div style="text-align: center; font-size: 14px;">@chheplo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratik Desai.
| Data | Pratik Desai |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 362 |
| Short tweets | 139 |
| Tweets kept | 2747 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4tv1dtfa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chheplo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p7d97s36) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p7d97s36/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chheplo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pinecone/mpnet-retriever-squad2 | pinecone | 2022-01-03T02:42:15Z | 6 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pinecone/mpnet-retriever-squad2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/mpnet-retriever-squad2')
embeddings = model.encode(sentences)
print(embeddings)
```
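Because this model was trained as a SQuAD2-style retriever, a typical use is to embed a question together with candidate passages and rank the passages. A minimal sketch follows (the passages are illustrative only):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pinecone/mpnet-retriever-squad2')

query_embedding = model.encode("Where was Marie Curie born?", convert_to_tensor=True)
passage_embeddings = model.encode([
    "Marie Curie was born in Warsaw, in what was then the Kingdom of Poland.",
    "The Eiffel Tower is located in Paris, France.",
], convert_to_tensor=True)

# Rank the passages by cosine similarity to the query.
print(util.semantic_search(query_embedding, passage_embeddings, top_k=2))
```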
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-squad2')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-squad2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pinecone/mpnet-retriever-squad2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 5429 with parameters:
```
{'batch_size': 24}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 542,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vinhood/chefberto-italian-cased | vinhood | 2022-01-02T20:24:22Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: it
license: mit
widget:
- text: "La pasta più semplice è aglio, [MASK] e peperoncino."
- text: "Per fare la carbonara servono le [MASK]."
- text: "A tavola non può mancare del buon [MASK]."
---
# ChefBERTo 👨🍳
**chefberto-italian-cased** is a BERT model obtained by MLM adaptive-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on Italian cooking recipes, approximately 50k sentences (2.6M words).
**Author:** Cristiano De Nobili ([@denocris](https://twitter.com/denocris) on Twitter, [LinkedIn](https://www.linkedin.com/in/cristiano-de-nobili/)) for [VINHOOD](https://www.vinhood.com/en/).
<p>
<img src="https://drive.google.com/uc?export=view&id=1u5aY2wKu-X5DAzbOq7rsgGFW5_lGUAQn" width="400"> </br>
</p>
# Perplexity
Test set: 9k sentences about food.
| Model | Perplexity |
| ------ | ------ |
| chefberto-italian-cased | **1.84** |
| bert-base-italian-xxl-cased | 2.85 |
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "vinhood/chefberto-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
``` |
juierror/wav2vec2-large-xls-r-thai-test | juierror | 2022-01-02T14:18:08Z | 64 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-thai-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
stefan-jo/bert-finetuned-ner | stefan-jo | 2022-01-02T13:21:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9378727634194831
- name: Recall
type: recall
value: 0.9527095254123191
- name: F1
type: f1
value: 0.9452329270328937
- name: Accuracy
type: accuracy
value: 0.9866515570730559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9379
- Recall: 0.9527
- F1: 0.9452
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.088 | 1.0 | 1756 | 0.0625 | 0.9203 | 0.9399 | 0.9300 | 0.9835 |
| 0.0383 | 2.0 | 3512 | 0.0614 | 0.9348 | 0.9460 | 0.9404 | 0.9858 |
| 0.0209 | 3.0 | 5268 | 0.0619 | 0.9379 | 0.9527 | 0.9452 | 0.9867 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
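The card does not show inference code. A minimal sketch with the token-classification pipeline follows (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="stefan-jo/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```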
|
AlekseyKulnevich/Pegasus-HeaderGeneration | AlekseyKulnevich | 2022-01-02T12:36:45Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | **Usage with HuggingFace Transformers for the header generation task**
```
from transformers import PegasusTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
input_text = "..."  # your input document
input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, pad_to_max_length=True,
truncation=True, padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']
headers = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=10)
headers = tokenizer.batch_decode(headers, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
headers = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=20)
tokenizer.batch_decode(headers, skip_special_tokens=True)
```
output:
1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation in the midlatitudes*
4. *how climate change will expand the range of tropical cyclones?*
5. *the impact of climate change on tropical cyclones in the midlatitudes*
6. *global warming will expand the range of tropical cyclones*
7. *climate change will expand the range of tropical cyclones*
8. *the impact of climate change on tropical cyclone formation*
9. *the impact of human induced climate change on tropical cyclone formation*
10. *tropical cyclones in the mid-latitudes*
11. *climate change will expand the range of tropical cyclones in the middle latitudes*
12. *global warming will expand the range of tropical cyclones, a new study says*
13. *the impacts of climate change on tropical cyclones*
14. *the impact of global warming on tropical cyclones*
15. *climate change will expand the range of tropical cyclones, a new study says*
16. *global warming will expand the range of tropical cyclones in the middle latitudes*
17. *the effects of climate change on tropical cyclones*
18. *how climate change will expand the range of tropical cyclones*
19. *climate change will expand the range of tropical cyclones over the equator*
20. *the impact of human induced climate change on tropical cyclones.*
You can also experiment with the following parameters of the `generate` method:
- top_k
- top_p
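For example, a sampled decoding configuration could look like the following sketch (the values are illustrative, not tuned by the author):
```
headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         do_sample=True,
                         top_k=50,
                         top_p=0.95,
                         num_return_sequences=5)
```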
[**Meaning of parameters to generate text you can see here**](https://huggingface.co/blog/how-to-generate) |
AlekseyKulnevich/Pegasus-QuestionGeneration | AlekseyKulnevich | 2022-01-02T12:24:37Z | 29 | 1 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | **Usage with HuggingFace Transformers for the question generation task**
```
from transformers import PegasusTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
input_text = "..."  # your input document
input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, pad_to_max_length=True,
truncation=True, padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']
questions = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=10)
questions = tokenizer.batch_decode(questions, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
questions = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```
output:
1. *What is the impact of human induced climate change on tropical cyclones?*
2. *What is the impact of climate change on tropical cyclones?*
3. *What is the impact of human induced climate change on tropical cyclone formation?*
4. *How many tropical cyclones will occur in the mid-latitudes?*
5. *What is the impact of climate change on the formation of tropical cyclones?*
6. *Is it possible for a tropical cyclone to form in the middle latitudes?*
7. *How many tropical cyclones will be formed in the mid-latitudes?*
8. *How many tropical cyclones will there be in the mid-latitudes?*
9. *How many tropical cyclones will form in the mid-latitudes?*
10. *What is the impact of global warming on tropical cyclones?*
11. *How long does it take for a tropical cyclone to form?*
12. *What are the impacts of climate change on tropical cyclones?*
13. *What are the effects of climate change on tropical cyclones?*
14. *How many tropical cyclones will be able to form in the middle latitudes?*
15. *What is the impact of climate change on tropical cyclone formation?*
16. *What is the effect of climate change on tropical cyclones?*
17. *How long does it take for a tropical cyclone to form in the middle latitude?*
18. *How many tropical cyclones will occur in the middle latitudes?*
19. *How many tropical cyclones are likely to form in the midlatitudes?*
20. *How many tropical cyclones are likely to form in the middle latitudes?*
21. *How many tropical cyclones are expected to form in the midlatitudes?*
22. *How many tropical cyclones will be formed in the middle latitudes?*
23. *How many tropical cyclones will there be in the middle latitudes?*
24. *How long will it take for a tropical cyclone to form in the middle latitude?*
25. *What is the impact of global warming on tropical cyclone formation?*
26. *How many tropical cyclones will form in the middle latitudes?*
27. *How many tropical cyclones can we expect to form in the middle latitudes?*
28. *Is it possible for a tropical cyclone to form in the middle latitude?*
29. *What is the effect of climate change on tropical cyclone formation?*
30. *What are the effects of climate change on tropical cyclone formation?*
You can also experiment with the following parameters of the `generate` method:
- top_k
- top_p
[**Meaning of parameters to generate text you can see here**](https://huggingface.co/blog/how-to-generate) |
addy88/perceiver_imdb | addy88 | 2022-01-02T11:20:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"perceiver",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ### How to use
Here is how to use this model in PyTorch:
```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = PerceiverTokenizer.from_pretrained("addy88/perceiver_imdb")
model = PerceiverForMaskedLM.from_pretrained("addy88/perceiver_imdb").to(device)
text = "This is an incomplete sentence where some words are missing."
# prepare input
encoding = tokenizer(text, padding="max_length", return_tensors="pt")
# mask " missing.". Note that the model performs much better if the masked span starts with a space.
encoding.input_ids[0, 52:61] = tokenizer.mask_token_id
inputs, input_mask = encoding.input_ids.to(device), encoding.attention_mask.to(device)
# forward pass
outputs = model(inputs=inputs, attention_mask=input_mask)
logits = outputs.logits
masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1)
print(tokenizer.decode(masked_tokens_predictions))
# should print " missing."
``` |
LeoFeng/ChineseSequenceClassification | LeoFeng | 2022-01-02T09:13:10Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | An article classifier trained on the THUC dataset, supporting 14 categories. |
ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small | ykliu1892 | 2022-01-02T04:09:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-zh-de-tuned-Tatoeba-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-zh-de-tuned-Tatoeba-small
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-de](https://huggingface.co/Helsinki-NLP/opus-mt-zh-de) on a refined subset of the Tatoeba German-Chinese corpus (https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README.md).
It achieves the following results on the evaluation set:
- Loss: 2.2703
- Bleu: 16.504
- Gen Len: 16.6531
## Model description
More information needed
## Intended uses & limitations
The prefix "将中文翻译成德语" ("translate Chinese into German") was used during fine-tuning; using the same prefix is also recommended at prediction time.
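A minimal prediction sketch with the recommended prefix follows; how the prefix is joined to the source sentence is not documented, so the formatting below (and the example sentence) is an assumption.
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small")
model = MarianMTModel.from_pretrained("ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small")

# Prepend the fine-tuning prefix to the Chinese source sentence (joining style is an assumption).
text = "将中文翻译成德语" + "我今天想喝咖啡。"
batch = tokenizer([text], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```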
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.7229 | 0.24 | 16000 | 2.5605 | 14.1956 | 16.2206 |
| 2.5988 | 0.49 | 32000 | 2.4447 | 14.8619 | 16.2726 |
| 2.515 | 0.73 | 48000 | 2.3817 | 15.3212 | 16.2823 |
| 2.4683 | 0.97 | 64000 | 2.3367 | 15.9043 | 16.7138 |
| 2.3873 | 1.22 | 80000 | 2.3115 | 16.1037 | 16.6369 |
| 2.3792 | 1.46 | 96000 | 2.2919 | 16.2957 | 16.6304 |
| 2.3626 | 1.7 | 112000 | 2.2790 | 16.2995 | 16.6235 |
| 2.3353 | 1.95 | 128000 | 2.2703 | 16.504 | 16.6531 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
LACAI/DialoGPT-small-SGD | LACAI | 2022-01-02T04:08:07Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | Base model: [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small)
Fine-tuned for dialogue response generation on the [Schema Guided Dialogue Dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue) (Rastogi et al., 2019).
Three additional special tokens were added during the fine-tuning process (see the usage sketch after the list):
- <|pad|> padding token
- <|user|> speaker control token to prompt user responses
- <|system|> speaker control token to prompt system responses
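A minimal generation sketch follows; the prompt format (a user turn wrapped in `<|user|>` followed by `<|system|>` to request a system response) is an assumption based on the token descriptions above, not documented by the authors.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LACAI/DialoGPT-small-SGD")
model = AutoModelForCausalLM.from_pretrained("LACAI/DialoGPT-small-SGD")

# Hypothetical prompt format using the speaker control tokens described above.
prompt = "<|user|> I need a restaurant reservation for two tonight. <|system|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 40,
    pad_token_id=tokenizer.pad_token_id,  # assumes <|pad|> is registered as the pad token
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
|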
huggingtweets/michaeldrummey-theegaycomrade-vpukhanov | huggingtweets | 2022-01-01T19:30:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/michaeldrummey-theegaycomrade-vpukhanov/1641065423081/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1413939279127011331/dVGeqlNN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468996975404228610/Etj-urSz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471632802894389249/2ubdnotf_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vyacheslav Pukhanov & Michael Drummey & oh no zach had a thought</div>
<div style="text-align: center; font-size: 14px;">@michaeldrummey-theegaycomrade-vpukhanov</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Vyacheslav Pukhanov & Michael Drummey & oh no zach had a thought.
| Data | Vyacheslav Pukhanov | Michael Drummey | oh no zach had a thought |
| --- | --- | --- | --- |
| Tweets downloaded | 308 | 3246 | 3248 |
| Retweets | 50 | 231 | 55 |
| Short tweets | 63 | 1133 | 640 |
| Tweets kept | 195 | 1882 | 2553 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1udeu111/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @michaeldrummey-theegaycomrade-vpukhanov's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3h79hg6v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3h79hg6v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/michaeldrummey-theegaycomrade-vpukhanov')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
s3h/arabert-gec-v2-2 | s3h | 2022-01-01T18:50:19Z | 3 | 0 | transformers | [
"transformers",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: s3h/arabic-t5-small-finetuned-gec
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s3h/arabic-t5-small-finetuned-gec
This model is a fine-tuned version of [flax-community/arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0930
- Validation Loss: 0.9132
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0930 | 0.9132 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
s3h/arabic-t5-small-finetuned-gec | s3h | 2022-01-01T18:36:08Z | 9 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: s3h/arabic-t5-small-finetuned-gec
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s3h/arabic-t5-small-finetuned-gec
This model is a fine-tuned version of [flax-community/arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0930
- Validation Loss: 0.9132
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0930 | 0.9132 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
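The card does not show inference code. A minimal sketch with the text2text-generation pipeline follows; the expected input/output format of this GEC fine-tune is not documented, and the checkpoint appears to ship TensorFlow weights, so both the `framework="tf"` argument and the example sentence are assumptions.
```python
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="s3h/arabic-t5-small-finetuned-gec",
    framework="tf",  # assumption: the repository contains TensorFlow weights
)
# Illustrative Arabic sentence that may contain grammatical errors.
print(corrector("انا ذهب الى المدرسه", max_length=64))
```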
|
avichr/heBERT_sentiment_analysis | avichr | 2021-12-31T16:08:22Z | 17,326 | 26 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1810.04805",
"arxiv:2102.01909",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
HeBERT is a Hebrew pre-trained language model. It is based on Google's BERT architecture with a BERT-Base configuration [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>
HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences
3. Emotion UGC data was collected for the purpose of this study. (described below)
We evaluated the model on emotion recognition and sentiment analysis as downstream tasks.
### Emotion UGC Data Description
Our User-Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020. The total data size is ~150 MB, including over 7 million words and 350K sentences.
4,000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity. <br>
To validate the annotation, we measured inter-rater agreement on the emotion of each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105) and kept only sentences with alpha > 0.7. Note that while we found general agreement between raters on emotions like happiness, trust, and disgust, a few emotions show general disagreement, apparently due to the difficulty of identifying them in text (e.g. expectation and surprise).
### Performance
#### sentiment analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| natural | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
## How to use
### For masked-LM model (can be fine-tunned to any down-stream task)
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModel.from_pretrained("avichr/heBERT")
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="avichr/heBERT",
tokenizer="avichr/heBERT"
)
fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.")
```
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
>>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
[[{'label': 'natural', 'score': 0.9978172183036804},
{'label': 'positive', 'score': 0.0014792329166084528},
{'label': 'negative', 'score': 0.0007035882445052266}]]
>>> sentiment_analysis('קפה זה טעים')
[[{'label': 'natural', 'score': 0.00047328314394690096},
{'label': 'possitive', 'score': 0.9994067549705505},
{'label': 'negetive', 'score': 0.00011996887042187154}]]
>>> sentiment_analysis('אני לא אוהב את העולם')
[[{'label': 'natural', 'score': 9.214012970915064e-05},
{'label': 'possitive', 'score': 8.876807987689972e-05},
{'label': 'negetive', 'score': 0.9998190999031067}]]
```
Our model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)
## Stay tuned!
We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br>
Our git: https://github.com/avichaychriqui/HeBERT
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={arXiv preprint arXiv:2102.01909},
year={2021}
}
```
|
nwl/DialoGPT-small-enhypen | nwl | 2021-12-31T13:38:51Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
|
airKlizz/mt5-base-wikinewssum-english-1000 | airKlizz | 2021-12-31T12:29:07Z | 11 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-1000
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
- Rouge1: 7.7389
- Rouge2: 3.1606
- Rougel: 6.3317
- Rougelsum: 7.2487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 125 | 2.6981 | 7.1504 | 2.6253 | 5.8261 | 6.7427 |
| No log | 2.0 | 250 | 2.5597 | 7.4666 | 2.9362 | 6.0965 | 6.9699 |
| No log | 3.0 | 375 | 2.5145 | 7.4599 | 2.9449 | 6.0941 | 6.9734 |
| No log | 4.0 | 500 | 2.4904 | 7.5063 | 2.975 | 6.137 | 7.0027 |
| No log | 5.0 | 625 | 2.4904 | 7.6027 | 3.0582 | 6.2161 | 7.0832 |
| No log | 6.0 | 750 | 2.4801 | 7.7601 | 3.1916 | 6.3689 | 7.2686 |
| No log | 7.0 | 875 | 2.4737 | 7.7162 | 3.1332 | 6.3113 | 7.2283 |
| No log | 8.0 | 1000 | 2.4724 | 7.7389 | 3.1606 | 6.3317 | 7.2487 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
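The card does not show inference code. A minimal summarization sketch follows (the input text and generation settings are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="airKlizz/mt5-base-wikinewssum-english-1000")

article = (
    "The city council voted on Tuesday to approve the new transit plan, "
    "which will add three bus lines and extend tram service by 2023."
)
print(summarizer(article, max_length=128, min_length=16, no_repeat_ngram_size=3))
```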
|
airKlizz/mt5-base-wikinewssum-english-100 | airKlizz | 2021-12-31T12:02:27Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-100
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6225
- Rouge1: 3.909
- Rouge2: 0.9312
- Rougel: 3.3835
- Rougelsum: 3.7786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 0.96 | 12 | 14.4949 | 2.7398 | 0.7181 | 2.491 | 2.6561 |
| No log | 1.96 | 24 | 10.5056 | 4.4428 | 1.4293 | 3.8469 | 4.2869 |
| No log | 2.96 | 36 | 8.9856 | 4.1179 | 1.229 | 3.5726 | 3.9693 |
| No log | 3.96 | 48 | 7.7950 | 3.9217 | 1.1339 | 3.4256 | 3.7905 |
| No log | 4.96 | 60 | 7.0734 | 3.8004 | 1.0326 | 3.3246 | 3.6766 |
| No log | 5.96 | 72 | 6.7897 | 3.6351 | 0.9162 | 3.1839 | 3.5149 |
| No log | 6.96 | 84 | 6.6610 | 3.7486 | 0.8829 | 3.2583 | 3.6193 |
| No log | 7.96 | 96 | 6.6225 | 3.909 | 0.9312 | 3.3835 | 3.7786 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
maher13/arabic-iti | maher13 | 2021-12-31T09:05:42Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: arabic-iti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabic-iti
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0154
- Wer: 0.6350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0355 | 2.36 | 400 | 3.0286 | 1.0 |
| 0.7999 | 4.73 | 800 | 0.8623 | 0.8067 |
| 0.4485 | 7.1 | 1200 | 0.6920 | 0.6651 |
| 0.3719 | 9.47 | 1600 | 0.6361 | 0.6591 |
| 0.3401 | 11.83 | 2000 | 0.6967 | 0.6497 |
| 0.3222 | 14.2 | 2400 | 0.6697 | 0.6246 |
| 0.3094 | 16.57 | 2800 | 0.7282 | 0.6537 |
| 0.2822 | 18.93 | 3200 | 0.8019 | 0.6816 |
| 0.2446 | 21.3 | 3600 | 0.7622 | 0.6608 |
| 0.235 | 23.67 | 4000 | 0.8644 | 0.6780 |
| 0.2362 | 26.04 | 4400 | 0.9083 | 0.6710 |
| 0.206 | 28.4 | 4800 | 0.8243 | 0.6598 |
| 0.1765 | 30.77 | 5200 | 0.8614 | 0.6647 |
| 0.1458 | 33.14 | 5600 | 0.8907 | 0.6447 |
| 0.1544 | 35.5 | 6000 | 0.9059 | 0.6523 |
| 0.2402 | 18.88 | 6400 | 0.9639 | 0.6970 |
| 0.2026 | 20.06 | 6800 | 0.9868 | 0.6817 |
| 0.185 | 21.24 | 7200 | 1.0043 | 0.6936 |
| 0.1951 | 22.42 | 7600 | 0.8918 | 0.6795 |
| 0.1933 | 23.6 | 8000 | 0.9367 | 0.6826 |
| 0.2272 | 24.78 | 8400 | 0.8540 | 0.6792 |
| 0.1922 | 25.96 | 8800 | 0.8983 | 0.6657 |
| 0.1547 | 27.14 | 9200 | 0.9742 | 0.6747 |
| 0.1579 | 28.32 | 9600 | 0.9066 | 0.6668 |
| 0.1642 | 29.5 | 10000 | 0.9440 | 0.6790 |
| 0.1726 | 30.68 | 10400 | 0.9654 | 0.6813 |
| 0.1656 | 31.86 | 10800 | 0.9880 | 0.6801 |
| 0.1741 | 33.04 | 11200 | 0.9707 | 0.6584 |
| 0.1494 | 34.22 | 11600 | 0.9801 | 0.6709 |
| 0.1482 | 35.4 | 12000 | 0.9258 | 0.6646 |
| 0.14 | 36.58 | 12400 | 0.9802 | 0.6635 |
| 0.142 | 37.76 | 12800 | 0.9268 | 0.6524 |
| 0.1281 | 38.94 | 13200 | 0.9615 | 0.6587 |
| 0.1051 | 40.12 | 13600 | 0.9721 | 0.6495 |
| 0.1074 | 41.3 | 14000 | 1.0045 | 0.6582 |
| 0.0879 | 42.48 | 14400 | 1.0290 | 0.6516 |
| 0.1015 | 43.66 | 14800 | 1.0514 | 0.6556 |
| 0.0932 | 44.84 | 15200 | 1.0287 | 0.6450 |
| 0.1008 | 46.02 | 15600 | 0.9940 | 0.6399 |
| 0.0968 | 47.2 | 16000 | 1.0206 | 0.6368 |
| 0.0858 | 48.38 | 16400 | 1.0452 | 0.6361 |
| 0.0886 | 49.56 | 16800 | 1.0154 | 0.6350 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Muennighoff/SBERT-base-nli-stsb-v2 | Muennighoff | 2021-12-31T07:59:14Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:04Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This model is used in "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
|
NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing | NahedAbdelgaber | 2021-12-31T06:28:07Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-evaluating-student-writing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-evaluating-student-writing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3485 | 1.0 | 878 | 2.0959 |
| 2.1407 | 2.0 | 1756 | 2.0162 |
| 2.0843 | 3.0 | 2634 | 1.9846 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
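A minimal fill-mask sketch follows (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing",
)
print(fill_mask("The student's essay presents a clear [MASK] supported by evidence."))
```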
|
TrLOX/gpt2-tdk | TrLOX | 2021-12-31T02:18:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dgpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dgpt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
davanstrien/flyswot-test | davanstrien | 2021-12-30T16:35:07Z | 0 | 0 | null | [
"onnx",
"region:us"
]
| null | 2022-03-02T23:29:05Z | # flyswot
## Model description
In progress model for detecting 'fake' flysheets
## Intended uses & limitations
Not currently intended for public consumption...
#### Limitations and bias
Not currently intended for public consumption...
## Training data
TODO
## Eval results
|
scasutt/Prototype_training_large_model | scasutt | 2021-12-30T14:40:39Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Prototype_training_large_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training_large_model
This model is a fine-tuned version of [scasutt/Prototype_training_large_model](https://huggingface.co/scasutt/Prototype_training_large_model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2585
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.0545 | 1.47 | 100 | 3.2604 | 1.0 |
| 3.0413 | 2.93 | 200 | 3.2585 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
pinecone/bert-medqp-cross-encoder | pinecone | 2021-12-30T12:11:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | # Med-QP Cross Encoder
Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
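A minimal sketch with the sentence-transformers `CrossEncoder` class follows; it assumes the model scores medical question pairs for semantic similarity / duplicate detection (the question pair is illustrative):
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("pinecone/bert-medqp-cross-encoder")

# Score how likely the two medical questions are to be asking the same thing.
scores = model.predict([
    ["What are the symptoms of diabetes?", "How do I know if I have diabetes?"],
])
print(scores)
```
|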
NahedAbdelgaber/distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing | NahedAbdelgaber | 2021-12-30T06:58:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3408
## Model description
More information needed
## Intended uses & limitations
More information needed
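In the absence of official guidance, a quick fill-mask sketch might look like the following; the masked sentence is an invented example meant only to exercise the masked-language-modeling head.
```python
# Quick sketch; the masked sentence is an invented example.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="NahedAbdelgaber/distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing",
)
for prediction in fill_mask("The author supports the claim with strong [MASK]."):
    print(prediction["token_str"], prediction["score"])
```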
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5869 | 1.0 | 157 | 2.3949 |
| 2.4142 | 2.0 | 314 | 2.3551 |
| 2.3792 | 3.0 | 471 | 2.2840 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
youngjae/bert-finetuned-squad | youngjae | 2021-12-30T04:13:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
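Until the card is completed, here is a rough extractive question-answering sketch; the question and context are made-up examples.
```python
# Rough sketch; question and context are invented examples.
from transformers import pipeline

qa = pipeline("question-answering", model="youngjae/bert-finetuned-squad")
result = qa(
    question="What dataset was used for fine-tuning?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs on a single GPU.",
)
print(result["answer"], result["score"])
```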
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rkmt/wav2vec2-base-timit-demo-colab | rkmt | 2021-12-30T00:39:31Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0280
- Wer: 0.0082
## Model description
More information needed
## Intended uses & limitations
More information needed
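As a placeholder, the high-level ASR pipeline can be used roughly as follows; `sample.wav` stands in for a 16 kHz mono recording, and note that the underlying checkpoint is a fine-tuned HuBERT model despite the wav2vec2-style repository name.
```python
# Placeholder sketch; "sample.wav" stands in for a 16 kHz mono recording, and
# decoding audio files through the pipeline requires ffmpeg to be installed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rkmt/wav2vec2-base-timit-demo-colab",
)
print(asr("sample.wav"))
```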
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1152 | 1.42 | 500 | 0.0416 | 0.0159 |
| 0.0803 | 2.83 | 1000 | 0.0372 | 0.0144 |
| 0.0672 | 4.25 | 1500 | 0.0345 | 0.0119 |
| 0.0564 | 5.67 | 2000 | 0.0338 | 0.0106 |
| 0.0513 | 7.08 | 2500 | 0.0307 | 0.0100 |
| 0.0448 | 8.5 | 3000 | 0.0343 | 0.0098 |
| 0.0374 | 9.92 | 3500 | 0.0300 | 0.0084 |
| 0.0368 | 11.33 | 4000 | 0.0314 | 0.0086 |
| 0.0388 | 12.75 | 4500 | 0.0283 | 0.0089 |
| 0.0277 | 14.16 | 5000 | 0.0302 | 0.0089 |
| 0.0298 | 15.58 | 5500 | 0.0298 | 0.0089 |
| 0.0271 | 17.0 | 6000 | 0.0320 | 0.0098 |
| 0.024 | 18.41 | 6500 | 0.0286 | 0.0088 |
| 0.0236 | 19.83 | 7000 | 0.0284 | 0.0084 |
| 0.0238 | 21.25 | 7500 | 0.0290 | 0.0086 |
| 0.0227 | 22.66 | 8000 | 0.0284 | 0.0093 |
| 0.0198 | 24.08 | 8500 | 0.0280 | 0.0088 |
| 0.0225 | 25.5 | 9000 | 0.0281 | 0.0086 |
| 0.018 | 26.91 | 9500 | 0.0280 | 0.0082 |
| 0.0178 | 28.33 | 10000 | 0.0280 | 0.0082 |
| 0.0209 | 29.75 | 10500 | 0.0280 | 0.0082 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lgris/distilxlsr_bp_4-12 | lgris | 2021-12-30T00:38:04Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"speech",
"pt",
"arxiv:2110.01900",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
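For quick experimentation, a feature-extraction sketch along these lines may work; it assumes the repository includes a feature-extractor config and that the audio is 16 kHz mono, and remember that this model returns hidden states, not transcriptions.
```python
# Sketch for extracting representations (assumptions: the repo has a feature
# extractor config and "exemplo.wav" is a 16 kHz mono recording).
import torch
import librosa
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_name = "lgris/distilxlsr_bp_4-12"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

speech, _ = librosa.load("exemplo.wav", sr=16_000)
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_dim)

print(hidden_states.shape)
```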
|
lgris/distilxlsr_bp_8-12 | lgris | 2021-12-30T00:37:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"speech",
"pt",
"arxiv:2110.01900",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
|
lgris/distilxlsr_bp_8-12-24 | lgris | 2021-12-30T00:37:34Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"speech",
"pt",
"arxiv:2110.01900",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task (the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900)).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
|
SophieTr/distil-pegasus-reddit | SophieTr | 2021-12-29T23:58:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | This is the model so far before time out
|
BigSalmon/InformalToFormalLincoln17 | BigSalmon | 2021-12-29T21:25:31Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln17")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln17")
```
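For completeness, here is a hedged sketch of one way to generate with the prompt format shown further down; the decoding settings are illustrative assumptions, not the author's recommended values.
```python
# Illustrative only: sampling settings are assumptions, and the prompt follows
# the "How To Make Prompt" format shown below.
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln17")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln17")

prompt = (
    "informal english: i am very ready to do just that.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```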
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```` |
pierreguillou/ner-bert-base-cased-pt-lenerbr | pierreguillou | 2021-12-29T19:32:39Z | 108,865 | 15 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"pt",
"dataset:lener_br",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language:
- pt
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: checkpoints
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
metrics:
- name: F1
type: f1
value: 0.8926146010186757
- name: Precision
type: precision
value: 0.8810222036028488
- name: Recall
type: recall
value: 0.9045161290322581
- name: Accuracy
type: accuracy
value: 0.9759397808828684
- name: Loss
type: loss
value: 0.18803243339061737
widget:
- text: "Ao Instituto Médico Legal da jurisdição do acidente ou da residência cumpre fornecer, no prazo de 90 dias, laudo à vítima (art. 5, § 5, Lei n. 6.194/74 de 19 de dezembro de 1974), função técnica que pode ser suprida por prova pericial realizada por ordem do juízo da causa, ou por prova técnica realizada no âmbito administrativo que se mostre coerente com os demais elementos de prova constante dos autos."
- text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
- text: "Dispõe sobre o estágio de estudantes; altera a redação do art. 428 da Consolidação das Leis do Trabalho – CLT, aprovada pelo Decreto-Lei no 5.452, de 1o de maio de 1943, e a Lei no 9.394, de 20 de dezembro de 1996; revoga as Leis nos 6.494, de 7 de dezembro de 1977, e 8.859, de 23 de março de 1994, o parágrafo único do art. 82 da Lei no 9.394, de 20 de dezembro de 1996, e o art. 6o da Medida Provisória no 2.164-41, de 24 de agosto de 2001; e dá outras providências."
---
## (BERT base) NER model in the legal domain in Portuguese (LeNER-Br)
**ner-bert-base-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) on the dataset [LeNER_br](https://huggingface.co/datasets/lener_br) by using a NER objective.
Due to the small size of BERTimbau base and of the finetuning dataset, the model overfitted before reaching the end of training. Here are the overall final metrics on the validation dataset (*note: see the paragraph "Validation metrics by Named Entity" for detailed metrics*):
- **f1**: 0.8926146010186757
- **precision**: 0.8810222036028488
- **recall**: 0.9045161290322581
- **accuracy**: 0.9759397808828684
- **loss**: 0.18803243339061737
Check as well the [large version of this model](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr) with a f1 of 0.908.
**Note**: the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) is a language model that was created through the finetuning of the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective. This first specialization of the language model before finetuning on the NER task improved the model quality a bit. To demonstrate this, here are the results of the NER model finetuned from the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (a non-specialized language model):
- **f1**: 0.8716487228203504
- **precision**: 0.8559286898839138
- **recall**: 0.8879569892473118
- **accuracy**: 0.9755893153732458
- **loss**: 0.1133928969502449
## Blog post
[NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Widget & App
You can test this model in the widget on this page.
You can also use the [NER App](https://huggingface.co/spaces/pierreguillou/ner-bert-pt-lenerbr), which allows comparing the two BERT models (base and large) fine-tuned on the NER task with the legal LeNER-Br dataset.
## Using the model for inference in production
````
# install pytorch: check https://pytorch.org/
# !pip install transformers
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
# parameters
model_name = "pierreguillou/ner-bert-base-cased-pt-lenerbr"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_text = "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
# tokenization
inputs = tokenizer(input_text, max_length=512, truncation=True, return_tensors="pt")
tokens = inputs.tokens()
# get predictions
outputs = model(**inputs).logits
predictions = torch.argmax(outputs, dim=2)
# print predictions
for token, prediction in zip(tokens, predictions[0].numpy()):
print((token, model.config.id2label[prediction]))
````
You can use a pipeline, too. However, it seems to have an issue regarding the max_length of the input sequence.
````
!pip install transformers
import transformers
from transformers import pipeline
model_name = "pierreguillou/ner-bert-base-cased-pt-lenerbr"
ner = pipeline(
"ner",
model=model_name
)
ner(input_text)
````
## Training procedure
### Notebook
The finetuning notebook ([HuggingFace_Notebook_token_classification_NER_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/HuggingFace_Notebook_token_classification_NER_LeNER_Br.ipynb)) is available on GitHub.
### Hyperparameters
#### batch, learning rate...
- per_device_batch_size = 2
- gradient_accumulation_steps = 2
- learning_rate = 2e-5
- num_train_epochs = 10
- weight_decay = 0.01
- optimizer = AdamW
- betas = (0.9,0.999)
- epsilon = 1e-08
- lr_scheduler_type = linear
- seed = 7
#### save model & load best model
- save_total_limit = 2
- logging_steps = 300
- eval_steps = logging_steps
- evaluation_strategy = 'steps'
- logging_strategy = 'steps'
- save_strategy = 'steps'
- save_steps = logging_steps
- load_best_model_at_end = True
- fp16 = True
#### get best model through a metric
- metric_for_best_model = 'eval_f1'
- greater_is_better = True
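For reference, the settings above translate roughly into the following `TrainingArguments`; this is a sketch rather than the exact code from the notebook, and the output directory name is an assumption.
````
# Rough TrainingArguments equivalent of the hyperparameters listed above.
# "./checkpoints" is an assumed output directory; everything else mirrors the list.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./checkpoints",          # assumption
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    num_train_epochs=10,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=7,
    save_total_limit=2,
    logging_steps=300,
    eval_steps=300,
    evaluation_strategy="steps",
    logging_strategy="steps",
    save_strategy="steps",
    save_steps=300,
    load_best_model_at_end=True,
    fp16=True,
    metric_for_best_model="eval_f1",
    greater_is_better=True,
)
````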
### Training results
````
Num examples = 7828
Num Epochs = 10
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 2
Total optimization steps = 19570
Step Training Loss Validation Loss Precision Recall F1 Accuracy
300 0.127600 0.178613 0.722909 0.741720 0.732194 0.948802
600 0.088200 0.136965 0.733636 0.867742 0.795074 0.963079
900 0.078000 0.128858 0.791912 0.838065 0.814335 0.965243
1200 0.077800 0.126345 0.815400 0.865376 0.839645 0.967849
1500 0.074100 0.148207 0.779274 0.895914 0.833533 0.960184
1800 0.059500 0.116634 0.830829 0.868172 0.849090 0.969342
2100 0.044500 0.208459 0.887150 0.816559 0.850392 0.960535
2400 0.029400 0.136352 0.867821 0.851398 0.859531 0.970271
2700 0.025000 0.165837 0.814881 0.878495 0.845493 0.961235
3000 0.038400 0.120629 0.811719 0.893763 0.850768 0.971506
3300 0.026200 0.175094 0.823435 0.882581 0.851983 0.962957
3600 0.025600 0.178438 0.881095 0.886022 0.883551 0.963689
3900 0.041000 0.134648 0.789035 0.916129 0.847846 0.967681
4200 0.026700 0.130178 0.821275 0.903226 0.860303 0.972313
4500 0.018500 0.139294 0.844016 0.875054 0.859255 0.971140
4800 0.020800 0.197811 0.892504 0.873118 0.882705 0.965883
5100 0.019300 0.161239 0.848746 0.888172 0.868012 0.967849
5400 0.024000 0.139131 0.837507 0.913333 0.873778 0.970591
5700 0.018400 0.157223 0.899754 0.864731 0.881895 0.970210
6000 0.023500 0.137022 0.883018 0.873333 0.878149 0.973243
6300 0.009300 0.181448 0.840490 0.900860 0.869628 0.968290
6600 0.019200 0.173125 0.821316 0.896559 0.857290 0.966736
6900 0.016100 0.143160 0.789938 0.904946 0.843540 0.968245
7200 0.017000 0.145755 0.823274 0.897634 0.858848 0.969037
7500 0.012100 0.159342 0.825694 0.883226 0.853491 0.967468
7800 0.013800 0.194886 0.861237 0.859570 0.860403 0.964771
8100 0.008000 0.140271 0.829914 0.896129 0.861752 0.971567
8400 0.010300 0.143318 0.826844 0.908817 0.865895 0.973578
8700 0.015000 0.143392 0.847336 0.889247 0.867786 0.973365
9000 0.006000 0.143512 0.847795 0.905591 0.875741 0.972892
9300 0.011800 0.138747 0.827133 0.894194 0.859357 0.971673
9600 0.008500 0.159490 0.837030 0.909032 0.871546 0.970028
9900 0.010700 0.159249 0.846692 0.910968 0.877655 0.970546
10200 0.008100 0.170069 0.848288 0.900645 0.873683 0.969113
10500 0.004800 0.183795 0.860317 0.899355 0.879403 0.969570
10800 0.010700 0.157024 0.837838 0.906667 0.870894 0.971094
11100 0.003800 0.164286 0.845312 0.880215 0.862410 0.970744
11400 0.009700 0.204025 0.884294 0.887527 0.885907 0.968854
11700 0.008900 0.162819 0.829415 0.887742 0.857588 0.970530
12000 0.006400 0.164296 0.852666 0.901075 0.876202 0.971414
12300 0.007100 0.143367 0.852959 0.895699 0.873807 0.973669
12600 0.015800 0.153383 0.859224 0.900430 0.879345 0.972679
12900 0.006600 0.173447 0.869954 0.899140 0.884306 0.970927
13200 0.006800 0.163234 0.856849 0.897204 0.876563 0.971795
13500 0.003200 0.167164 0.850867 0.907957 0.878485 0.971231
13800 0.003600 0.148950 0.867801 0.910538 0.888656 0.976961
14100 0.003500 0.155691 0.847621 0.907957 0.876752 0.974127
14400 0.003300 0.157672 0.846553 0.911183 0.877680 0.974584
14700 0.002500 0.169965 0.847804 0.917634 0.881338 0.973045
15000 0.003400 0.177099 0.842199 0.912473 0.875929 0.971155
15300 0.006000 0.164151 0.848928 0.911183 0.878954 0.973258
15600 0.002400 0.174305 0.847437 0.906667 0.876052 0.971765
15900 0.004100 0.174561 0.852929 0.907957 0.879583 0.972907
16200 0.002600 0.172626 0.843263 0.907097 0.874016 0.972100
16500 0.002100 0.185302 0.841108 0.907312 0.872957 0.970485
16800 0.002900 0.175638 0.840557 0.909247 0.873554 0.971704
17100 0.001600 0.178750 0.857056 0.906452 0.881062 0.971765
17400 0.003900 0.188910 0.853619 0.907957 0.879950 0.970835
17700 0.002700 0.180822 0.864699 0.907097 0.885390 0.972283
18000 0.001300 0.179974 0.868150 0.906237 0.886785 0.973060
18300 0.000800 0.188032 0.881022 0.904516 0.892615 0.972572
18600 0.002700 0.183266 0.868601 0.901290 0.884644 0.972298
18900 0.001600 0.180301 0.862041 0.903011 0.882050 0.972344
19200 0.002300 0.183432 0.855370 0.904301 0.879155 0.971109
19500 0.001800 0.183381 0.854501 0.904301 0.878696 0.971186
````
### Validation metrics by Named Entity
````
Num examples = 1177
{'JURISPRUDENCIA': {'f1': 0.7016574585635359,
'number': 657,
'precision': 0.6422250316055625,
'recall': 0.7732115677321156},
'LEGISLACAO': {'f1': 0.8839681133746677,
'number': 571,
'precision': 0.8942652329749103,
'recall': 0.8739054290718039},
'LOCAL': {'f1': 0.8253968253968254,
'number': 194,
'precision': 0.7368421052631579,
'recall': 0.9381443298969072},
'ORGANIZACAO': {'f1': 0.8934049079754601,
'number': 1340,
'precision': 0.918769716088328,
'recall': 0.8694029850746269},
'PESSOA': {'f1': 0.982653539615565,
'number': 1072,
'precision': 0.9877474081055608,
'recall': 0.9776119402985075},
'TEMPO': {'f1': 0.9657657657657657,
'number': 816,
'precision': 0.9469964664310954,
'recall': 0.9852941176470589},
'overall_accuracy': 0.9725722644643211,
'overall_f1': 0.8926146010186757,
'overall_precision': 0.8810222036028488,
'overall_recall': 0.9045161290322581}
```` |
LPM/AI_1 | LPM | 2021-12-29T18:54:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04Z | git lfs install
git clone https://huggingface.co/LPM/AI_1 |
patrickvonplaten/wav2vec2-2-bart-base | patrickvonplaten | 2021-12-29T15:53:10Z | 373 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"asr_seq2esq",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- asr_seq2esq
model-index:
- name: wav2vec2-2-bart-base
results: []
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
- example_title: Common Voice sample
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
---
To rerun this experiment, please clone this directory and run:
```bash
python create_model.py
```
followed by
```bash
./run_librispeech.sh
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2-bart-base
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) and [bart-base](https://huggingface.co/facebook/bart-base) on the librispeech_asr - clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.405
- Wer: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
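Pending more detail, a tentative inference sketch is shown below; it assumes the repository ships both a feature extractor and the BART tokenizer, and that the audio is 16 kHz mono.
```python
# Tentative sketch (assumptions: the repo ships a feature extractor and the BART
# tokenizer, and "sample.flac" is a 16 kHz mono recording).
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

model_name = "patrickvonplaten/wav2vec2-2-bart-base"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = SpeechEncoderDecoderModel.from_pretrained(model_name)

speech, _ = librosa.load("sample.flac", sr=16_000)
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(inputs.input_values, max_length=64)

print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```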
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-2-bart-large | patrickvonplaten | 2021-12-29T15:49:52Z | 6 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"asr_seq2esq",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- asr_seq2esq
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
- example_title: Common Voice sample
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
model-index:
- name: wav2vec2-2-bart-large
results: []
---
To rerun this experiment, please clone this directory and run:
```bash
python create_model.py
```
followed by
```bash
./run_librispeech.sh
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2-bart-large
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) and [bart-large](https://huggingface.co/facebook/bart-large) on the librispeech_asr - clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3204
- Wer: 0.0486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3 |
ydshieh/flax-vision-encoder-decoder-vit-gpt2-coco-en | ydshieh | 2021-12-29T10:12:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05Z | ## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/flax-vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as img:
pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values
def generate_step(pixel_values):
output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
preds = generate_step(pixel_values)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
``` |
huggingtweets/ihyjuju | huggingtweets | 2021-12-29T01:31:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ihyjuju/1640741515385/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1448859687449862147/frVD6mW3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">juju 💰</div>
<div style="text-align: center; font-size: 14px;">@ihyjuju</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from juju 💰.
| Data | juju 💰 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 1 |
| Short tweets | 478 |
| Tweets kept | 2769 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n82hqbg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ihyjuju's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1t6rclcz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1t6rclcz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ihyjuju')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pyf98/speechcommands_12commands_conformer | pyf98 | 2021-12-29T00:51:32Z | 4 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:speechcommands",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- speechcommands
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/speechcommands_12commands_conformer`
This model was trained by Yifan Peng using speechcommands recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout bf523b70cae8300da004b41ec6a0d1b57c7ae8bb
pip install -e .
cd egs2/speechcommands/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/speechcommands_12commands_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Dec 24 21:53:37 EST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `3fd3dae71427d2ba5ecbc3fe0f2ae05db79acc29`
- Commit date: `Fri Dec 24 21:32:26 2021 -0500`
## asr_conformer_noBatchNorm_warmup5k_lr2e-4_accum3_conv15_5speeds
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|infer/dev|4605|4605|97.7|2.3|0.0|0.0|2.3|2.3|
|infer/test|4890|4890|97.9|2.1|0.0|0.0|2.1|2.1|
|infer/test_speechbrain|4886|4886|98.4|1.6|0.0|0.0|1.6|1.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|infer/dev|4605|19541|98.6|0.9|0.5|1.0|2.5|2.3|
|infer/test|4890|19959|97.8|1.1|1.1|0.7|3.0|2.1|
|infer/test_speechbrain|4886|19923|98.7|0.7|0.6|0.6|1.9|1.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer_noBatchNorm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_noBatchNorm_warmup5k_lr2e-4_accum3_conv15_5speeds
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 150
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 3
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 4000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_fbank_pitch_word_sp/train/speech_shape
- exp/asr_stats_fbank_pitch_word_sp/train/text_shape.word
valid_shape_file:
- exp/asr_stats_fbank_pitch_word_sp/valid/speech_shape
- exp/asr_stats_fbank_pitch_word_sp/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 800
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/fbank_pitch/train_sp/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/fbank_pitch/dev/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- 'yes'
- down
- 'no'
- stop
- go
- 'on'
- left
- right
- _unknown_
- _silence_
- 'off'
- up
- <sos/eos>
init: null
input_size: 83
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: null
frontend_conf: {}
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_fbank_pitch_word_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: legacy
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/sh44sti | huggingtweets | 2021-12-28T23:36:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/sh44sti/1640734573813/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1202199127544737793/v_wbcf_Z_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shasti</div>
<div style="text-align: center; font-size: 14px;">@sh44sti</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Shasti.
| Data | Shasti |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 32 |
| Short tweets | 1087 |
| Tweets kept | 2130 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/178u93b4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sh44sti's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2u8a1x7b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2u8a1x7b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sh44sti')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mrm8488/deberta-v3-small-goemotions | mrm8488 | 2021-12-28T23:12:12Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: deberta-v3-snall-goemotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-snall-goemotions
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5638
- F1: 0.4241
## Model description
More information needed
## Intended uses & limitations
More information needed
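As a stopgap, emotion classification can be sketched as follows; the example sentence is invented, and the returned labels depend on the `id2label` mapping saved with this checkpoint.
```python
# Stopgap sketch; the sentence is invented and labels come from the checkpoint's
# id2label mapping.
from transformers import pipeline

classifier = pipeline("text-classification", model="mrm8488/deberta-v3-small-goemotions")
print(classifier("I can't believe how well this turned out, thank you so much!"))
```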
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.614 | 1.0 | 3082 | 1.5577 | 0.3663 |
| 1.4338 | 2.0 | 6164 | 1.5580 | 0.4084 |
| 1.2936 | 3.0 | 9246 | 1.5006 | 0.4179 |
| 1.1531 | 4.0 | 12328 | 1.5348 | 0.4276 |
| 1.0536 | 5.0 | 15410 | 1.5638 | 0.4241 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
SophieTr/results | SophieTr | 2021-12-28T19:59:38Z | 14 | 2 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [sshleifer/distill-pegasus-xsum-16-4](https://huggingface.co/sshleifer/distill-pegasus-xsum-16-4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4473
## Model description
More information needed
## Intended uses & limitations
More information needed
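In lieu of fuller documentation, a tentative summarization sketch is given below; the input text is a placeholder and the generation lengths are assumptions rather than tuned values.
```python
# Tentative sketch; the text is a placeholder and the length limits are guesses.
from transformers import pipeline

summarizer = pipeline("summarization", model="SophieTr/results")
text = (
    "Replace this with the post you want to summarize. The model was fine-tuned "
    "from a distilled PEGASUS checkpoint, so it expects fairly long input passages."
)
print(summarizer(text, max_length=48, min_length=8))
```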
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2378 | 0.51 | 100 | 7.1853 |
| 7.2309 | 1.01 | 200 | 6.6342 |
| 6.4796 | 1.52 | 300 | 6.3206 |
| 6.2691 | 2.02 | 400 | 6.0184 |
| 5.7382 | 2.53 | 500 | 5.5754 |
| 4.9922 | 3.03 | 600 | 4.5178 |
| 3.6031 | 3.54 | 700 | 2.8579 |
| 2.5203 | 4.04 | 800 | 2.4718 |
| 2.2563 | 4.55 | 900 | 2.4128 |
| 2.1425 | 5.05 | 1000 | 2.3767 |
| 2.004 | 5.56 | 1100 | 2.3982 |
| 2.0437 | 6.06 | 1200 | 2.3787 |
| 1.9407 | 6.57 | 1300 | 2.3952 |
| 1.9194 | 7.07 | 1400 | 2.3964 |
| 1.758 | 7.58 | 1500 | 2.4056 |
| 1.918 | 8.08 | 1600 | 2.4101 |
| 1.9162 | 8.59 | 1700 | 2.4085 |
| 1.8983 | 9.09 | 1800 | 2.4058 |
| 1.6939 | 9.6 | 1900 | 2.4050 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best | espnet | 2021-12-28T18:57:57Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slue-voxceleb",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- slue-voxceleb
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
This model was trained by Siddhant using slue-voxceleb recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 17758ad804fd7c4b6f88ef5601f475a241dc4605
pip install -e .
cd egs2/slue-voxceleb/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Dec 28 12:28:28 EST 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a2`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `6bf3c2a4f138d35331634d2e879bbc5c32a5266e`
- Commit date: `Mon Dec 22 15:41:32 EST 2021`
## Using Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with intent
- ASR config: [conf/train_asr.yaml](conf/tuning/train_asr_conformer.yaml)
- token_type: word
|dataset|Snt|Intent Classification Accuracy (%)|Intent Classification Macro F1 (%)|
|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|955|80.2|29.7|
### Detailed Classification Report
|dataset|Label|Snt|Prec|Recall|F1|
|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|Neutral|784|85|93|89|
|inference_asr_model_valid.acc.ave_10best/devel|Positive|167|40|24|30|
|inference_asr_model_valid.acc.ave_10best/devel|Negative|3|0|0|0|
|inference_asr_model_valid.acc.ave_10best/devel|Mixed|1|0|0|0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- sound
- - dump/raw/devel/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁i
- s
- ▁and
- ''''
- ▁the
- ▁a
- ▁to
- ▁it
- Neutral
- ▁you
- ▁that
- ▁of
- t
- ing
- ▁in
- ▁was
- ed
- ▁uh
- ▁know
- e
- m
- ▁he
- y
- er
- ▁so
- ▁we
- re
- a
- o
- d
- ▁um
- i
- ▁s
- c
- ▁like
- n
- ▁is
- ▁be
- ▁f
- ▁but
- ▁c
- Positive
- en
- l
- ve
- ▁just
- ▁m
- st
- ▁they
- le
- an
- ▁on
- ▁p
- u
- ▁my
- ar
- p
- ▁this
- ▁for
- ▁b
- ▁think
- in
- ▁with
- g
- or
- ▁h
- r
- ly
- w
- ▁me
- ▁d
- ▁e
- ▁have
- ▁she
- it
- ▁t
- ▁what
- b
- ▁st
- al
- es
- ▁there
- ▁really
- ic
- ▁g
- ▁as
- ▁w
- ▁l
- ▁do
- ll
- v
- ▁all
- at
- 'on'
- as
- ▁about
- h
- ▁not
- ▁re
- ▁o
- ▁at
- k
- ▁don
- ▁had
- ▁when
- ou
- ent
- is
- ra
- ▁who
- ri
- ▁go
- se
- f
- ▁out
- ▁get
- ▁an
- ▁people
- nd
- ▁kind
- ▁very
- ce
- ▁because
- ▁are
- ion
- ▁some
- et
- ▁can
- ge
- ▁or
- me
- ▁up
- ▁n
- ▁if
- ▁no
- ▁one
- ▁were
- ct
- ▁mean
- ad
- ▁time
- ▁ch
- ▁then
- ro
- ▁ex
- ▁mo
- ▁her
- ▁every
- ▁would
- ▁co
- ▁work
- ir
- ▁sh
- ay
- ▁se
- ol
- ver
- ▁su
- ▁got
- ▁k
- th
- ▁love
- ▁from
- ld
- ation
- ▁him
- ▁said
- ▁how
- ▁well
- ▁lot
- ▁show
- ch
- ard
- ie
- ▁pro
- ▁de
- ▁gonna
- ▁bo
- ▁say
- ▁see
- ▁li
- one
- ▁his
- ther
- ▁been
- ur
- ▁any
- ▁great
- ▁
- ▁yeah
- pe
- ▁which
- ▁come
- ▁them
- ot
- ▁play
- ab
- ite
- ▁way
- ally
- id
- gh
- ▁r
- ▁sc
- our
- x
- mp
- ers
- ong
- ate
- ▁your
- ss
- ast
- ▁did
- ▁sort
- ▁am
- am
- and
- ▁make
- ant
- ▁thing
- ▁ha
- ▁te
- ▁has
- ess
- ▁v
- ▁something
- ▁back
- ▁where
- ▁things
- red
- ▁al
- ut
- el
- ight
- ment
- un
- ive
- ▁th
- ▁le
- il
- ▁j
- op
- ▁more
- ▁ro
- ill
- ▁fi
- ies
- ▁much
- ck
- ▁ne
- ▁wh
- ▁always
- ▁act
- ine
- pp
- z
- ▁now
- ▁con
- thing
- ▁us
- body
- ▁want
- ▁other
- ort
- ice
- ▁doing
- ▁sa
- ▁feel
- ow
- ▁int
- ne
- ▁these
- ▁could
- ▁good
- ▁cause
- Negative
- ▁actually
- ▁wr
- ▁little
- ain
- ▁being
- ▁look
- ▁into
- ere
- ul
- ▁our
- ▁guy
- ▁first
- ud
- ▁by
- ▁fun
- ▁qu
- ▁didn
- us
- ity
- ▁jo
- od
- ▁u
- ▁part
- ▁off
- ▁pre
- ▁right
- ▁film
- ▁start
- ok
- ▁two
- ving
- ▁never
- pt
- um
- te
- ▁movie
- ▁going
- ff
- nder
- ke
- ▁ag
- ▁en
- ▁try
- ful
- im
- ays
- ▁life
- ▁different
- ach
- are
- ▁di
- ist
- ▁oh
- au
- ▁po
- nt
- ▁com
- all
- ▁lo
- om
- ▁real
- ▁y
- ame
- ▁went
- ry
- ber
- ▁even
- ci
- ▁ho
- ▁years
- ▁their
- ▁happen
- ure
- self
- per
- ▁pl
- ▁those
- ble
- 'no'
- ▁day
- ▁take
- ▁does
- ien
- ▁br
- be
- wn
- ▁thought
- ▁fe
- ght
- ▁tr
- ▁story
- ty
- ▁down
- ous
- ish
- ▁wom
- ▁wanna
- ▁put
- ▁through
- ide
- ▁ab
- ▁new
- ▁also
- ▁big
- ▁call
- ▁around
- ▁character
- ▁read
- iz
- ▁came
- act
- ily
- ath
- ag
- ree
- ▁per
- ▁will
- ▁mu
- ▁talk
- ▁over
- ▁friend
- atch
- ▁bl
- ade
- ▁world
- ▁many
- ▁sp
- sic
- ▁cl
- ▁bit
- ▁man
- ace
- ▁person
- ft
- ip
- ▁than
- ▁wanted
- ▁may
- ven
- ick
- ious
- ▁mar
- ▁before
- ▁rel
- j
- ting
- ▁set
- sh
- ep
- ▁un
- ue
- ▁aw
- ▁find
- ▁kid
- tain
- ▁such
- ter
- ▁end
- ▁tw
- ind
- aking
- ▁after
- ▁fam
- ars
- ig
- ore
- ▁bec
- ak
- art
- reat
- ust
- rou
- ack
- ▁ye
- ould
- ime
- itt
- ▁gu
- qu
- ose
- fe
- ▁wor
- lf
- alk
- ▁charact
- ▁mov
- out
- ich
- ▁happ
- ▁thou
- ith
- <mixed>
- rom
- ake
- ▁diff
- ▁char
- na
- round
- ory
- ink
- ually
- ▁gon
- ▁pe
- right
- ody
- ah
- rie
- riend
- now
- so
- ause
- ▁fil
- ▁pers
- fore
- very
- ▁differe
- rough
- q
- ▁fir
- anna
- ways
- ':'
- '&'
- fter
- <sos/eos>
transcript_token_list: null
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
postdecoder: null
postdecoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.3a2
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/amnananadeem-talal916 | huggingtweets | 2021-12-28T12:50:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433365322313043974/gPI08qaY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377835980552474624/sxTjuspv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">halal talal & amna</div>
<div style="text-align: center; font-size: 14px;">@amnananadeem-talal916</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from halal talal & amna.
| Data | halal talal | amna |
| --- | --- | --- |
| Tweets downloaded | 3187 | 3132 |
| Retweets | 484 | 778 |
| Short tweets | 532 | 369 |
| Tweets kept | 2171 | 1985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/42dvu161/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @amnananadeem-talal916's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2irbhtmu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2irbhtmu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/amnananadeem-talal916')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
luomingshuang/icefall_avsr_grid_combinenet_ctc | luomingshuang | 2021-12-28T12:46:37Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05Z | # Pre-trained CombineNet-CTC models for the GRID audio-visual dataset with icefall.
The model was trained on full [GRID](https://zenodo.org/record/3625687#.Ybn7HagzY2w) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
See (https://github.com/k2-fsa/icefall/tree/master/egs/grid/AVSR/combinenet_ctc_avsr) for more details of this model.
## How to use
See (https://github.com/k2-fsa/icefall/blob/master/egs/grid/AVSR/combinenet_ctc_avsr/Pre-trained.md)
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse installation guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/grid/AVSR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0"
python combinenet_ctc_avsr/train.py --world-size 1
```
## Evaluation results
The best decoding results (WER) on the GRID test set are listed below. This result was obtained by averaging the models from epochs 25 to 29; the decoding method is `whole-lattice-rescoring` with an LM scale of 0.01.
||TEST|
|--|--|
|WER|1.71%| |
facebook/wav2vec2-large-lv60 | facebook | 2021-12-28T12:45:09Z | 10,076 | 8 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large-LV60
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
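As a quick illustration (not part of the original card), the pretrained checkpoint can be used to extract latent speech representations. This is a minimal sketch assuming the repository ships a feature-extractor config (otherwise instantiate `Wav2Vec2FeatureExtractor()` with its defaults); the dummy waveform is purely illustrative:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

# One second of silence at 16 kHz; replace with a real waveform sampled at 16 kHz.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Hidden states of shape (batch, frames, hidden_size), usable as speech representations.
print(outputs.last_hidden_state.shape)
```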
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
luomingshuang/icefall_vsr_grid_visualnet2_ctc | luomingshuang | 2021-12-28T12:24:55Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05Z | # Pre-trained VisualNet2-CTC models for the GRID visual dataset with icefall.
The model was trained on full [GRID](https://zenodo.org/record/3625687#.Ybn7HagzY2w) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
See (https://github.com/k2-fsa/icefall/tree/master/egs/grid/AVSR/visualnet2_ctc_asr) for more details of this model.
## How to use
See (https://github.com/k2-fsa/icefall/blob/master/egs/grid/AVSR/visualnet2_ctc_asr/Pre-trained.md)
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse installation guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/grid/AVSR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0"
python visualnet2_ctc_asr/train.py --world-size 1
```
## Evaluation results
The best decoding results (WER) on the GRID test set are listed below. This result was obtained by averaging the models from epochs 15 to 29; the decoding method is `1best`.
||TEST|
|--|--|
|WER|13.63%| |
luomingshuang/icefall_vsr_grid_visualnet_ctc | luomingshuang | 2021-12-28T12:24:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05Z | # Pre-trained VisualNet-CTC models for the GRID visual dataset with icefall.
The model was trained on full [GRID](https://zenodo.org/record/3625687#.Ybn7HagzY2w) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
See (https://github.com/k2-fsa/icefall/tree/master/egs/grid/AVSR/visualnet_ctc_asr) for more details of this model.
## How to use
See (https://github.com/k2-fsa/icefall/blob/master/egs/grid/AVSR/visualnet_ctc_asr/Pre-trained.md)
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse installation guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/grid/AVSR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0"
python visualnet_ctc_asr/train.py --world-size 1
```
## Evaluation results
The best decoding results (WER) on the GRID test set are listed below. This result was obtained by averaging the models from epochs 16 to 25; the decoding method is `1best`.
||TEST|
|--|--|
|WER|15.68%| |
huggingtweets/sunnekochan | huggingtweets | 2021-12-28T06:52:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/sunnekochan/1640674359998/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475670958170157064/ykhcM2Wb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sun 🌻</div>
<div style="text-align: center; font-size: 14px;">@sunnekochan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sun 🌻.
| Data | Sun 🌻 |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 706 |
| Short tweets | 637 |
| Tweets kept | 1900 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11t8eba2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sunnekochan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lhat7qg6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lhat7qg6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sunnekochan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nlp-waseda/gpt2-small-japanese-wikipedia | nlp-waseda | 2021-12-28T06:31:38Z | 23 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language:
- ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: "早稲田 大学 で 自然 言語 処理 を"
---
# nlp-waseda/gpt2-small-japanese-wikipedia
This model is a Japanese GPT-2 model pretrained on Japanese Wikipedia.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task.
Note that the texts should be segmented into words using Juman++ in advance.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese-wikipedia')
>>> set_seed(42)
>>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5)
[{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 1969 年 に は 同 大学院 を 修了 。 東京 芝浦 電気 株式 会社 に 就職 後 、 情報 処理'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 は 立教 大学 理学部 助手 を 務めた 。 1978 年 に 神奈川 県立 湘南 高等 学校 校長 に 就任'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 研究 。 1972 年 に 早稲田 大学 文学部 ドイツ 文学 専攻 を 卒業 し 、 同 年 から 1979 年 まで 上智 大学'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 する 。 1979 年 東京 農工 大学 農学 部 卒業 。 1980 年 同 大学院 農学 研究 科 修士 課程 修了 。'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 し ながら 、 日本 で 活動 する 自然 言語 研究 家 。 大学 時代 は 東京 大学 理学部 の 助手 を 務め'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ReformerTokenizer, GPT2Model
tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese-wikipedia')
model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese-wikipedia')
text = "早稲田 大学 で 自然 言語 処理 を"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
The GPT-2 model was pretrained on Japanese Wikipedia, dumped on 2021-12-20.
## Training procedure
### Preprocessing
The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining.
The model was trained on 8 NVIDIA A100 GPUs.
|
vukpetar/trocr-small-photomath | vukpetar | 2021-12-27T19:41:43Z | 45 | 6 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:2109.10282",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2022-03-02T23:29:05Z | ## TrOCR (small-sized model, fine-tuned on Synthetic Math Expression Dataset)
TrOCR model fine-tuned on the Synthetic Math Expression Dataset. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
Disclaimer: The team releasing TrOCR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the model hub to look for fine-tuned versions on a task that interests you.
## How to use
Here is how to use this model in PyTorch:
```python
from transformers import VisionEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer
from PIL import Image
import requests
# load an example image containing a math expression
url = 'https://drive.google.com/uc?export=view&id=15dUjO44YDe1Agw_Qi8MyODRHpUFaCFw-'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
feature_extractor = AutoFeatureExtractor.from_pretrained('vukpetar/trocr-small-photomath')
tokenizer = AutoTokenizer.from_pretrained("vukpetar/trocr-small-photomath")
model = VisionEncoderDecoderModel.from_pretrained('vukpetar/trocr-small-photomath')
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## BibTeX entry and citation info
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac | MMG | 2021-12-27T17:33:12Z | 24 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:sqac",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
This model is a fine-tuned version of [ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es](https://huggingface.co/ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- Exact match: 65.55793991416309
- F1: 82.72322701572416
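As an illustrative sketch (not from the original card), the model can be queried with the standard question-answering pipeline; the Spanish question and context below are made up for demonstration:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac",
)

result = qa(
    question="¿Dónde vive María?",
    context="María vive en Madrid desde 2010 y trabaja como ingeniera.",
)
print(result["answer"])  # expected: "Madrid"
```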
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tiennvcs/layoutlmv2-base-uncased-finetuned-vi-infovqa | tiennvcs | 2021-12-27T14:23:33Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| document-question-answering | 2022-03-02T23:29:05Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.33 | 100 | 5.3461 |
| No log | 0.66 | 200 | 4.9734 |
| No log | 0.99 | 300 | 4.6074 |
| No log | 1.32 | 400 | 4.4548 |
| 4.6355 | 1.65 | 500 | 4.3831 |
| 4.6355 | 1.98 | 600 | 4.3332 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu101
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tiennvcs/layoutlmv2-large-uncased-finetuned-vi-infovqa | tiennvcs | 2021-12-27T11:54:10Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| document-question-answering | 2022-03-02T23:29:05Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-large-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-large-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-large-uncased](https://huggingface.co/microsoft/layoutlmv2-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.17 | 100 | 4.6181 |
| No log | 0.33 | 200 | 4.3357 |
| No log | 0.5 | 300 | 4.3897 |
| No log | 0.66 | 400 | 4.8238 |
| 4.4277 | 0.83 | 500 | 3.9088 |
| 4.4277 | 0.99 | 600 | 3.6063 |
| 4.4277 | 1.16 | 700 | 3.4278 |
| 4.4277 | 1.32 | 800 | 3.5428 |
| 4.4277 | 1.49 | 900 | 3.4331 |
| 3.0413 | 1.65 | 1000 | 3.3699 |
| 3.0413 | 1.82 | 1100 | 3.3622 |
| 3.0413 | 1.98 | 1200 | 3.5294 |
| 3.0413 | 2.15 | 1300 | 3.7918 |
| 3.0413 | 2.31 | 1400 | 3.4007 |
| 2.0843 | 2.48 | 1500 | 4.0296 |
| 2.0843 | 2.64 | 1600 | 4.1852 |
| 2.0843 | 2.81 | 1700 | 3.6690 |
| 2.0843 | 2.97 | 1800 | 3.6089 |
| 2.0843 | 3.14 | 1900 | 5.5534 |
| 1.7527 | 3.3 | 2000 | 4.7498 |
| 1.7527 | 3.47 | 2100 | 5.2691 |
| 1.7527 | 3.63 | 2200 | 5.1324 |
| 1.7527 | 3.8 | 2300 | 4.5912 |
| 1.7527 | 3.96 | 2400 | 4.1727 |
| 1.2037 | 4.13 | 2500 | 6.1174 |
| 1.2037 | 4.29 | 2600 | 5.7172 |
| 1.2037 | 4.46 | 2700 | 5.8843 |
| 1.2037 | 4.62 | 2800 | 6.4232 |
| 1.2037 | 4.79 | 2900 | 7.4486 |
| 0.8386 | 4.95 | 3000 | 7.1946 |
| 0.8386 | 5.12 | 3100 | 7.9869 |
| 0.8386 | 5.28 | 3200 | 8.0310 |
| 0.8386 | 5.45 | 3300 | 8.2954 |
| 0.8386 | 5.61 | 3400 | 8.5361 |
| 0.4389 | 5.78 | 3500 | 8.6040 |
| 0.4389 | 5.94 | 3600 | 8.5806 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu101
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tiennvcs/bert-base-uncased-finetuned-vi-infovqa | tiennvcs | 2021-12-27T09:57:23Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.21 | 100 | 4.2058 |
| No log | 0.43 | 200 | 4.0210 |
| No log | 0.64 | 300 | 4.0454 |
| No log | 0.85 | 400 | 3.7557 |
| 4.04 | 1.07 | 500 | 3.8257 |
| 4.04 | 1.28 | 600 | 3.7713 |
| 4.04 | 1.49 | 700 | 3.6075 |
| 4.04 | 1.71 | 800 | 3.6155 |
| 4.04 | 1.92 | 900 | 3.5470 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27 | csukuangfj | 2021-12-27T08:12:51Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05Z | # Introduction
## How to clone this repo
```
sudo apt-get install git-lfs
git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27
cd icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27
git lfs pull
```
**Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later.
The model in this repo was trained using commit `14c93add507982306f5a478cd144e0e32e0f970d`.
You can use
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout 14c93add507982306f5a478cd144e0e32e0f970d
```
to download `icefall`.
You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/14c93add507982306f5a478cd144e0e32e0f970d/egs/librispeech/ASR/transducer_stateless/train.py#L198>.
In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward;
the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2.
The decoder architecture is modified from
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419).
A Conv1d layer is placed right after the input embedding layer.
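For intuition, here is a hypothetical PyTorch sketch of such a stateless prediction network (an embedding followed by a Conv1d over the last two predicted tokens); the real implementation lives in icefall's `transducer_stateless` recipe and differs in details:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatelessDecoder(nn.Module):
    """Stateless prediction network: no RNN, only embedding + Conv1d."""

    def __init__(self, vocab_size: int, embed_dim: int = 1024, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Conv1d over the last `context_size` emitted tokens replaces the recurrent state.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, num_tokens) ids of previously emitted tokens
        emb = self.embedding(y).permute(0, 2, 1)            # (batch, embed_dim, num_tokens)
        emb = F.pad(emb, (self.conv.kernel_size[0] - 1, 0))  # left-pad to keep the length
        out = torch.relu(self.conv(emb))                     # (batch, embed_dim, num_tokens)
        return out.permute(0, 2, 1)                          # (batch, num_tokens, embed_dim)
```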
-----
## Description
This repo provides a pre-trained transducer Conformer model for the LibriSpeech dataset
using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless
and contains only an embedding layer and a Conv1d.
The commands for training are:
```
cd egs/librispeech/ASR/
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
--world-size 4 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir transducer_stateless/exp-full \
--full-libri 1 \
--max-duration 250 \
--lr-factor 3
```
The tensorboard training log can be found at
<https://tensorboard.dev/experiment/Mjx7MeTgR3Oyr1yBCwjozw/>
The command for decoding is:
```
epoch=29
avg=13
## greedy search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100
## beam search
./transducer_stateless/decode.py \
--epoch $epoch \
--avg $avg \
--exp-dir transducer_stateless/exp-full \
--bpe-model ./data/lang_bpe_500/bpe.model \
--max-duration 100 \
--decoding-method beam_search \
--beam-size 4
```
You can find the decoding log for the above command in this
repo (in the folder `log`).
The WERs for the test datasets are
| | test-clean | test-other | comment |
|---------------------------|------------|------------|------------------------------------------|
| greedy search | 2.85 | 7.30 | --epoch 29, --avg 13, --max-duration 100 |
| beam search (beam size 4) | 2.83 | 7.19 | |
# File description
- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`
`exp/pretrained.pt` is generated by the following command:
```
./transducer_stateless/export.py \
--epoch 29 \
--avg 13 \
--bpe-model data/lang_bpe_500/bpe.model \
--exp-dir transducer_stateless/exp-full
```
**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:
```
cp icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/exp/pretrained.pt \
/path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt
```
and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`.
[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/log
|
xkang/distilbert-base-uncased-finetuned-imdb-whole-word-masking | xkang | 2021-12-27T07:35:23Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb-whole-word-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-whole-word-masking
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3043
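As a usage illustration (not part of the original card), the fine-tuned masked-language-model checkpoint can be queried with the fill-mask pipeline; the movie-review prompt is made up:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="xkang/distilbert-base-uncased-finetuned-imdb-whole-word-masking",
)

for pred in fill_mask("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```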
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5536 | 1.0 | 157 | 3.3242 |
| 3.4026 | 2.0 | 314 | 3.2848 |
| 3.3708 | 3.0 | 471 | 3.2791 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
SEISHIN/distilbert-base-uncased-finetuned-squad | SEISHIN | 2021-12-27T05:27:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1605
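As an illustrative sketch (not from the original card), the checkpoint can be used with the question-answering pipeline; the question and context below are made up:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="SEISHIN/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was used for fine-tuning?",
    context="The checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"])
```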
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2172 | 1.0 | 5533 | 1.1532 |
| 0.9446 | 2.0 | 11066 | 1.1184 |
| 0.7671 | 3.0 | 16599 | 1.1605 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lijingxin/dummy-model | lijingxin | 2021-12-27T02:12:17Z | 5 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ayham/roberta_gpt2_new_max64_summarization_cnndm | Ayham | 2021-12-27T00:19:01Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: roberta_gpt2_new_max64_summarization_cnndm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_gpt2_new_max64_summarization_cnndm
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lakahaga/novel_reading_tts | lakahaga | 2021-12-26T17:45:00Z | 0 | 4 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ko",
"dataset:novelspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| text-to-speech | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ko
datasets:
- novelspeech
license: cc-by-4.0
---
## ESPnet2 TTS model
### `lakahaga/novel_reading_tts`
This model was trained by lakahaga using the novelspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 9827dfe37f69e8e55f902dc4e340de5108596311
pip install -e .
cd egs2/novelspeech/tts1
./run.sh --skip_data_prep false --skip_train true --download_model lakahaga/novel_reading_tts
```
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_tacotron_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 34177
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 25600000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/energy.scp
- energy
- npy
- - dump/raw/tr_no_dev/utt2sid
- sids
- text_int
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/energy.scp
- energy
- npy
- - dump/raw/dev/utt2sid
- sids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- '='
- _
- A
- Y
- N
- O
- E
- U
- L
- G
- S
- D
- M
- J
- H
- B
- ZERO
- TWO
- C
- .
- Q
- ','
- P
- T
- SEVEN
- X
- W
- THREE
- ONE
- NINE
- K
- EIGHT
- '@'
- '!'
- Z
- '?'
- F
- SIX
- FOUR
- '#'
- $
- +
- '%'
- FIVE
- '~'
- AND
- '*'
- '...'
- ''
- ^
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/feats_stats.npz
tts: fastspeech2
tts_conf:
adim: 384
aheads: 2
elayers: 4
eunits: 1536
dlayers: 4
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
encoder_type: conformer
decoder_type: conformer
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
reduction_factor: 1
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/energy_stats.npz
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SEISHIN/distilbert-base-uncased-finetuned-mnli | SEISHIN | 2021-12-26T16:30:56Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.82190524707081
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6560
- Accuracy: 0.8219
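As an illustrative sketch (not from the original card), the model can be applied to premise/hypothesis pairs via the text-classification pipeline; recent transformers versions accept a dict with `text` and `text_pair`, and the label names shown depend on the checkpoint's config:
```python
from transformers import pipeline

nli = pipeline(
    "text-classification",
    model="SEISHIN/distilbert-base-uncased-finetuned-mnli",
)

example = {"text": "A man is playing a guitar on stage.",
           "text_pair": "Someone is performing music."}
print(nli(example))  # e.g. [{'label': ..., 'score': ...}]
```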
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5161 | 1.0 | 24544 | 0.5025 | 0.8037 |
| 0.4176 | 2.0 | 49088 | 0.5274 | 0.8131 |
| 0.3154 | 3.0 | 73632 | 0.5348 | 0.8194 |
| 0.2294 | 4.0 | 98176 | 0.6560 | 0.8219 |
| 0.1827 | 5.0 | 122720 | 0.8190 | 0.8203 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
wilsontam/gpt2-dstc9 | wilsontam | 2021-12-26T14:02:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"dstc9",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: "en"
tags:
- dstc9
widget:
- text: "Yes, I'm going to be in Chinatown, San Francisco and am looking"
- text: "Can you find me one that is in the"
---
This GPT-2 model is trained using DSTC9 data for dialogue modeling purposes.
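As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the text-generation pipeline; the prompt mirrors one of the widget examples above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="wilsontam/gpt2-dstc9")

prompt = "Yes, I'm going to be in Chinatown, San Francisco and am looking"
print(generator(prompt, max_length=50, num_return_sequences=1)[0]["generated_text"])
```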
Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset
Credit: Jia-Chen Jason Gu, Wilson Tam
|
wilsontam/bert-base-uncased-dstc9 | wilsontam | 2021-12-26T14:00:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"dstc10",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: "en"
tags:
- dstc10
widget:
- text: "Can you accommodate large [MASK] ?"
---
# Goal
This BERT model is trained using DSTC9 training + validation data for dialogue modeling purposes.
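As a minimal usage sketch (not part of the original card), the checkpoint can be queried with the fill-mask pipeline; the prompt mirrors the widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="wilsontam/bert-base-uncased-dstc9")

for pred in fill_mask("Can you accommodate large [MASK] ?"):
    print(pred["token_str"], round(pred["score"], 3))
```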
Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset
Credit: Shuhan Yuan, Wilson Tam |
airKlizz/mt5-base-wikinewssum-portuguese | airKlizz | 2021-12-26T08:03:49Z | 22 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-portuguese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-portuguese
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0428
- Rouge1: 9.4966
- Rouge2: 4.2224
- Rougel: 7.9845
- Rougelsum: 8.8641
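As an illustrative sketch (not from the original card), the checkpoint can be used with the summarization pipeline; the Portuguese snippet below is made up and should be replaced with a real news article:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="airKlizz/mt5-base-wikinewssum-portuguese",
)

text = (
    "O governo anunciou nesta segunda-feira um novo pacote de medidas "
    "econômicas para apoiar pequenas empresas afetadas pela crise."
)
print(summarizer(text, max_length=64, min_length=10)[0]["summary_text"])
```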
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 334 | 2.2258 | 7.3686 | 2.9066 | 6.3167 | 6.8758 |
| No log | 2.0 | 668 | 2.1389 | 9.0551 | 3.8395 | 7.6578 | 8.4641 |
| No log | 3.0 | 1002 | 2.1030 | 9.2792 | 3.9352 | 7.8259 | 8.663 |
| No log | 4.0 | 1336 | 2.0841 | 9.337 | 4.0647 | 7.8662 | 8.693 |
| 3.2831 | 5.0 | 1670 | 2.0487 | 9.4244 | 4.0821 | 7.8633 | 8.7111 |
| 3.2831 | 6.0 | 2004 | 2.0580 | 9.4598 | 4.1598 | 7.9511 | 8.8299 |
| 3.2831 | 7.0 | 2338 | 2.0426 | 9.501 | 4.1885 | 7.9803 | 8.8612 |
| 3.2831 | 8.0 | 2672 | 2.0428 | 9.4966 | 4.2224 | 7.9845 | 8.8641 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/nateritter-naval | huggingtweets | 2021-12-26T06:51:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474979242618195971/Dm_HPJsd_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nate Ritter & Naval</div>
<div style="text-align: center; font-size: 14px;">@nateritter-naval</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nate Ritter & Naval.
| Data | Nate Ritter | Naval |
| --- | --- | --- |
| Tweets downloaded | 3244 | 3243 |
| Retweets | 401 | 171 |
| Short tweets | 400 | 629 |
| Tweets kept | 2443 | 2443 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1t8lp3s8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nateritter-naval's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/293roeg0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/293roeg0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nateritter-naval')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
wilsontam/bert-base-uncased-dstc10-kb-title-body-validate | wilsontam | 2021-12-26T04:16:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dstc10",
"knowledge title-body validation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language: "en"
tags:
- dstc10
- knowledge title-body validation
widget:
- text: "Can you accommodate large groups? It does not offer free WiFi."
- text: "Is there a gym on site? It does not have an onsite fitness center."
---
This is the model used for knowledge clustering: we feed a title-body pair and the classifier predicts whether the pair is valid.
For further information, please refer to the GitHub repository: https://github.com/yctam/dstc10_track2_task2.
Credit: Jiakai Zou, Wilson Tam
---
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def single_test(tokenizer, title_body_pair):
    # Encode the (title, body) pair as a sentence pair for the classifier.
    result = tokenizer([title_body_pair], return_tensors="pt")
    model.eval()
    outputs = model(**result)
    predictions = outputs.logits.argmax(dim=-1)
    # There was a mistake in flipping the labels, so label 0 means the pair is valid.
    return True if predictions == 0 else False


if __name__ == '__main__':
    model_name = "wilsontam/bert-base-uncased-dstc10-kb-title-body-validate"
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    sentence = "Can I check in anytime?"
    body = "Yes, 24 Hours Front Desk Available."
    print(single_test(tokenizer, (sentence, body)))  # Expect: True
``` |
mohammadtari/arxivinterface | mohammadtari | 2021-12-26T02:18:42Z | 4 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5_small_summarization_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5_small_summarization_model
This model was trained from scratch on an unknown dataset.
No evaluation results were recorded for this model.
## Model description
More information needed
## Intended uses & limitations
More information needed
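Since the card documents no usage, here is a minimal TensorFlow inference sketch. The `summarize:` prefix (a T5 convention) and the generation length are assumptions, not taken from the training setup.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "mohammadtari/arxivinterface"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# "summarize:" prefix follows the usual T5 convention; it may not match the training data.
text = "summarize: We present a new approach to abstractive summarization of scientific articles."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
outputs = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```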
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ayham/xlmroberta_large_gpt2_summarization_cnndm | Ayham | 2021-12-26T00:06:35Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlmroberta_large_gpt2_summarization_cnndm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta_large_gpt2_summarization_cnndm
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
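As a rough usage sketch (not from the original authors): the checkpoint is an encoder-decoder model, so it can in principle be loaded with `EncoderDecoderModel`. Which tokenizer was saved with the repository, and whether it is appropriate for decoding the GPT-2 side, is an assumption here.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Ayham/xlmroberta_large_gpt2_summarization_cnndm"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes a usable tokenizer is stored in the repo
model = EncoderDecoderModel.from_pretrained(model_id)

article = "(CNN) -- Replace this with a full news article from the CNN/DailyMail distribution."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    num_beams=4,    # illustrative decoding settings
    max_length=128,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```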
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
airKlizz/mt5-base-wikinewssum-spanish | airKlizz | 2021-12-25T23:19:15Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2394
- Rouge1: 7.9732
- Rouge2: 3.5041
- Rougel: 6.6713
- Rougelsum: 7.5229
## Model description
More information needed
## Intended uses & limitations
More information needed
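No usage example is given in the card, so below is a minimal sketch using the generic seq2seq classes; beam size and lengths are illustrative, and how multiple source documents were concatenated during training is not documented here.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "airKlizz/mt5-base-wikinewssum-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

texto = (
    "El ayuntamiento presentó este lunes un nuevo plan de movilidad urbana "
    "que amplía la red de carriles bici y refuerza el transporte público."
)
inputs = tokenizer(texto, return_tensors="pt", truncation=True, max_length=512)
resumen_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(resumen_ids[0], skip_special_tokens=True))
```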
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 528 | 2.3707 | 6.687 | 2.9169 | 5.6793 | 6.2978 |
| No log | 2.0 | 1056 | 2.3140 | 7.9518 | 3.4529 | 6.7265 | 7.4984 |
| No log | 3.0 | 1584 | 2.2848 | 7.9708 | 3.5344 | 6.7272 | 7.534 |
| No log | 4.0 | 2112 | 2.2668 | 8.0252 | 3.5323 | 6.7319 | 7.5819 |
| 3.2944 | 5.0 | 2640 | 2.2532 | 8.0143 | 3.534 | 6.7155 | 7.582 |
| 3.2944 | 6.0 | 3168 | 2.2399 | 7.9525 | 3.4849 | 6.6716 | 7.5155 |
| 3.2944 | 7.0 | 3696 | 2.2376 | 7.9405 | 3.4661 | 6.6559 | 7.5043 |
| 3.2944 | 8.0 | 4224 | 2.2394 | 7.9732 | 3.5041 | 6.6713 | 7.5229 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Palak/xlm-roberta-large_squad | Palak | 2021-12-25T20:19:12Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xlm-roberta-large_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_squad
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad dataset.
- eval_exact_match": 85.96026490066225
- "eval_f1": 92.25000664341768
- "eval_samples": 10918
## Model description
More information needed
## Intended uses & limitations
More information needed
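A minimal extractive-QA sketch, assuming the checkpoint is used as a standard SQuAD-style reader; the question and context are illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Palak/xlm-roberta-large_squad")

context = (
    "The Normans were the people who in the 10th and 11th centuries "
    "gave their name to Normandy, a region in France."
)
print(qa(question="Which region did the Normans give their name to?", context=context))
```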
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.67
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
s3h/finetuned-arabert-head-gec | s3h | 2021-12-25T19:17:45Z | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: s3h/finetuned-arabert-head-gec
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s3h/finetuned-arabert-head-gec
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 16.9313
- Validation Loss: 19.1589
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
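The card gives no usage example; below is a minimal fill-mask sketch, assuming the TensorFlow weights in this repository load with the standard masked-LM classes. The Arabic sentence is only illustrative.
```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

model_id = "s3h/finetuned-arabert-head-gec"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# "Cairo is the ... of Egypt" -- illustrative sentence, masked with the model's mask token.
print(fill(f"القاهرة هي {tokenizer.mask_token} مصر."))
```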
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 16.9313 | 19.1589 | 0 |
### Framework versions
- Transformers 4.14.1
- TensorFlow 2.6.2
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vanadhi/roberta-base-fiqa-flm-sq-flit | vanadhi | 2021-12-25T18:36:54Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-fiqa-flm-sq-flit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fiqa-flm-sq-flit
This model is a fine-tuned version of roberta-base on a custom dataset created for question answering in the financial domain.
## Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion.
The model was further processed as follows for the specific downstream QA task:
1. Pretrained for domain adaptation with a masked language modeling (MLM) objective on the FIQA challenge opinion-based QA dataset, available here: https://drive.google.com/file/d/1BlWaV-qVPfpGyJoWQJU9bXQgWCATgxEP/view
2. Pretrained with the MLM objective on a custom generated dataset for banking and finance.
3. Fine-tuned on the SQuAD v2 dataset for QA task adaptation.
4. Fine-tuned on a custom labeled dataset in SQuAD format for domain and task adaptation.
## Intended uses & limitations
The model is intended to be used for a custom question answering system in the BFSI (banking, financial services, and insurance) domain.
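As a minimal sketch of that intended use (the question and context below are illustrative, not from the training data):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vanadhi/roberta-base-fiqa-flm-sq-flit")

context = (
    "A fixed deposit is a financial instrument provided by banks which gives "
    "investors a higher rate of interest than a regular savings account."
)
print(qa(question="What does a fixed deposit give investors?", context=context))
```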
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|