modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
nvkha/bert-qa-vi | 0833d29c7f224469100e84ef3c5d447736bf5cbc | 2022-02-02T06:22:15.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nvkha | null | nvkha/bert-qa-vi | 0 | null | transformers | 35,800 | Suggest under 1k character |
nyu-mll/roberta-base-1B-1 | 71bca774d4a7399d7da0990a6dbdd2d30642291e | 2021-05-20T19:03:06.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-1B-1 | 0 | null | transformers | 35,801 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). For each pretraining data size we release the 3 models with the lowest validation perplexities out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia with a reproduction of BookCorpus built from Smashwords texts, in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
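These checkpoints load like any other RoBERTa model. As a quick sanity check, the following sketch (not part of the original card) runs one of the 1B-token models through the standard fill-mask pipeline; any model name from the table above can be substituted.
```python
from transformers import pipeline

# Any checkpoint from the table above can be used here.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-1B-1")

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], prediction["score"])
```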
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
obss/mt5-small-3task-prepend-tquad2 | 82ba154657e40f601637403b9e5f8196c604d6fe | 2021-12-03T23:55:18.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"transformers",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | obss | null | obss/mt5-small-3task-prepend-tquad2 | 0 | null | transformers | 35,802 | ---
language: tr
datasets:
- tquad1
- tquad2
- xquad
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
pipeline_tag: text2text-generation
widget:
- text: "answer: film ve TV haklarını context: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un film ve TV haklarını satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi."
example_title: "Question Generation (Movie)"
- text: "answer: bir antlaşma yaparak context: Fatih Sultan Mehmet, Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa son verdi."
example_title: "Question Generation (History)"
- text: "answer: Venedik'le context: Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi."
example_title: "Question Generation (History 2)"
- text: "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>"
example_title: "Answer Extraction (History)"
- text: "question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
example_title: "Answer Extraction (Open Domain)"
license: cc-by-4.0
---
# mt5-small for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-prepend-tquad2', qg_format='prepend')
```
## Citation 📜
```
@article{akyon2021automated,
title={Automated question generation and question answering from Turkish texts using text-to-text transformers},
author={Akyon, Fatih Cagatay and Cavusoglu, Devrim and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
journal={arXiv preprint arXiv:2111.06476},
year={2021}
}
```
## Overview ✔️
**Language model:** mt5-small
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://arxiv.org/abs/2111.06476
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-small"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "prepend"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-prepend-tquad2', qg_format='prepend')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# c) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
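If the `core.api` helper from the project repository is not installed, the checkpoint can also be driven directly with Hugging Face Transformers. The snippet below is only a sketch (it is not part of the original card) and assumes the prepend-style input format shown in the widget examples above.
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "obss/mt5-small-3task-prepend-tquad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# Question generation in the "prepend" format: the target answer is prepended
# to the context, as in the widget examples of this card.
text = (
    "answer: bir antlaşma yaparak context: Fatih Sultan Mehmet, Cenevizlilerin "
    "önemli üslerinden Amasra’yı aldı. 1479’da bir antlaşma yaparak Venedik'le "
    "16 yıllık savaşa son verdi."
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```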
|
odinmay/joebot | 926fc217bc407b6c0d3dbbfb94f8d553443daa3d | 2021-06-05T03:37:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | odinmay | null | odinmay/joebot | 0 | null | transformers | 35,803 | ---
tags:
- conversational
---
# Joebot |
ogpat123/DialoGPT-small-Michael | 49812e513662faf4a04011d6e18a09ebcef36b66 | 2022-02-08T09:03:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ogpat123 | null | ogpat123/DialoGPT-small-Michael | 0 | null | transformers | 35,804 | ---
tags:
- conversational
---
# Michael DialoGPT model |
omnimokha/DialoGPT-medium-jakeamal | 1bd71103481960e563c975845887db0b6361a19f | 2021-09-10T22:42:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | omnimokha | null | omnimokha/DialoGPT-medium-jakeamal | 0 | null | transformers | 35,805 | ---
tags:
- conversational
---
# DialoGPT Jakeamal model |
omnimokha/DialoGPT-small-jakeamal | 4de700bcc1f3cdae2467c110abc93cc239354e62 | 2021-09-10T22:26:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | omnimokha | null | omnimokha/DialoGPT-small-jakeamal | 0 | null | transformers | 35,806 | ---
tags:
- conversational
---
# DialoGPT Jakeamal model |
omnimokha/jakebot2 | a3798565392e411ce22bbc7db81454b68989e864 | 2021-09-12T03:14:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | omnimokha | null | omnimokha/jakebot2 | 0 | null | transformers | 35,807 | ---
tags:
- conversational
---
# DialoGPT Jakeamal model |
omoekan/opus-tatoeba-eng-yor | 1e4d4253b0666205c4b4dfc92ab2c8c10c416dd8 | 2022-02-05T10:15:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | omoekan | null | omoekan/opus-tatoeba-eng-yor | 0 | null | transformers | 35,808 | ## OPUS Tatoeba English-Yoruba
This model was obtained by running the script convert_marian_to_pytorch.py with the flag -m eng-yor. The original models were trained by Jörg Tiedemann using the MarianNMT library. See all available MarianMTModel models on the profile of the Helsinki NLP group.
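Because the weights were converted with `convert_marian_to_pytorch.py`, the checkpoint should load like any other MarianMT model. The snippet below is only a sketch (it is not part of the original card and assumes the converted repository also ships the tokenizer files).
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "omoekan/opus-tatoeba-eng-yor"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# English -> Yoruba
batch = tokenizer(["How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```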
---
- tags: translation
- source language: English
- target language: Yoruba
- dataset: opus+bt
- model: transformer-align
- pre-processing: normalization + SentencePiece (spm12k,spm12k)
- download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.zip)
- test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.test.txt)
- test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.eval.txt)
- benchmarks:

| test set | BLEU | chr-F |
|:---|:---|:---|
| Tatoeba-test.eng-yor | 13.0 | 0.333 |
--- |
openclimatefix/dgmr-generator | 9f24d2a8444a84409a8af349a11fb57a4710aa4d | 2022-02-02T16:54:27.000Z | [
"pytorch"
] | null | false | openclimatefix | null | openclimatefix/dgmr-generator | 0 | null | null | 35,809 | Entry not found |
orendar/distilbert-base-cased-finetuned-conll03-english | 9db71fd18194434b7b1d18b72e56953c2b79f561 | 2021-01-05T11:26:25.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | orendar | null | orendar/distilbert-base-cased-finetuned-conll03-english | 0 | null | transformers | 35,810 | Entry not found |
orendar/en_he_base | ae3c45f8167225ca804fde3898b071ca6b69b6e2 | 2022-05-01T12:11:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | orendar | null | orendar/en_he_base | 0 | null | transformers | 35,811 | Entry not found |
orri/IceBERT-finetuned-ner | f0927451c9da138adb048311c06dde47dc0eb175 | 2021-10-01T15:49:00.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | orri | null | orri/IceBERT-finetuned-ner | 0 | null | transformers | 35,812 | ---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Systurnar Guðrún og Monique átu einar á McDonalds og horfðu á Stöð 2, þar glitti í Bruce Willis leika í Die Hard 2.
model-index:
- name: IceBERT-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: mim_gold_ner
      type: mim_gold_ner
      args: mim-gold-ner
    metrics:
    - name: Precision
      type: precision
      value: 0.89397115028973
    - name: Recall
      type: recall
      value: 0.8664117576771418
    - name: F1
      type: f1
      value: 0.8799757281553399
    - name: Accuracy
      type: accuracy
      value: 0.9854156499755994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0802
- Precision: 0.8940
- Recall: 0.8664
- F1: 0.8800
- Accuracy: 0.9854
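For a quick test, the checkpoint can be loaded with the standard token-classification pipeline. The snippet below is not part of the auto-generated card; it reuses the widget sentence from the metadata above.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="orri/IceBERT-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

sentence = (
    "Systurnar Guðrún og Monique átu einar á McDonalds og horfðu á Stöð 2, "
    "þar glitti í Bruce Willis leika í Die Hard 2."
)
for entity in ner(sentence):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```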
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0528 | 1.0 | 2904 | 0.0779 | 0.8829 | 0.8504 | 0.8663 | 0.9831 |
| 0.0274 | 2.0 | 5808 | 0.0784 | 0.8802 | 0.8585 | 0.8692 | 0.9839 |
| 0.0162 | 3.0 | 8712 | 0.0802 | 0.8940 | 0.8664 | 0.8800 | 0.9854 |
### Framework versions
- Transformers 4.11.1
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
osanseviero/ConvTasNet_Libri1Mix_enhsingle_16k | 4601b9678f3a3eae81f61499718c33dfbe1c3da6 | 2021-09-23T16:16:32.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | osanseviero | null | osanseviero/ConvTasNet_Libri1Mix_enhsingle_16k | 0 | null | null | 35,813 | ---
tags:
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
library_tag: generic
---
## Clone from Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
  n_src: 1
  sample_rate: 16000
  segment: 3
  task: enh_single
  train_dir: data/wav16k/min/train-360
  valid_dir: data/wav16k/min/dev
filterbank:
  kernel_size: 32
  n_filters: 512
  stride: 16
masknet:
  bn_chan: 128
  hid_chan: 512
  mask_act: relu
  n_blocks: 8
  n_repeats: 3
  n_src: 1
  skip_chan: 128
optim:
  lr: 0.001
  optimizer: adam
  weight_decay: 0.0
training:
  batch_size: 6
  early_stop: true
  epochs: 200
  half_lr: true
  num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
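A rough usage sketch (not part of the original card), assuming the Asteroid library's `from_pretrained` interface for Hub checkpoints; the input below is random noise standing in for a real 16 kHz mono recording.
```python
import torch
from asteroid.models import ConvTasNet

# Load the cloned checkpoint from the Hugging Face Hub.
model = ConvTasNet.from_pretrained("osanseviero/ConvTasNet_Libri1Mix_enhsingle_16k")
model.eval()

# Dummy 2-second, 16 kHz mono mixture with shape (batch, time).
mixture = torch.randn(1, 32000)
with torch.no_grad():
    enhanced = model(mixture)  # shape: (batch, n_src=1, time)
print(enhanced.shape)
```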
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
osanseviero/dummy-model2 | b4e4cf150705f20dab52ddb44111b6683e81d34b | 2021-06-30T18:59:53.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | osanseviero | null | osanseviero/dummy-model2 | 0 | null | transformers | 35,814 | Entry not found |
osanseviero/flair-ner-english3 | c556f812228fdf38bc225065dc9ab1164048ed5e | 2021-06-10T10:46:45.000Z | [
"pytorch"
] | null | false | osanseviero | null | osanseviero/flair-ner-english3 | 0 | null | null | 35,815 | Entry not found |
osanseviero/full-sentence-upload-to-hub2 | 8afa3969c5724cf91739e0172ebe104cb774571b | 2021-05-20T19:12:25.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | osanseviero | null | osanseviero/full-sentence-upload-to-hub2 | 0 | null | transformers | 35,816 | Entry not found |
osanseviero/just-a-test2 | bc8e72d9337391e10138fc066e5642a2942fa4f0 | 2022-07-01T06:49:46.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"causal-lm",
"license:cc-by-sa-4.0",
"sentence-similarity"
] | sentence-similarity | false | osanseviero | null | osanseviero/just-a-test2 | 0 | null | sentence-transformers | 35,817 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- causal-lm
license:
- cc-by-sa-4.0
---
# TODO: Name of Model
TODO: Description
## Model Description
TODO: Add relevant content
(0) Base Transformer Type: RobertaModel
(1) Pooling mean
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence"]
model = SentenceTransformer(TODO)
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
# The next step is optional if you want your own pooling function.
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    token_embeddings[input_mask_expanded == 0] = -1e9  # Set padding tokens to large negative value
    max_over_time = torch.max(token_embeddings, 1)[0]
    return max_over_time
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained(TODO)
model = AutoModel.from_pretrained(TODO)
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## TODO: Training Procedure
## TODO: Evaluation Results
## TODO: Citing & Authors
|
osanseviero/upload-to-hub | c41d941f2b9a849c64c18928c667363094454124 | 2021-05-20T19:13:12.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | osanseviero | null | osanseviero/upload-to-hub | 0 | null | transformers | 35,818 | Example card
Second modification |
osunlp/ReasonBERT-TAPAS-base | 65dfad734539724a417c7877f9d2dd2328446f9e | 2021-09-13T05:46:43.000Z | [
"pytorch",
"tapas",
"feature-extraction",
"transformers"
] | feature-extraction | false | osunlp | null | osunlp/ReasonBERT-TAPAS-base | 0 | null | transformers | 35,819 | Entry not found |
owen99630/catexp | b09b21e32a50f9ca27117892a9af6ab67b036ea4 | 2021-09-29T13:27:55.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | owen99630 | null | owen99630/catexp | 0 | null | transformers | 35,820 | Entry not found |
owencubes/DialoGPT-small-Josuke | ff456962d2e21a6fbc457411a5a902df052a8738 | 2021-08-29T21:39:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | owencubes | null | owencubes/DialoGPT-small-Josuke | 0 | null | transformers | 35,821 | ---
tags:
- conversational
---
# Test |
oya163/NepBERT | 48e21e711754350db73b0e4c79f008d92942d7f2 | 2021-05-20T19:14:16.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | oya163 | null | oya163/NepBERT | 0 | null | transformers | 35,822 | Entry not found |
p208p2002/qmst-qgg-qa | 259afd243a3046ae77c116f68b3efe970421f81e | 2021-06-19T05:04:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | p208p2002 | null | p208p2002/qmst-qgg-qa | 0 | null | transformers | 35,823 | Entry not found |
pablouribe/bertstem | ad78bd6fb51e32f68309cb9324fe8983031d1acc | 2021-11-11T18:11:49.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
] | null | false | pablouribe | null | pablouribe/bertstem | 0 | null | transformers | 35,824 | # BERT-STEM
BERT model fine-tuned on Science Technology Engineering and Mathematics (STEM) lessons.
## Install:
To install from pip:
```
pip install bertstem
```
## Quickstart
To encode sentences and get embedding matrix for embedding layers:
```python
from BERT_STEM.BertSTEM import *
bert = BertSTEM()
# Example dataframe with text in spanish
data = {'col_1': [3, 2, 1],
'col_2': ['hola como estan', 'alumnos queridos', 'vamos a hablar de matematicas']}
df = pd.DataFrame.from_dict(data)
# Encode sentences using BertSTEM:
bert._encode_df(df, column='col_2', encoding='sum')
# Get embedding matrix:
embedding_matrix = bert.get_embedding_matrix()
```
To use it from HuggingFace:
```python
from BERT_STEM.Encode import *
import pandas as pd
import transformers
# Download spanish BERTSTEM:
model = transformers.BertModel.from_pretrained("pablouribe/bertstem")
# Download spanish tokenizer:
tokenizer = transformers.BertTokenizerFast.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased",
do_lower_case=True,
add_special_tokens = False)
# Example dataframe with text in spanish
data = {'col_1': [3, 2, 1],
'col_2': ['hola como estan', 'alumnos queridos', 'vamos a hablar de matematicas']}
df = pd.DataFrame.from_dict(data)
# Encode sentences using BertSTEM:
sentence_encoder(df, model, tokenizer, column = 'col_2', encoding = 'sum')
```
|
paladinx00/rh-bender | 38f073a38da43644c6f583641bba93d078db1b65 | 2021-07-17T16:05:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | paladinx00 | null | paladinx00/rh-bender | 0 | null | transformers | 35,825 | ---
tags:
- conversational
---
# GPT |
parhamabedazad/ft-bz | d95ef282f46dff940b31c9eba74975085d76c50a | 2022-01-01T18:53:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | parhamabedazad | null | parhamabedazad/ft-bz | 0 | null | transformers | 35,826 | Entry not found |
parigaswetha/DialoGPT-small-jakeperalta | 1b3ba23a7e99f4a7ca9a10fc25f12acf21aa9966 | 2022-02-08T19:35:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | parigaswetha | null | parigaswetha/DialoGPT-small-jakeperalta | 0 | null | transformers | 35,827 | ---
tags:
- conversational
---
# Jake Peralta DialoGPT Model |
parthsinha/DialoGPT-small-rickandmorty | da630a8461d05f39252e309f64b6978524ae0d24 | 2021-10-04T13:30:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | parthsinha | null | parthsinha/DialoGPT-small-rickandmorty | 0 | null | transformers | 35,828 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
patricklai14/tapt_citation | 065cb2df377138991778a73a1db9ec00fd10dfc4 | 2021-05-20T19:15:14.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | patricklai14 | null | patricklai14/tapt_citation | 0 | null | transformers | 35,829 | Entry not found |
patrickvonplaten/data2vec-base | 650cb56bf0ba309ab4514b79700fc51c7135721b | 2022-04-18T16:29:03.000Z | [
"pytorch",
"data2vec-audio",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | patrickvonplaten | null | patrickvonplaten/data2vec-base | 0 | null | transformers | 35,830 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Base
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
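Even without fine-tuning, the checkpoint can be used to extract latent speech representations. The snippet below is only a sketch (it is not from the original card) and feeds a random one-second 16 kHz waveform through the model.
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("patrickvonplaten/data2vec-base")
model.eval()

# One second of dummy 16 kHz audio, shape (batch, samples).
waveform = torch.randn(1, 16000)
with torch.no_grad():
    hidden_states = model(waveform).last_hidden_state
print(hidden_states.shape)  # (1, frames, hidden_size)
```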
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
patrickvonplaten/dummy_to_del | 2ffdceee0753c3b31882e897d1e477b9681d28cf | 2021-05-26T11:23:40.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | patrickvonplaten | null | patrickvonplaten/dummy_to_del | 0 | null | transformers | 35,831 | Entry not found |
patrickvonplaten/dummy_wav2vec2_with_adapter | f4d9ec9942b629768f2a380247b1c8a58e3931e8 | 2022-02-02T11:06:17.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | patrickvonplaten | null | patrickvonplaten/dummy_wav2vec2_with_adapter | 0 | null | transformers | 35,832 | Entry not found |
patrickvonplaten/sew-mid-100k-librispeech-clean-100h-ft | 20e4af30fb69f62c2eb7f634afd100956f3ecedc | 2021-12-20T12:53:26.000Z | [
"pytorch",
"tensorboard",
"sew",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sew-mid-100k-librispeech-clean-100h-ft | 0 | null | transformers | 35,833 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: sew-mid-100k-librispeech-clean-100h-ft
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-mid-100k-librispeech-clean-100h-ft
This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1976
- Wer: 0.1665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
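The list above maps fairly directly onto `transformers.TrainingArguments`. The sketch below is an approximation rather than the exact training script; the run was launched on 8 GPUs, which is what yields the total batch sizes of 32/64, and `output_dir` is arbitrary.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sew-mid-100k-librispeech-clean-100h-ft",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=500,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
)
```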
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4274 | 0.11 | 100 | 4.1419 | 1.0 |
| 2.9657 | 0.22 | 200 | 3.1203 | 1.0 |
| 2.9069 | 0.34 | 300 | 3.0107 | 1.0 |
| 2.8666 | 0.45 | 400 | 2.8960 | 1.0 |
| 1.4535 | 0.56 | 500 | 1.4062 | 0.8664 |
| 0.6821 | 0.67 | 600 | 0.5530 | 0.4930 |
| 0.4827 | 0.78 | 700 | 0.4122 | 0.3630 |
| 0.4485 | 0.9 | 800 | 0.3597 | 0.3243 |
| 0.2666 | 1.01 | 900 | 0.3104 | 0.2790 |
| 0.2378 | 1.12 | 1000 | 0.2913 | 0.2613 |
| 0.2516 | 1.23 | 1100 | 0.2702 | 0.2452 |
| 0.2456 | 1.35 | 1200 | 0.2619 | 0.2338 |
| 0.2392 | 1.46 | 1300 | 0.2466 | 0.2195 |
| 0.2117 | 1.57 | 1400 | 0.2379 | 0.2092 |
| 0.1837 | 1.68 | 1500 | 0.2295 | 0.2029 |
| 0.1757 | 1.79 | 1600 | 0.2240 | 0.1949 |
| 0.1626 | 1.91 | 1700 | 0.2195 | 0.1927 |
| 0.168 | 2.02 | 1800 | 0.2137 | 0.1853 |
| 0.168 | 2.13 | 1900 | 0.2123 | 0.1839 |
| 0.1576 | 2.24 | 2000 | 0.2095 | 0.1803 |
| 0.1756 | 2.35 | 2100 | 0.2075 | 0.1776 |
| 0.1467 | 2.47 | 2200 | 0.2049 | 0.1754 |
| 0.1702 | 2.58 | 2300 | 0.2013 | 0.1722 |
| 0.177 | 2.69 | 2400 | 0.1993 | 0.1701 |
| 0.1417 | 2.8 | 2500 | 0.1983 | 0.1688 |
| 0.1302 | 2.91 | 2600 | 0.1977 | 0.1678 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-base-100h-13K-steps | 408690c6146563a72807cb77026e57b5e8cc8839 | 2021-03-03T13:11:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-100h-13K-steps | 0 | null | transformers | 35,834 | Fine-tuning of `wav2vec2-base` on 100h of Librispeech training data. Results on "clean" data are very similar to the ones of the [official model](https://huggingface.co/facebook/wav2vec2-base-100h). However, the result on "other" is significantly worse - the model seems to have overfitting to the "clean" data.
The model was trained on *librispeech-clean-train.100* with the following hyper-parameters:
- 2 GPUs Titan RTX
- Total update steps 13000
- Batch size per GPU: 32, corresponding to a *total batch size* of ca. 1500 seconds of audio
- Adam with linear decaying learning rate with 3000 warmup steps
- dynamic grouping for batch
- fp16
- attention_mask was **not** used during training
Check: https://wandb.ai/patrickvonplaten/huggingface/reports/Project-Dashboard--Vmlldzo1MDI2MTU?accessToken=69z0mrkoxs1msgh71p4nntr9shi6mll8rhtbo6c56yynygw0scp11d8z9o1xd0uk
*Result (WER)* on Librispeech test:
| "clean" | "other" |
|---|---|
| 6.5 | 18.7 | |
patrickvonplaten/wav2vec2-base-random | 74835ba563e79a9e7743e7ad36473159e6a644da | 2021-10-22T15:56:55.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-random | 0 | null | transformers | 35,835 | Entry not found |
patrickvonplaten/wav2vec2-common_voice-tamil | d1b7543d186217b9f745efc9b329cd8df486b0c0 | 2022-02-01T14:17:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-common_voice-tamil | 0 | null | transformers | 35,836 | ---
language:
- ta
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tamil
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tamil
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1172
- Wer: 1.0070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.84 | 100 | 4.0148 | 1.0 |
| No log | 1.69 | 200 | 3.1738 | 1.0 |
| No log | 2.54 | 300 | 2.5980 | 1.0236 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-129-turkish-colab | d1400d953f36cc08e629dc6f3c1df16292af10cf | 2021-10-27T17:08:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-xlsr-129-turkish-colab | 0 | null | transformers | 35,837 | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-129-turkish-colab
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-129-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-129](https://huggingface.co/facebook/wav2vec2-large-xlsr-129) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Wer: 0.4748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.4837 | 3.67 | 400 | 3.2526 | 1.0 |
| 3.0896 | 7.34 | 800 | 2.8037 | 1.0 |
| 1.5604 | 11.01 | 1200 | 0.5688 | 0.6613 |
| 0.6511 | 14.68 | 1600 | 0.3998 | 0.5580 |
| 0.4798 | 18.35 | 2000 | 0.3505 | 0.5118 |
| 0.4047 | 22.02 | 2400 | 0.3273 | 0.4858 |
| 0.3519 | 25.69 | 2800 | 0.3224 | 0.4796 |
| 0.343 | 29.36 | 3200 | 0.3149 | 0.4748 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab | 38e6fca7875aae416436220a6f167cfad2f8fcfb | 2021-10-19T17:18:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab | 0 | null | transformers | 35,838 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-turkish-demo-colab
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-turkish-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4055
- Wer: 0.4800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0179 | 4.21 | 400 | 1.4935 | 1.0249 |
| 0.7075 | 8.42 | 800 | 0.4546 | 0.6071 |
| 0.3072 | 12.63 | 1200 | 0.3947 | 0.5401 |
| 0.2145 | 16.84 | 1600 | 0.4049 | 0.5194 |
| 0.1647 | 21.05 | 2000 | 0.4199 | 0.5003 |
| 0.1338 | 25.26 | 2400 | 0.4144 | 0.4859 |
| 0.116 | 29.47 | 2800 | 0.4055 | 0.4800 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xls-r-100m-common_voice-tr-ft | eb295d91e297ece394b9184c4614d60b47e5aab7 | 2021-11-14T16:43:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-xls-r-100m-common_voice-tr-ft | 0 | null | transformers | 35,839 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-100m-common_voice-tr-ft
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-100m-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-100m](https://huggingface.co/facebook/wav2vec2-xls-r-100m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4113
- Wer: 1.0
- Cer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 3.1315 | 9.09 | 500 | 3.3832 | 1.0 | 1.0 |
| 3.1163 | 18.18 | 1000 | 3.4252 | 1.0 | 1.0 |
| 3.121 | 27.27 | 1500 | 3.4051 | 1.0 | 1.0 |
| 3.1273 | 36.36 | 2000 | 3.4345 | 1.0 | 1.0 |
| 3.2257 | 45.45 | 2500 | 3.4097 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2_tiny_random | 24bc33e6e5d824bef7eafd205bb0a70dcffec750 | 2021-07-05T13:53:54.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | patrickvonplaten | null | patrickvonplaten/wav2vec2_tiny_random | 0 | null | transformers | 35,840 | ## Test model
To test this model run the following code:
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC
import torchaudio
import torch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2_tiny_random")
def load_audio(batch):
batch["samples"], _ = torchaudio.load(batch["file"])
return batch
ds = ds.map(load_audio)
input_values = torch.nn.utils.rnn.pad_sequence([torch.tensor(x[0]) for x in ds["samples"][:10]], batch_first=True)
# forward
logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
# dummy loss
dummy_labels = pred_ids.clone()
dummy_labels[dummy_labels == model.config.pad_token_id] = 1 # can't have CTC blank token in label
dummy_labels = dummy_labels[:, -(dummy_labels.shape[1] // 4):] # make sure labels are shorter to avoid "inf" loss (can still happen though...)
loss = model(input_values, labels=dummy_labels).loss
```
|
patrickvonplaten/xls-r-300m-sv-cv8 | 56c65864b2cfdc76946c1251aafb740e5138c908 | 2022-03-24T11:54:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sv",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/xls-r-300m-sv-cv8 | 0 | null | transformers | 35,841 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- sv
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Swedish - CV8 - v2
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: sv-SE
    metrics:
    - name: Test WER
      type: wer
      value: 17.33
    - name: Test CER
      type: cer
      value: 5.8
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: sv
    metrics:
    - name: Test WER
      type: wer
      value: 27.01
    - name: Test CER
      type: cer
      value: 12.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2779
- Wer: 0.2525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3224 | 1.37 | 500 | 3.3354 | 1.0 |
| 2.9318 | 2.74 | 1000 | 2.9361 | 1.0000 |
| 2.1371 | 4.11 | 1500 | 1.1157 | 0.8359 |
| 1.6883 | 5.48 | 2000 | 0.6003 | 0.6314 |
| 1.5812 | 6.85 | 2500 | 0.4746 | 0.4725 |
| 1.5145 | 8.22 | 3000 | 0.4376 | 0.4736 |
| 1.4763 | 9.59 | 3500 | 0.4006 | 0.3863 |
| 1.4215 | 10.96 | 4000 | 0.3783 | 0.3629 |
| 1.3638 | 12.33 | 4500 | 0.3555 | 0.3425 |
| 1.3561 | 13.7 | 5000 | 0.3340 | 0.3228 |
| 1.3406 | 15.07 | 5500 | 0.3373 | 0.3295 |
| 1.3055 | 16.44 | 6000 | 0.3432 | 0.3210 |
| 1.3048 | 17.81 | 6500 | 0.3282 | 0.3118 |
| 1.2863 | 19.18 | 7000 | 0.3226 | 0.3018 |
| 1.2389 | 20.55 | 7500 | 0.3050 | 0.2986 |
| 1.2361 | 21.92 | 8000 | 0.3048 | 0.2980 |
| 1.2263 | 23.29 | 8500 | 0.3011 | 0.2977 |
| 1.2225 | 24.66 | 9000 | 0.3017 | 0.2959 |
| 1.2044 | 26.03 | 9500 | 0.2977 | 0.2782 |
| 1.2017 | 27.4 | 10000 | 0.2966 | 0.2781 |
| 1.1912 | 28.77 | 10500 | 0.2999 | 0.2786 |
| 1.1658 | 30.14 | 11000 | 0.2991 | 0.2757 |
| 1.148 | 31.51 | 11500 | 0.2915 | 0.2684 |
| 1.1423 | 32.88 | 12000 | 0.2913 | 0.2643 |
| 1.123 | 34.25 | 12500 | 0.2777 | 0.2630 |
| 1.1297 | 35.62 | 13000 | 0.2873 | 0.2646 |
| 1.0987 | 36.98 | 13500 | 0.2829 | 0.2619 |
| 1.0873 | 38.36 | 14000 | 0.2864 | 0.2608 |
| 1.0848 | 39.73 | 14500 | 0.2827 | 0.2577 |
| 1.0628 | 41.1 | 15000 | 0.2896 | 0.2581 |
| 1.0815 | 42.47 | 15500 | 0.2814 | 0.2561 |
| 1.0587 | 43.83 | 16000 | 0.2738 | 0.2542 |
| 1.0709 | 45.21 | 16500 | 0.2785 | 0.2578 |
| 1.0512 | 46.57 | 17000 | 0.2793 | 0.2539 |
| 1.0396 | 47.94 | 17500 | 0.2788 | 0.2525 |
| 1.0481 | 49.31 | 18000 | 0.2777 | 0.2534 |
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id patrickvonplaten/xls-r-300m-sv-cv8 --dataset mozilla-foundation/common_voice_8_0 --config sv-SE --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id patrickvonplaten/xls-r-300m-sv-cv8 --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.10.3
|
peggyhuang/SciBERT-CoQA | 85d08d1c348b58419270c02fd1a3e99c5a9083a0 | 2021-11-27T11:43:10.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/SciBERT-CoQA | 0 | null | transformers | 35,842 | Entry not found |
peggyhuang/finetune-SciBert-v2 | 17f2645388f80da94d810ecf01531b219264b473 | 2022-01-17T07:12:34.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/finetune-SciBert-v2 | 0 | null | transformers | 35,843 | Entry not found |
peggyhuang/finetune-bert-base-v1 | 7c916e2453c2a64bcdb93d44d6e8c747e95560db | 2021-12-13T04:11:11.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/finetune-bert-base-v1 | 0 | null | transformers | 35,844 | Entry not found |
peggyhuang/finetune-bert-base-v2 | b00820a6a54ca72239f6747200110dfb0589d305 | 2022-01-17T07:21:17.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/finetune-bert-base-v2 | 0 | null | transformers | 35,845 | Entry not found |
peggyhuang/nolog-SciBert-v2 | 084b8c5e05a69e97fbe3bee2b4603f0141dab315 | 2022-01-17T07:33:18.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/nolog-SciBert-v2 | 0 | null | transformers | 35,846 | Entry not found |
peixian/bridge-scribe | 8d48f8039ac60e6de51d6d0804b5592dde7bfa2a | 2021-06-20T17:05:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | peixian | null | peixian/bridge-scribe | 0 | null | transformers | 35,847 | Entry not found |
pere/nb-nn-dev2 | 06f299eac8157b83e64029861c48723782abfd82 | 2021-09-23T16:19:18.000Z | [
"pytorch",
"jax",
"no",
"dataset:oscar",
"translation",
"license:cc-by-4.0"
] | translation | false | pere | null | pere/nb-nn-dev2 | 0 | null | null | 35,848 | ---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# Norwegian T5 - Translation Bokmål Nynorsk - Development
## Description
This is the development version of the Bokmål-Nynorsk translator. If you want something stable, please use [this version](https://huggingface.co/pere/nb-nn-translation/) instead.
Here is an example of how to use the model from Python
```python
# Import libraries
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev',from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev')
#Encode the text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)
#Decode and print the result
print(tokenizer.decode(outputs[0]))
```
Or if you like to use the pipeline instead
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-dev')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
|
pere/nb-roberta-base-scandinavian-long | b6fe1e0dcce11016f027454f9ec730e56e55cd12 | 2021-11-25T18:21:53.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pere | null | pere/nb-roberta-base-scandinavian-long | 0 | null | transformers | 35,849 | # This is just a Test Model. Do NOT use for anything!
Continued pretraining from nb-roberta-base.
The domain-specific pretraining is done on the 102 GB [Scandinavian corpus](https://huggingface.co/datasets/NbAiLab/scandinavian).
## Train for 180k steps for 128 sequences:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="128" \
--weight_decay="0.01" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="6e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="180000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="10000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
## Train for 20k steps for 512 sequences:
```bash
./run_mlm_flax_stream.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--model_name_or_path="./" \
--dataset_name="NbAiLab/scandinavian" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="48" \
--per_device_eval_batch_size="48" \
--learning_rate="3e-5" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_steps="20000" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="20000" \
--save_steps="10000" \
--eval_steps="10000" \
--preprocessing_num_workers 96 \
--auth_token True \
--adafactor \
--push_to_hub
```
Approximate additional training time: 1 week.
|
peterhsu/distilbert-base-uncased-finetuned-imdb-accelerate | ca8b7a194e288b721a6f7b6aa8d19444f85a1bba | 2022-02-15T13:46:25.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | peterhsu | null | peterhsu/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 35,850 | Entry not found |
pewriebontal/DialoGPT-medium-Pewpewbon | 07ea6ed54b2c5d7ddffbcd98dbf3432ab2023fb2 | 2021-06-13T11:46:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pewriebontal | null | pewriebontal/DialoGPT-medium-Pewpewbon | 0 | null | transformers | 35,851 | ---
tags:
- conversational
---
# My Awesome Model |
phantom-deluxe/dialoGPT-RickBot | 335e81b22c14a8cfa58d729d8d1ff530a1b6db69 | 2021-09-18T04:05:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | phantom-deluxe | null | phantom-deluxe/dialoGPT-RickBot | 0 | null | transformers | 35,852 | ---
tags:
- conversational
---
# Rick Style dialoGPT Model |
phantom-deluxe/dialoGPT-harry | 4b860a78c0d8149a3a17e9a6d6cbc3506a4d7d08 | 2021-09-16T13:29:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | phantom-deluxe | null | phantom-deluxe/dialoGPT-harry | 0 | null | transformers | 35,853 | ---
tags:
- conversational
---
# Harry Style dialoGPT Model |
philschmid/pt-test | 992fc3657d646769ba4407239fe1c6a8588e0bac | 2022-01-24T07:46:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | philschmid | null | philschmid/pt-test | 0 | null | transformers | 35,854 | Entry not found |
phongdtd/fb-vindata-vi-large | fa1b25704aa9011973452ab134c7785c2afff544 | 2022-02-24T10:24:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | phongdtd | null | phongdtd/fb-vindata-vi-large | 0 | null | transformers | 35,855 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
model-index:
- name: fb-vindata-vi-large
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-vindata-vi-large
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the PHONGDTD/VINDATAVLSP - NA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
phongdtd/fb-youtube-vi-large | 51b2b513289287aa355154191855bc0b4cbbd193 | 2022-02-23T13:56:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"phongdtd/youtube_casual_audio",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | phongdtd | null | phongdtd/fb-youtube-vi-large | 0 | null | transformers | 35,856 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- phongdtd/youtube_casual_audio
- generated_from_trainer
model-index:
- name: fb-youtube-vi-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-youtube-vi-large
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the PHONGDTD/YOUTUBE_CASUAL_AUDIO - NA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
phozon/harry-potter-medium | 39aebf959c64be5700c18b469e3a7cdc768847ca | 2021-06-22T20:23:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | phozon | null | phozon/harry-potter-medium | 0 | null | transformers | 35,857 | ---
tags:
- conversational
---
# My Awesome Model |
pitehu/T5_NER_CONLL_LIST | 932fad7362502e0e399a1b5995fff791619b4a78 | 2022-01-20T14:32:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:wmt19",
"transformers",
"Named Entity Recognition",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | pitehu | null | pitehu/T5_NER_CONLL_LIST | 0 | null | transformers | 35,858 | ---
language:
- en
tags:
- Named Entity Recognition
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
- sacrebleu
inference:
parameters:
max_length: 1024
---
|
pixyz/distilbert-base-uncased-finetuned-squad | 6a028ec273761e49de82188ba02d51156ee1d5c0 | 2021-11-20T14:49:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | pixyz | null | pixyz/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 35,859 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1586
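As a rough illustration of intended use, here is a hedged question-answering `pipeline` sketch (the model id comes from this card; the question and context strings are made up):
```python
from transformers import pipeline

# load the SQuAD fine-tuned checkpoint
qa = pipeline("question-answering", model="pixyz/distilbert-base-uncased-finetuned-squad")

# illustrative query only; replace with your own question/context
result = qa(
    question="How many epochs was the model trained for?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], result["score"])
```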
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2203 | 1.0 | 5533 | 1.1569 |
| 0.9452 | 2.0 | 11066 | 1.1234 |
| 0.7656 | 3.0 | 16599 | 1.1586 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
piyushdubey/DialoGPT-Mi | 4986fa86aad9d8f1c332740d01e65fc3a44b75a3 | 2021-09-20T21:19:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | piyushdubey | null | piyushdubey/DialoGPT-Mi | 0 | null | transformers | 35,860 | ---
tags:
- conversational
---
# Sheldon GPT Model |
porpaul/t5-small-finetuned-xsum | e72518929ceb01914cb330eee68560bc1e0b07a7 | 2022-01-16T06:59:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | porpaul | null | porpaul/t5-small-finetuned-xsum | 0 | null | transformers | 35,861 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: chinese_traditional
metrics:
- name: Rouge1
type: rouge
value: 0.5217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2188
- Rouge1: 0.5217
- Rouge2: 0.0464
- Rougel: 0.527
- Rougelsum: 0.5215
- Gen Len: 6.7441
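As a rough usage sketch (the model id comes from this card; the placeholder article text and generation lengths are assumptions):
```python
from transformers import pipeline

# load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="porpaul/t5-small-finetuned-xsum")

article = "Replace this placeholder with the document you want to summarize."
summary = summarizer(article, max_length=32, min_length=4, do_sample=False)
print(summary[0]["summary_text"])
```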
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.3831 | 1.0 | 7475 | 1.2188 | 0.5217 | 0.0464 | 0.527 | 0.5215 | 6.7441 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ppletscher/dummy | c320c820f57922d78ae76662bbb33727558c4115 | 2021-07-16T09:21:12.000Z | [
"pytorch",
"camembert",
"fill-mask",
"fr",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ppletscher | null | ppletscher/dummy | 0 | null | transformers | 35,862 | ---
language: fr
---
# Foo
Bar
|
prajjwal1/ctrl_discovery_4 | 85ef10cdb41e591a2c4b91d19ca261c63704d0da | 2021-03-19T20:28:51.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_4 | 0 | null | transformers | 35,863 | Entry not found |
prajjwal1/ctrl_discovery_5 | d95b0591ac168844ab4cc45dd420210af1ed1f96 | 2021-03-23T02:54:01.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_5 | 0 | null | transformers | 35,864 | Entry not found |
prajjwal1/ctrl_discovery_flipped_6 | 946f972d415186dbc43c678e3c74a2bc168d3b41 | 2021-06-06T19:32:48.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_flipped_6 | 0 | null | transformers | 35,865 | Entry not found |
prajwalcr/poetry-disgust_gpt2 | b7ad45c952f63332fd980286636a6c852572e82d | 2021-05-29T18:47:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-disgust_gpt2 | 0 | null | transformers | 35,866 | Entry not found |
prajwalcr/poetry-fear_gpt2 | c8b1aa0dd98c9c78e9559ffa19ef2c000cacfcca | 2021-05-29T19:35:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-fear_gpt2 | 0 | null | transformers | 35,867 | Entry not found |
prajwalcr/poetry-sadness_gpt2 | 6749594078dea7d952fd0eb867dee11eb86660e6 | 2021-08-03T11:37:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-sadness_gpt2 | 0 | null | transformers | 35,868 | Entry not found |
prajwalcr/poetry_gpt2 | 0f04d6c001ec0235368ccbd3bd434f5dda035ada | 2021-05-29T08:37:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry_gpt2 | 0 | null | transformers | 35,869 | Entry not found |
pranavtharoor/test | f637758376b1627de457b23d00be26294f65daa9 | 2021-09-10T22:13:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pranavtharoor | null | pranavtharoor/test | 0 | null | transformers | 35,870 | ---
tags:
- conversational
---
# Test Model
|
princeton-nlp/datamux-qnli-40 | 9fb4c2634aa25f989c228274e65867919e85e564 | 2022-02-16T17:01:46.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qnli-40 | 0 | null | transformers | 35,871 | Entry not found |
princeton-nlp/datamux-qqp-2 | a0f1566b497b6d2f5a9589f437a5676acf6fc0f2 | 2022-02-16T17:02:26.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qqp-2 | 0 | null | transformers | 35,872 | Entry not found |
princeton-nlp/datamux-qqp-20 | 41422d2a83b1879334c2070ed67d753cf6289330 | 2022-02-16T17:05:39.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qqp-20 | 0 | null | transformers | 35,873 | Entry not found |
princeton-nlp/datamux-qqp-5 | b0af020f40fcb327d982972a7dcd68a13fba0c8c | 2022-02-16T17:03:38.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qqp-5 | 0 | null | transformers | 35,874 | Entry not found |
princeton-nlp/datamux-sst2-5 | 4cc7baeac894e53decaf1f1dff21840ea4573614 | 2022-02-16T17:08:25.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-sst2-5 | 0 | null | transformers | 35,875 | Entry not found |
princeton-nlp/densephrases-multi-query-kilt-multi | 24e67af2578a03aac5c223c5d771b4621b131564 | 2021-09-23T18:54:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-kilt-multi | 0 | null | transformers | 35,876 | Entry not found |
princeton-nlp/densephrases-multi-query-wow | 72b86e6d511273ea983184e6f0c4b47760ca6ffa | 2021-09-23T18:48:26.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-wow | 0 | null | transformers | 35,877 | Entry not found |
prithivida/bertscrnn-probwordnoise | b99c0e14f89b755ebcfbf7c1e255e7f1722c918b | 2021-12-06T05:57:30.000Z | [
"pytorch",
"en",
"BERT",
"RNN",
"license:mit"
] | null | false | prithivida | null | prithivida/bertscrnn-probwordnoise | 0 | null | null | 35,878 | ---
language:
- en
tags:
- BERT
- RNN
license: "MIT"
---
# NeuSpell: A Neural Spelling Correction Toolkit
This model checkpoint belongs to the original NeuSpell Python library and has been ported to the Hugging Face Hub for use in the NeuSpell-Demo Spaces.
- [Fork of the library (with HF Hub support) on GitHub](https://github.com/PrithivirajDamodaran/neuspell)
- [Original library on GitHub](https://github.com/neuspell/neuspell)
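A minimal correction sketch based on the upstream NeuSpell README (the `BertChecker` class and calls below are taken from that project's documentation and are assumptions here; the exact checker class matching this `bertscrnn-probwordnoise` checkpoint may differ):
```python
# assumes `pip install neuspell` and downloaded checkpoints, per the NeuSpell README
from neuspell import BertChecker

checker = BertChecker()
checker.from_pretrained()  # loads the corresponding pretrained checkpoint
print(checker.correct("I luk foward to receving your reply"))
```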
|
prithivida/cnn-lstm-probwordnoise | f24fcebe64f3f17f7e328b331fa124ad48e89407 | 2021-12-06T05:56:30.000Z | [
"pytorch",
"en",
"CNN",
"LSTM",
"license:mit"
] | null | false | prithivida | null | prithivida/cnn-lstm-probwordnoise | 0 | null | null | 35,879 | ---
language:
- en
tags:
- CNN
- LSTM
license: "MIT"
---
# NeuSpell: A Neural Spelling Correction Toolkit
This model checkpoint belongs to the original NeuSpell Python library and has been ported to the Hugging Face Hub for use in the NeuSpell-Demo Spaces.
- [Fork of the library (with HF Hub support) on GitHub](https://github.com/PrithivirajDamodaran/neuspell)
- [Original library on GitHub](https://github.com/neuspell/neuspell)
|
prithivida/elmoscrnn-probwordnoise | 2262a7ba8b2b473ee305be3723cbf7982f12ae1a | 2021-12-06T05:54:22.000Z | [
"pytorch",
"en",
"ELMo",
"RNN",
"license:mit"
] | null | false | prithivida | null | prithivida/elmoscrnn-probwordnoise | 0 | null | null | 35,880 | ---
language:
- en
tags:
- ELMo
- RNN
license: "MIT"
---
# NeuSpell: A Neural Spelling Correction Toolkit
This model checkpoint belongs to the original NeuSpell Python library and has been ported to the Hugging Face Hub for use in the NeuSpell-Demo Spaces.
- [Fork of the library (with HF Hub support) on GitHub](https://github.com/PrithivirajDamodaran/neuspell)
- [Original library on GitHub](https://github.com/neuspell/neuspell)
|
pritoms/distilgpt2-YTTranscriptTrial2 | d1896b09075d829c3d1e65ea4b9c23b65027a401 | 2022-02-03T04:46:19.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/distilgpt2-YTTranscriptTrial2 | 0 | null | transformers | 35,881 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-YTTranscriptTrial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-YTTranscriptTrial2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 6.0027 |
| No log | 2.0 | 140 | 5.9072 |
| No log | 3.0 | 210 | 5.8738 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pritoms/distilgpt2-finetuned-irll2 | f32737ce8fb7668510c0a332e828c37474077434 | 2021-09-25T11:34:01.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/distilgpt2-finetuned-irll2 | 0 | null | transformers | 35,882 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: distilgpt2-finetuned-irll2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-irll2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 12 | 4.2919 |
| No log | 2.0 | 24 | 4.2158 |
| No log | 3.0 | 36 | 4.1925 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
pritoms/distilgpt2-finetuned-mit-lecture | f9bc82b421a4a85205fe05bc1dcfe1e4558e226d | 2021-10-21T08:59:34.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/distilgpt2-finetuned-mit-lecture | 0 | null | transformers | 35,883 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-mit-lecture
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-mit-lecture
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 144 | 3.8737 |
| No log | 2.0 | 288 | 3.8436 |
| No log | 3.0 | 432 | 3.8377 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
pritoms/distilgpt2-finetuned-pgt | fac0f561f0c05ca95187863fb3cbdf217eba41a6 | 2021-09-04T11:16:01.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/distilgpt2-finetuned-pgt | 0 | null | transformers | 35,884 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: distilgpt2-finetuned-pgt
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-pgt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 31 | 5.0513 |
| No log | 2.0 | 62 | 5.0175 |
| No log | 3.0 | 93 | 5.0132 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
pritoms/distilgpt2-finetuned-wikitext2 | 5fa01312bd0e3c77f4386831980ef8c1298ef79d | 2021-10-21T21:16:24.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/distilgpt2-finetuned-wikitext2 | 0 | null | transformers | 35,885 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 130 | 3.1733 |
| No log | 2.0 | 260 | 3.0756 |
| No log | 3.0 | 390 | 3.0540 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
pritoms/gpt2-group2 | 4ec633d272595e01b0c6a43de09e5787d3b3fda6 | 2022-02-21T23:03:28.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | pritoms | null | pritoms/gpt2-group2 | 0 | null | transformers | 35,886 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-group2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-group2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 3.7517 |
| No log | 2.0 | 12 | 3.6951 |
| No log | 3.0 | 18 | 3.6769 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
promisemee/odqa-roberta-large | ba69bb30708f76d1a8c4230587951b05da7b33cf | 2021-12-15T14:47:15.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | promisemee | null | promisemee/odqa-roberta-large | 0 | null | transformers | 35,887 | Entry not found |
prophetikai/gpt-code | 326fe357e67d55f67d99ad722ab85a0169fe62d6 | 2021-08-11T22:24:38.000Z | [
"pytorch",
"tf",
"keras",
"gpt2",
"text-generation"
] | text-generation | false | prophetikai | null | prophetikai/gpt-code | 0 | null | keras | 35,888 | TODO
gpt-code uses the weights and tokenizer of https://huggingface.co/Sentdex/GPyT as a starting point for pretraining |
prows12/wav2vec2-base-timit-demo-test_jong | e685aa56d3b6e25acebc6dc37f8eb661ab656a32 | 2021-10-23T13:10:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | prows12 | null | prows12/wav2vec2-base-timit-demo-test_jong | 0 | null | transformers | 35,889 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-test_jong
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-test_jong
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
proxyht/mdsister-news-100 | df1c1eb45687a598eb291e85cb92925fe969786c | 2021-07-14T11:48:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | proxyht | null | proxyht/mdsister-news-100 | 0 | null | transformers | 35,890 | Entry not found |
proycon/robbert2-pos-cased-deepfrog-nld | bf03279dbf37190f034123fca95f0b7f88458150 | 2021-05-20T19:45:16.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | proycon | null | proycon/robbert2-pos-cased-deepfrog-nld | 0 | null | transformers | 35,891 | Entry not found |
ps2102/DialoGPT-small-harrypotter | f3e62eab0de21d335ac9d61fb6751ad305b204b1 | 2022-01-24T05:27:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ps2102 | null | ps2102/DialoGPT-small-harrypotter | 0 | null | transformers | 35,892 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
pszemraj/Ballpark-Trivia-L | b1e14d7509cd62b11f97290c7eb28ae79ea3ed9a | 2022-01-18T23:32:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt",
"license:mit"
] | text-generation | false | pszemraj | null | pszemraj/Ballpark-Trivia-L | 0 | null | transformers | 35,893 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "how many ping-pong balls fit inside a standard 747 jet aeroplane?\nperson beta:\n\n"
example_title: "ping-pong"
- text: "What is the capital of Uganda?\nperson beta:\n\n"
example_title: "geography"
- text: "What is the most popular TV show of all time?\nperson beta:\n\n"
example_title: "pseudo-culture"
- text: "A man pushes his car to a hotel and tells the owner he’s bankrupt. Why?\nperson beta:\n\n"
example_title: "brain teaser"
inference:
parameters:
min_length: 2
max_length: 32
no_repeat_ngram_size: 2
do_sample: True
top_p: 0.90
top_k: 10
repetition_penalty: 2.1
---
# Ballpark Trivia: Size L
Are you frequently asked google-able trivia questions and annoyed by it? Well, this is the model for you! Ballpark Trivia Bot answers any trivia question with something that sounds plausible but is probably not 100% correct. One might say... the answers are in the right ballpark. Check out a demo of it [here](https://huggingface.co/spaces/pszemraj/ballpark-trivia).
```
how many varieties of eggplant are there?
person beta:
about 4,000
```
## Training
This text-generation model is a GPT-2 774M-parameter Size L model, first trained on [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps (34/36 layers frozen for the fine-tuning), and then trained for another 40k steps on a parsed variant of [Natural Questions](https://ai.google.com/research/NaturalQuestions) (**also** with 34/36 layers frozen) to accidentally create this model.
Note that because the model was originally trained for use in a [chatbot application](https://github.com/pszemraj/ai-msgbot), it uses a named conversation dialogue structure, _i.e., the questions are asked by person alpha and responded to by person beta_. Even if you don't specify person alpha, it should hopefully respond to any question.
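A minimal generation sketch with the Hugging Face `pipeline` API (the prompt format follows this card; the decoding settings mirror the inference parameters above, except `max_length`, which is raised here as an assumption to leave room for the prompt):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/Ballpark-Trivia-L")

# questions are asked by person alpha and answered by "person beta"
prompt = "What is the capital of Uganda?\nperson beta:\n\n"
out = generator(
    prompt,
    max_length=64,  # includes the prompt tokens
    no_repeat_ngram_size=2,
    do_sample=True,
    top_p=0.90,
    top_k=10,
    repetition_penalty=2.1,
)
print(out[0]["generated_text"])
```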
## Example Prompt
- The default examples are not great.
- You can type in any trivia question, or delete the example and just write `what` or `when`, and it will generate the rest of the trivia question **and the answer**!
```
where is the tv show the arrow filmed
person beta:
Vancouver, British Columbia
``` |
pszemraj/Ballpark-Trivia-M | cf25fc2fd9e87c3863b2edc70572ec8c17238b3e | 2022-01-18T23:45:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt",
"license:mit"
] | text-generation | false | pszemraj | null | pszemraj/Ballpark-Trivia-M | 0 | null | transformers | 35,894 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "how many ping-pong balls fit inside a standard 747 jet aeroplane?\nperson beta:\n\n"
example_title: "ping-pong"
- text: "What is the capital of Uganda?\nperson beta:\n\n"
example_title: "geography"
- text: "What is the most popular TV show of all time?\nperson beta:\n\n"
example_title: "pseudo-culture"
- text: "A man pushes his car to a hotel and tells the owner he’s bankrupt. Why?\nperson beta:\n\n"
example_title: "brain teaser"
inference:
parameters:
min_length: 2
max_length: 32
no_repeat_ngram_size: 2
do_sample: True
top_p: 0.90
top_k: 10
---
# Ballpark Trivia: Size M
Are you frequently asked google-able trivia questions and annoyed by it? Well, this is the model for you! Ballpark Trivia Bot answers any trivia question with something that sounds plausible but is probably not 100% correct. One might say... the answers are in the right ballpark.
> The size M is smaller and less capable but loads _a lot_ faster. The inference API does not like size M for some reason; [here](https://colab.research.google.com/gist/pszemraj/e2c5cee3361122d878062d0287ebc799/scratchpad.ipynb) is a Colab gist to test it out.
## Training
This text-generation model is a GPT-2 ~350M-parameter Size M model, trained for 40k steps on a parsed variant of [Natural Questions](https://ai.google.com/research/NaturalQuestions) (with **22**/24 layers frozen for the fine-tuning) to accidentally create this model.
Note that because the model was originally trained for use in a [chatbot application](https://github.com/pszemraj/ai-msgbot), it uses a named conversation dialogue structure, _i.e., the questions are asked by person alpha and responded to by person beta_. Even if you don't specify person alpha in the prompt, it should hopefully respond to any question.
## Example Prompt
```
when was the french revolution?
person beta:
1805
```
- The provided examples are not great.
- You can type in any trivia question, or delete the example and just write `what` or `when`, and it will generate the rest of the trivia question **and the answer**!
|
pszemraj/t5_1_1-base-writing-analysis | 648ba0e665e479c9572c322db835638401b41a94 | 2022-02-03T23:09:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"transformers",
"analysis",
"book",
"notes",
"autotrain_compatible"
] | text2text-generation | false | pszemraj | null | pszemraj/t5_1_1-base-writing-analysis | 0 | null | transformers | 35,895 | ---
language:
- en
tags:
- t5
- analysis
- book
- notes
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: "A large drop of sun lingered on the horizon and then dripped over and was gone, and the sky was brilliant over the spot where it had gone, and a torn cloud, like a bloody rag, hung over the spot of its going. And dusk crept over the sky from the eastern horizon, and darkness crept over the land from the east."
example_title: "grapes of wrath"
- text: "The year was 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General."
example_title: "Harrison Bergeron"
- text: "The ledge, where I placed my candle, had a few mildewed books piled up in one corner; and it was covered with writing scratched on the paint. This writing, however, was nothing but a name repeated in all kinds of characters, large and small—Catherine Earnshaw, here and there varied to Catherine Heathcliff, and then again to Catherine Linton. In vapid listlessness I leant my head against the window, and continued spelling over Catherine Earnshaw—Heathcliff—Linton, till my eyes closed; but they had not rested five minutes when a glare of white letters started from the dark, as vivid as spectres—the air swarmed with Catherines; and rousing myself to dispel the obtrusive name, I discovered my candle wick reclining on one of the antique volumes, and perfuming the place with an odour of roasted calf-skin."
example_title: "Wuthering Heights"
inference:
parameters:
no_repeat_ngram_size: 2
max_length: 32
early_stopping: True
---
# literary analysis with t5-base
- T5 sort-of learning to do literary analysis. It was trained on the BookSum dataset with `chapter` (the original text) as input and `summary_analysis` as the output text, where `summary_analysis` is the SparkNotes/CliffsNotes-style analysis.
- It was trained for 8 epochs.
- Testing may need to be completed in Colab, as it seems to be CPU-intensive. A link to an example notebook is [here](https://colab.research.google.com/gist/pszemraj/fe495bc0225ef0c00c9f8445b64672a6/example-t5_1_1-base-writing-analysis.ipynb).
# Example
```
!pip install -U -q transformers
!pip install -U -q sentencepiece
from transformers import pipeline
# load the fine-tuned checkpoint as a text2text-generation pipeline
analyzer = pipeline(
    "text2text-generation",
    "pszemraj/t5_1_1-base-writing-analysis",
)
```
- Enter the text to analyze and run the pipeline:
```
text = "text to be analyzed goes here"
# generation settings roughly follow the recommended inference parameters
result = analyzer(
    text,
    max_length=int(len(text) * 1.2),
    no_repeat_ngram_size=2,
    repetition_penalty=2.4,
    num_beams=4,
    early_stopping=True,
)
```
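The pipeline returns a list of dictionaries; a quick way to inspect the generated analysis (the key name follows the standard `text2text-generation` pipeline output):
```python
print(result[0]["generated_text"])
```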
# sample results
- inputs:
> Beren now, he never thought he was going to get that silmaril from the iron crown in thangorodrim, and yet he did, and that was a worse place and a blacker danger than ours. but that’s a long tale, of course, and goes on past the happiness and into grief and beyond it – and the silmaril went on and came to eärendil. and why, sir, I never thought of that before! we’ve got – you’ve got some of the light of it in that star glass that the lady gave you! why, to think of it, we’re in the same tale still! it’s going on. don’t the great tales never end?
- output:
> The silmaril comes to eärendil. it is a long tale, and it is going on beyond the happy ending of the story. in this way, we see that Berens dream has come true. he is in love with her, but she doesnt know how to do it. she thinks about it as if he had never thought of it before. then again, there is an ironic contrast between his dreams and reality. this is not surprising because they are so far apart from each other. they have been together foreshadowing one another. and yet, even though, they were at the same time. but now, their relationship is more important than any other characters. for example, the two men seem to be separated by them. their friendship is very much more interesting. we can imagine what happens when they get back home. when they meet up in the next chapter, however, then, the darkest danger seems to suggest that they will go on the night. now, after all, everyone else does not want to find outwardly.
\* _NOTE:_ As the above were not saved in real-time, both the input and output had `humanize` string formatting applied to quickly clean them as they were copied and pasted from a Colab notebook. |
pulp/ParentBERTo-4-years-old | 01ae536104f56bd9f27358f625989b0ee145c73f | 2021-12-09T20:55:04.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pulp | null | pulp/ParentBERTo-4-years-old | 0 | null | transformers | 35,896 | This is a Roberta-based model trained on parents' input before 4 years old. |
qdenisq/BertFormalityClassificiation | 62aa201c04395d781398d4325da0a2bd856f5d2a | 2021-09-05T12:03:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | qdenisq | null | qdenisq/BertFormalityClassificiation | 0 | null | transformers | 35,897 | Entry not found |
qqhann/w2v_hf_commonvoice_from_xlsr53_pretrain_0329UTC1500 | bfb4994e32547ea47719acf43eec5e1f7a46b41a | 2021-04-01T15:16:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | qqhann | null | qqhann/w2v_hf_commonvoice_from_xlsr53_pretrain_0329UTC1500 | 0 | null | transformers | 35,898 | ---
language: ja
datasets:
- common_voice #TODO: remove if you did not use the common voice dataset
- TODO: add more datasets if you have used additional datasets. Make sure to use the exact same
dataset name as the one found [here](https://huggingface.co/datasets). If the dataset can not be found in the official datasets, just give it a new name
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ja
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 70.1869
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_commonvoice_from_xlsr53_pretrain_0329UTC1500")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_commonvoice_from_xlsr53_pretrain_0329UTC1500")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_commonvoice_from_xlsr53_pretrain_0329UTC1500")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_commonvoice_from_xlsr53_pretrain_0329UTC1500")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 70.18 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ...
<!-- # TODO: adapt to state all the datasets that were used for training. -->
The script used for training can be found [here](...)
<!-- # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. -->
|
quangtran199hust/layoutlmv2_e | a086fcd08048000f75cf7cc67eeff3cda6ce905b | 2021-10-28T08:17:21.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | quangtran199hust | null | quangtran199hust/layoutlmv2_e | 0 | null | transformers | 35,899 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2_e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2_e
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Tokenizers 0.10.3
|