modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
WarrenK-Design/DialoGPT-small-Rick | 84ad795d6e5544687a3ecd23121b2a29b28e4783 | 2021-08-31T16:30:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | WarrenK-Design | null | WarrenK-Design/DialoGPT-small-Rick | 3 | null | transformers | 21,000 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
WikinewsSum/t5-base-multi-fr-wiki-news | 0b828755a3b1274e7b8c122cc4ecc4c16df5b289 | 2021-06-23T11:50:37.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | WikinewsSum | null | WikinewsSum/t5-base-multi-fr-wiki-news | 3 | null | transformers | 21,001 | Entry not found |
Wilson2021/bert_cn_finetuning_model01 | 36e3513d238e3f70319ffd808fc394ac9932ebb8 | 2021-11-05T05:52:43.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Wilson2021 | null | Wilson2021/bert_cn_finetuning_model01 | 3 | null | transformers | 21,002 | Entry not found |
XSY/t5-small-finetuned-xsum | 04fce666251f48a4c444c2e335a8f39bd745484c | 2021-11-09T13:40:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | XSY | null | XSY/t5-small-finetuned-xsum | 3 | null | transformers | 21,003 | This model was fine-tuned step by step following the Hugging Face summarization notebook; if you want to fine-tune it yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4500
- Rouge1: 28.6901
- Rouge2: 8.0102
- Rougel: 22.6087
- Rougelsum: 22.6105
- Gen Len: 18.824
## Model description
More information needed
## Intended uses & limitations
More information needed
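A minimal usage sketch with the `summarization` pipeline (the input text is illustrative, and whether a `summarize:` prefix helps depends on how the model was fine-tuned):
```python
from transformers import pipeline
# Minimal sketch: summarize a short piece of text with this checkpoint.
summarizer = pipeline("summarization", model="XSY/t5-small-finetuned-xsum")
text = "The tower is 324 metres tall, about the same height as an 81-storey building."  # illustrative input
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```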
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6799 | 1.0 | 25506 | 2.4500 | 28.6901 | 8.0102 | 22.6087 | 22.6105 | 18.824 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
XiaoqiJiao/TinyBERT_General_4L_312D | f42680cdc19ca7fe70e8a490843f7c80f0247fbb | 2020-09-02T03:37:19.000Z | [
"pytorch",
"transformers"
] | null | false | XiaoqiJiao | null | XiaoqiJiao/TinyBERT_General_4L_312D | 3 | null | transformers | 21,004 | Entry not found |
XiaoqiJiao/TinyBERT_General_6L_768D | 9b5f17d2421503291c9177c903fc27872a8079d7 | 2020-09-02T03:40:56.000Z | [
"pytorch",
"transformers"
] | null | false | XiaoqiJiao | null | XiaoqiJiao/TinyBERT_General_6L_768D | 3 | null | transformers | 21,005 | Entry not found |
YusufSahin99/IFIS_ZORK_AI_FANTASY | 4e9c77b22c5ba06a7f59abd616ba7bb928e6e158 | 2021-07-14T13:18:10.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | false | YusufSahin99 | null | YusufSahin99/IFIS_ZORK_AI_FANTASY | 3 | null | transformers | 21,006 | ---
license: mit
tags:
- generated_from_trainer
model_index:
- name: IFIS_ZORK_AI_FANTASY
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_FANTASY
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
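A minimal usage sketch with the `text-generation` pipeline (the prompt is illustrative):
```python
from transformers import pipeline
# Minimal sketch: generate Zork-style fantasy text with this GPT-2 checkpoint.
generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_FANTASY")
prompt = "You are standing in an open field west of a white house."  # illustrative prompt
print(generator(prompt, max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```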
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
Yves/wav2vec2-large-xlsr-53-swiss-german | bc71dc396ea481549d66c37ea4d6f310d5b52d54 | 2021-07-05T18:09:03.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sg",
"dataset:Yves/fhnw_swiss_parliament",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Yves | null | Yves/wav2vec2-large-xlsr-53-swiss-german | 3 | null | transformers | 21,007 | ---
language: sg
datasets:
- Yves/fhnw_swiss_parliament
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- sg
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: Yves XLSR Wav2Vec2 Large 53 Swiss German
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Yves/fhnw_swiss_parliament
type: Yves/fhnw_swiss_parliament
metrics:
- name: Test WER
type: wer
value: NA%
---
# wav2vec2-large-xlsr-53-swiss-german
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swiss German, aiming for satisfactory Swiss-German-to-German transcriptions.
## Dataset
Detailed information about the dataset that the model has been trained and validated with is available on [Yves/fhnw_swiss_parliament](https://huggingface.co/datasets/Yves/fhnw_swiss_parliament)
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("Yves/fhnw_swiss_parliament", data_dir="swiss_parliament", split="validation")
processor = Wav2Vec2Processor.from_pretrained("Yves/wav2vec2-large-xlsr-53-swiss-german")
model = Wav2Vec2ForCTC.from_pretrained("Yves/wav2vec2-large-xlsr-53-swiss-german").cuda()
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.cuda(), attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"])
```
## Evaluation
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
import csv
model_name = "Yves/wav2vec2-large-xlsr-53-swiss-german"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\_\²\…\˟\&\+\[\]\(\−\–\)\›\»\‹\@\«\*\ʼ\/\°\'\'\’\'̈]'
completed_iterations = 0
eval_batch_size = 16
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("Yves/fhnw_swiss_parliament", data_dir="container_0/swiss_parliament_dryrun", split="validation")
wer = load_metric("wer")
cer = load_metric("cer")
bleu = load_metric("sacrebleu")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
out_file = open('output.tsv', 'w', encoding='utf-8')
tsv_writer = csv.writer(out_file, delimiter='\t')
tsv_writer.writerow(["client_id", "reference", "prediction", "wer", "cer", "bleu"])
def map_to_pred(batch,idx):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
if not (len(idx) <= 2 and idx[0] == 0):
for x in range(0, len(idx)):
temp_reference = []
temp_reference.append([batch["target"][x]])
tsv_writer.writerow([batch["client_id"][x], batch["target"][x], batch["predicted"][x],
wer.compute(predictions=[batch["predicted"][x]], references=[batch["sentence"][x]]),
cer.compute(predictions=[batch["predicted"][x]], references=[batch["sentence"][x]]),
bleu.compute(predictions=[batch["predicted"][x]], references=temp_reference)["score"]])
return batch
result = ds.map(map_to_pred, batched=True, batch_size=eval_batch_size, with_indices=True, remove_columns=list(ds.features.keys()))
out_file.close()
target_bleu = []
for x in result["target"]:
target_bleu.append([x])
print(wer.compute(predictions=result["predicted"], references=result["target"]))
print(cer.compute(predictions=result["predicted"], references=result["target"]))
print(bleu.compute(predictions=result["predicted"], references=target_bleu))
```
## Scripts
The script used for training can be found on Google Colab [TBD](https://huggingface.co/Yves/wav2vec2-large-xlsr-53-swiss-german) |
ZYW/squad-en-de-es-model | f05cd78359183b3d04355081f6b5f13431d83e15 | 2021-05-29T16:53:56.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | ZYW | null | ZYW/squad-en-de-es-model | 3 | null | transformers | 21,008 | ---
model-index:
- name: squad-en-de-es-model
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-en-de-es-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
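A minimal usage sketch with the `question-answering` pipeline (the question and context are illustrative):
```python
from transformers import pipeline
# Minimal sketch: extractive question answering with this DistilBERT checkpoint.
qa = pipeline("question-answering", model="ZYW/squad-en-de-es-model")
result = qa(
    question="Where is the Eiffel Tower located?",  # illustrative question
    context="The Eiffel Tower is located in Paris, France.",  # illustrative context
)
print(result["answer"], result["score"])
```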
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
|
Zayt/viRoberta-l6-h384-word-cased | aea7428572fd8c1347749ca01c8936a08d7f23bc | 2021-11-10T09:54:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Zayt | null | Zayt/viRoberta-l6-h384-word-cased | 3 | null | transformers | 21,009 | More information: [github](https://github.com/TanHM-1211/viRoberta-l6-h384-cased)
```python
from underthesea import word_tokenize
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = 'Zayt/viRoberta-l6-h384-word-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
# Word-segment the Vietnamese input first; the model expects word-level tokens.
text = word_tokenize("Xin chào, tôi không còn là sinh viên đại học Bách Khoa.", format='text')
output = model(**tokenizer(text, return_tensors='pt'))
output
``` |
Zeer0/DialoGPT-small-ZerO | 3c600e4fd7e53336ad077e5682b8e1cddc06b82b | 2021-09-17T05:35:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Zeer0 | null | Zeer0/DialoGPT-small-ZerO | 3 | null | transformers | 21,010 | ---
tags:
- conversational
---
# ZerO DialoGPT Model |
ZhangCheng/T5P3 | 3509f125ebb2611bbb144f1421b4d64246d0f317 | 2022-02-16T22:56:41.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"Question Generation",
"autotrain_compatible"
] | text2text-generation | false | ZhangCheng | null | ZhangCheng/T5P3 | 3 | 1 | transformers | 21,011 | ---
language: en
datasets:
- squad
tags:
- Question Generation
widget:
- text: "<answer> T5 <context> Cheng fine-tuned T5 on SQuAD for question generation."
example_title: "Example 1"
- text: "<answer> SQuAD <context> Cheng fine-tuned T5 on SQuAD dataset for question generation."
example_title: "Example 2"
- text: "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio."
example_title: "Example 3"
---
# T5-Base Fine-Tuned on SQuAD for Question Generation
### Model in Action:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
trained_model_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation'
trained_tokenizer_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation'
class QuestionGeneration:
def __init__(self, model_dir=None):
self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path)
self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.model.to(self.device)
self.model.eval()
def generate(self, answer: str, context: str):
input_text = '<answer> %s <context> %s ' % (answer, context)
encoding = self.tokenizer.encode_plus(
input_text,
return_tensors='pt'
)
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
outputs = self.model.generate(
input_ids=input_ids,
attention_mask=attention_mask
)
question = self.tokenizer.decode(
outputs[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
return {'question': question, 'answer': answer, 'context': context}
if __name__ == "__main__":
context = 'ZhangCheng fine-tuned T5 on SQuAD dataset for question generation.'
answer = 'ZhangCheng'
QG = QuestionGeneration()
qa = QG.generate(answer, context)
print(qa['question'])
# Output:
# Who fine-tuned T5 on SQuAD dataset for question generation?
```
|
Zixtrauce/BDBot | 60e53f54b64157332d2046067b6f4011f4496a71 | 2022-01-01T07:02:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Zixtrauce | null | Zixtrauce/BDBot | 3 | null | transformers | 21,012 | ---
tags:
- conversational
---
# BDBot2 |
aapot/wav2vec2-xlsr-1b-finnish-lm | a13f7fa3234c5fbabcc14b837fff65ba8d9ed62c | 2022-03-28T17:31:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | aapot | null | aapot/wav2vec2-xlsr-1b-finnish-lm | 3 | null | transformers | 21,013 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 5.65
- name: Test CER
type: cer
value: 1.2
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm) model; it has simply been copied/moved to the `Finnish-NLP` Hugging Face organization.
**Note**: there is a better V2 version of this model, which has been fine-tuned longer with 16 hours of additional data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1-billion-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
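As a quicker starting point, here is a minimal sketch with the `automatic-speech-recognition` pipeline (the audio path is a placeholder; the bundled KenLM decoder is only used when `pyctcdecode` and `kenlm` are installed):
```python
from transformers import pipeline
# Minimal sketch: transcribe a Finnish audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="aapot/wav2vec2-xlsr-1b-finnish-lm")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```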
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data comes from the Finnish Parliament dataset, so the model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained on text data from the audio transcriptions. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects. It may be beneficial to train your own KenLM language model on text from your target domain and use that in the decoding.
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters (see the loading sketch after this list):
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
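As a rough sketch, these initialization values could be passed when loading the pretrained checkpoint (the values mirror the list above; CTC vocabulary settings from the fine-tuning tokenizer are omitted):
```python
from transformers import Wav2Vec2ForCTC
# Minimal sketch: load the pretrained checkpoint with the regularization settings listed above.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-1b",
    attention_dropout=0.094,
    hidden_dropout=0.047,
    feat_proj_dropout=0.04,
    mask_time_prob=0.082,
    layerdrop=0.041,
    activation_dropout=0.055,
    ctc_loss_reduction="mean",
)
```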
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
aapot/wav2vec2-xlsr-300m-finnish | c2724bb2f2bd71f9b270598031b88f2354ecdadd | 2022-03-28T17:45:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | aapot | null | aapot/wav2vec2-xlsr-300m-finnish | 3 | null | transformers | 21,014 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-300m-finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 17.92
- name: Test CER
type: cer
value: 3.36
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version of this model with a KenLM language model used in the decoding phase, which produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 300-million-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
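As a quicker starting point, here is a minimal sketch that uses the acoustic model directly without a language model (the audio path is a placeholder for a mono recording):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
# Minimal sketch: transcribe one Finnish audio file with plain CTC argmax decoding.
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-xlsr-300m-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-xlsr-300m-finnish")
speech, sampling_rate = torchaudio.load("audio.wav")  # placeholder path
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```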
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data comes from the Finnish Parliament dataset, so the model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
abhi1nandy2/Craft-bionlp-roberta-base | 1c0d05762ffe6a951b0c1f05e6507da8b61627ae | 2022-05-23T20:09:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"English",
"dataset:CRAFT BioNLP Corpus",
"transformers",
"CRAFT",
"autotrain_compatible"
] | fill-mask | false | abhi1nandy2 | null | abhi1nandy2/Craft-bionlp-roberta-base | 3 | null | transformers | 21,015 | ---
language:
- English
tags:
- CRAFT
- roberta
datasets:
- CRAFT BioNLP Corpus
---
Refer to https://aclanthology.org/2021.semeval-1.87/
## Citation
If you use this model in your work, please add the following citation:
```
@inproceedings{nandy-etal-2021-cs60075,
title = "cs60075{\_}team2 at {S}em{E}val-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora",
author = "Nandy, Abhilash and
Adak, Sayantan and
Halder, Tanurima and
Pokala, Sai Mahesh",
booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.semeval-1.87",
doi = "10.18653/v1/2021.semeval-1.87",
pages = "678--682",
abstract = "The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some being general (E.g., Wikipedia, BooksCorpus), some being the corpora from which the CompLex Dataset was extracted, and others being from other specific domains such as Finance, Law, etc. We perform ablation studies on selecting the transformer models and how their individual complexity scores are aggregated to get the resulting complexity scores. Our method achieves a best Pearson Correlation of 0.784 in sub-task 1 (single word) and 0.836 in sub-task 2 (multiple word expressions).",
}
```
|
abhilash1910/distilbert-squadv1 | 5572964ed9a2148f52739cd52807dc4c38eb5091 | 2021-09-14T07:25:33.000Z | [
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad_v1",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | abhilash1910 | null | abhilash1910/distilbert-squadv1 | 3 | null | transformers | 21,016 | # DistilBERT--SQuAD-v1
Training is done on the [SQuAD](https://huggingface.co/datasets/squad) dataset. The model can be accessed via [HuggingFace](https://huggingface.co/abhilash1910/distilbert-squadv1):
## Model Specifications
We have used the following parameters:
- Training Batch Size : 512
- Learning Rate : 3e-5
- Training Epochs : 0.75
- Sequence Length : 384
- Stride : 128
## Usage Specifications
```python
from transformers import AutoModelForQuestionAnswering,AutoTokenizer,pipeline
model=AutoModelForQuestionAnswering.from_pretrained('abhilash1910/distilbert-squadv1')
tokenizer=AutoTokenizer.from_pretrained('abhilash1910/distilbert-squadv1')
nlp_QA=pipeline('question-answering',model=model,tokenizer=tokenizer)
QA_inp={
'question': 'What is the fund price of Huggingface in NYSE?',
'context': 'Huggingface Co. has a total fund price of $19.6 million dollars'
}
result=nlp_QA(QA_inp)
result
```
The result is:
```bash
{'score': 0.38547369837760925,
'start': 42,
'end': 55,
'answer': '$19.6 million'}
```
---
language:
- en
license: apache-2.0
datasets:
- squad_v1
---
|
abhinavkulkarni/distilbert-base-uncased-finetuned-squad | 0625c3e5eedd1fc7b0091a9f36f6b8e7aa7bb577 | 2022-02-06T18:39:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | abhinavkulkarni | null | abhinavkulkarni/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 21,017 | Entry not found |
adalbertojunior/test-deberta | 6e48279f72398e6116a590cf0c5a915693444f59 | 2022-03-10T14:39:06.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adalbertojunior | null | adalbertojunior/test-deberta | 3 | null | transformers | 21,018 | Entry not found |
adamlin/ClinicalBert_all_notes | 33e9ee2da7f113daf1301dcc121005bfc3703d96 | 2019-12-25T17:08:00.000Z | [
"pytorch",
"transformers"
] | null | false | adamlin | null | adamlin/ClinicalBert_all_notes | 3 | null | transformers | 21,019 | Entry not found |
adamlin/ClinicalBert_disch | a3ca1268a732befca64e43b415b60636ffba3f6e | 2019-12-25T17:08:32.000Z | [
"pytorch",
"transformers"
] | null | false | adamlin | null | adamlin/ClinicalBert_disch | 3 | null | transformers | 21,020 | Entry not found |
adamlin/filter-mlsum-pretrained | e8757ad413eeafcaf8e71b9126c0024198d1586b | 2021-07-10T07:51:42.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"zh_CN",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | adamlin | null | adamlin/filter-mlsum-pretrained | 3 | null | transformers | 21,021 | ---
language:
- zh_CN
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model_index:
- name: filter-mlsum-pretrained
results:
- task:
name: Translation
type: translation
metric:
name: Rouge1
type: rouge
value: 42.1802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filter-mlsum-pretrained
This model is a fine-tuned version of [lincoln/mbart-mlsum-automatic-summarization](https://huggingface.co/lincoln/mbart-mlsum-automatic-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1258
- Rouge1: 42.1802
- Rouge2: 28.8282
- Rougel: 38.353
- Rougelsum: 38.4497
- Gen Len: 15.7048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
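As an illustrative sketch, these settings roughly correspond to the following `Seq2SeqTrainingArguments` (the output directory is a placeholder; the optimizer and scheduler entries above are the `Trainer` defaults):
```python
from transformers import Seq2SeqTrainingArguments
# Minimal sketch: the hyperparameters above expressed as training arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="filter-mlsum-pretrained",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,
    seed=13,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    fp16=True,  # Native AMP mixed precision
)
```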
### Training results
| Training Loss | Epoch | Step | Bleu | Gen Len | Validation Loss | Rouge-1 | Rouge-2 | Rouge-3 | Rouge-4 |
|:-------------:|:-----:|:-----:|:------:|:-------:|:---------------:|:-------:|:-------:|:-------:|:-------:|
| 3.2488 | 0.02 | 600 | 1.0077 | 16.5021 | 2.9137 | 0.3472 | 0.2187 | 0.129 | 0.0831 |
| 2.8602 | 0.04 | 1200 | 1.0448 | 15.5959 | 2.7929 | 0.3555 | 0.231 | 0.1425 | 0.0948 |
| 2.7612 | 0.06 | 1800 | 0.9912 | 15.9283 | 2.7275 | 0.3634 | 0.2327 | 0.139 | 0.0892 |
| 2.71 | 0.08 | 2400 | 1.1238 | 16.029 | 2.6673 | 0.3705 | 0.2393 | 0.1448 | 0.0937 |
| 2.6029 | 0.11 | 3000 | 1.091 | 15.8317 | 2.6153 | 0.3705 | 0.2382 | 0.1443 | 0.0943 |
| 2.5834 | 0.13 | 3600 | 1.0894 | 15.9131 | 2.5937 | 0.3793 | 0.246 | 0.1517 | 0.1013 |
| 2.5339 | 0.15 | 4200 | 1.1034 | 15.8331 | 2.5716 | 0.3758 | 0.2441 | 0.146 | 0.0948 |
| 2.5176 | 0.17 | 4800 | 1.1365 | 16.2552 | 2.5338 | 0.3695 | 0.2385 | 0.1454 | 0.0942 |
| 2.4962 | 0.19 | 5400 | 1.1237 | 16.0041 | 2.5145 | 0.3773 | 0.2462 | 0.1533 | 0.1017 |
| 2.4573 | 0.21 | 6000 | 0.9416 | 16.1241 | 2.5056 | 0.3753 | 0.2457 | 0.1541 | 0.1012 |
| 2.4324 | 0.23 | 6600 | 1.122 | 15.3448 | 2.4891 | 0.3824 | 0.2531 | 0.157 | 0.1033 |
| 2.4343 | 0.25 | 7200 | 1.8299 | 15.5959 | 2.4728 | 0.384 | 0.2512 | 0.1556 | 0.1026 |
| 2.4089 | 0.28 | 7800 | 1.7741 | 16.3421 | 2.4608 | 0.3818 | 0.2501 | 0.1556 | 0.102 |
| 2.376 | 0.3 | 8400 | 1.1575 | 15.3834 | 2.4402 | 0.3887 | 0.2582 | 0.1611 | 0.1058 |
| 2.3739 | 0.32 | 9000 | 1.7924 | 15.6455 | 2.4331 | 0.3902 | 0.2561 | 0.1587 | 0.1042 |
| 2.3485 | 0.34 | 9600 | 2.2605 | 15.5407 | 2.4215 | 0.3712 | 0.2423 | 0.1493 | 0.0984 |
| 2.3535 | 0.36 | 10200 | 1.2569 | 16.2538 | 2.4047 | 0.3837 | 0.2524 | 0.1572 | 0.1045 |
| 2.3359 | 0.38 | 10800 | 1.2334 | 15.4607 | 2.4025 | 0.3808 | 0.2488 | 0.1531 | 0.0994 |
| 2.3265 | 0.4 | 11400 | 1.116 | 16.2703 | 2.3926 | 0.3909 | 0.2574 | 0.159 | 0.1049 |
| 2.3024 | 0.42 | 12000 | 1.0944 | 15.3807 | 2.3964 | 0.3883 | 0.2554 | 0.158 | 0.1043 |
| 2.2988 | 0.45 | 12600 | 1.6318 | 15.5062 | 2.3616 | 0.3889 | 0.259 | 0.1617 | 0.107 |
| 2.2966 | 0.47 | 13200 | 1.1887 | 15.8041 | 2.3728 | 0.3835 | 0.2556 | 0.1633 | 0.1111 |
| 2.2823 | 0.49 | 13800 | 1.1252 | 15.9972 | 2.3591 | 0.3805 | 0.249 | 0.1571 | 0.1052 |
| 2.2748 | 0.51 | 14400 | 1.0418 | 15.3021 | 2.3619 | 0.3862 | 0.2569 | 0.161 | 0.1072 |
| 2.2624 | 0.53 | 15000 | 1.0299 | 15.8634 | 2.3415 | 0.3909 | 0.2575 | 0.1608 | 0.1072 |
| 2.2585 | 0.55 | 15600 | 1.0671 | 15.5503 | 2.3557 | 0.3899 | 0.2586 | 0.1622 | 0.1077 |
| 2.2586 | 0.57 | 16200 | 1.6521 | 15.4345 | 2.3431 | 0.389 | 0.2593 | 0.1642 | 0.1105 |
| 2.2464 | 0.59 | 16800 | 1.2836 | 15.6124 | 2.3609 | 0.3934 | 0.2591 | 0.1593 | 0.1041 |
| 2.2523 | 0.62 | 17400 | 1.7653 | 15.8648 | 2.3339 | 0.3958 | 0.2653 | 0.1683 | 0.1133 |
| 2.2287 | 0.64 | 18000 | 1.3186 | 16.4455 | 2.3188 | 0.3911 | 0.2617 | 0.1678 | 0.1143 |
| 2.2068 | 0.66 | 18600 | 1.6488 | 15.9062 | 2.3109 | 0.3919 | 0.262 | 0.1657 | 0.1115 |
| 2.2195 | 0.68 | 19200 | 1.8291 | 15.5269 | 2.3271 | 0.3859 | 0.2575 | 0.1631 | 0.1081 |
| 2.2128 | 0.7 | 19800 | 2.2759 | 15.8703 | 2.3113 | 0.3962 | 0.2655 | 0.1691 | 0.1123 |
| 2.2071 | 0.72 | 20400 | 2.4205 | 15.9738 | 2.3036 | 0.3907 | 0.2608 | 0.1637 | 0.1081 |
| 2.1975 | 0.74 | 21000 | 1.9886 | 15.8234 | 2.2919 | 0.3906 | 0.2632 | 0.169 | 0.1157 |
| 2.1965 | 0.76 | 21600 | 1.8754 | 15.3434 | 2.2957 | 0.39 | 0.2608 | 0.1665 | 0.1114 |
| 2.1886 | 0.78 | 22200 | 1.5683 | 15.3407 | 2.2835 | 0.3968 | 0.2658 | 0.168 | 0.1117 |
| 2.185 | 0.81 | 22800 | 2.127 | 16.0566 | 2.2685 | 0.3913 | 0.2624 | 0.1691 | 0.114 |
| 2.1697 | 0.83 | 23400 | 1.2554 | 15.7021 | 2.2888 | 0.3983 | 0.2676 | 0.1704 | 0.1148 |
| 2.1637 | 0.85 | 24000 | 2.0099 | 16.2607 | 2.2767 | 0.3979 | 0.2681 | 0.1719 | 0.1181 |
| 2.1559 | 0.87 | 24600 | 2.2632 | 15.2179 | 2.2840 | 0.3996 | 0.269 | 0.1714 | 0.1152 |
| 2.1666 | 0.89 | 25200 | 1.2354 | 15.6828 | 2.2744 | 0.397 | 0.2655 | 0.1677 | 0.1108 |
| 2.1388 | 0.91 | 25800 | 1.2576 | 15.7959 | 2.2661 | 0.3982 | 0.2655 | 0.1687 | 0.1128 |
| 2.1458 | 0.93 | 26400 | 1.334 | 15.6428 | 2.2582 | 0.3976 | 0.2682 | 0.1711 | 0.1142 |
| 2.1337 | 0.95 | 27000 | 1.287 | 16.1379 | 2.2474 | 0.4001 | 0.2654 | 0.1682 | 0.1119 |
| 2.1324 | 0.98 | 27600 | 1.1739 | 16.0552 | 2.2487 | 0.4003 | 0.2664 | 0.168 | 0.1113 |
| 2.1318 | 1.0 | 28200 | 2.1267 | 15.931 | 2.2553 | 0.4037 | 0.27 | 0.1714 | 0.1163 |
| 2.0379 | 1.02 | 28800 | 1.1489 | 15.3421 | 2.2787 | 0.3962 | 0.263 | 0.1674 | 0.114 |
| 1.9044 | 1.04 | 29400 | 1.6737 | 15.6 | 2.2538 | 0.4003 | 0.2693 | 0.1729 | 0.1161 |
| 1.9149 | 1.06 | 30000 | 1.1077 | 15.771 | 2.2487 | 0.4062 | 0.274 | 0.1774 | 0.1209 |
| 1.9211 | 1.08 | 30600 | 1.2744 | 15.0566 | 2.2708 | 0.4075 | 0.2742 | 0.1744 | 0.1172 |
| 1.9285 | 1.1 | 31200 | 1.1875 | 16.1021 | 2.2443 | 0.3983 | 0.2652 | 0.1671 | 0.1124 |
| 1.9106 | 1.12 | 31800 | 1.2422 | 15.36 | 2.2562 | 0.4079 | 0.2751 | 0.1762 | 0.119 |
| 1.9313 | 1.15 | 32400 | 1.3036 | 15.8317 | 2.2515 | 0.4027 | 0.2717 | 0.1748 | 0.1196 |
| 1.931 | 1.17 | 33000 | 1.138 | 16.1917 | 2.2415 | 0.4016 | 0.2701 | 0.1724 | 0.1179 |
| 1.9232 | 1.19 | 33600 | 1.2741 | 15.6814 | 2.2511 | 0.4074 | 0.2757 | 0.1782 | 0.1222 |
| 1.9233 | 1.21 | 34200 | 1.4101 | 15.8345 | 2.2388 | 0.4027 | 0.2712 | 0.1727 | 0.1174 |
| 1.9172 | 1.23 | 34800 | 1.252 | 15.6124 | 2.2434 | 0.4046 | 0.2747 | 0.1783 | 0.1215 |
| 1.9258 | 1.25 | 35400 | 1.2459 | 15.5062 | 2.2342 | 0.4107 | 0.2801 | 0.1814 | 0.1236 |
| 1.9184 | 1.27 | 36000 | 1.2943 | 15.6083 | 2.2393 | 0.4119 | 0.2817 | 0.1839 | 0.1244 |
| 1.9195 | 1.29 | 36600 | 1.1197 | 15.8359 | 2.2237 | 0.4014 | 0.2695 | 0.1699 | 0.1132 |
| 1.932 | 1.31 | 37200 | 1.2212 | 15.9752 | 2.2202 | 0.4027 | 0.2708 | 0.1723 | 0.1168 |
| 1.9161 | 1.34 | 37800 | 1.2541 | 15.5779 | 2.2236 | 0.4091 | 0.2783 | 0.1804 | 0.1244 |
| 1.9115 | 1.36 | 38400 | 1.4237 | 15.8276 | 2.1993 | 0.4122 | 0.2813 | 0.1832 | 0.1258 |
| 1.9108 | 1.38 | 39000 | 1.8321 | 16.0386 | 2.2079 | 0.412 | 0.2794 | 0.1806 | 0.1226 |
| 1.921 | 1.4 | 39600 | 1.8388 | 15.5076 | 2.2158 | 0.411 | 0.2799 | 0.1804 | 0.1226 |
| 1.9124 | 1.42 | 40200 | 1.915 | 16.0 | 2.2071 | 0.4032 | 0.2726 | 0.1742 | 0.1185 |
| 1.9134 | 1.44 | 40800 | 2.1237 | 16.0372 | 2.1980 | 0.4036 | 0.2702 | 0.1689 | 0.1122 |
| 1.9124 | 1.46 | 41400 | 2.4274 | 15.3421 | 2.2111 | 0.4037 | 0.274 | 0.1754 | 0.1203 |
| 1.9149 | 1.48 | 42000 | 1.8393 | 15.5683 | 2.2105 | 0.4057 | 0.2748 | 0.1762 | 0.119 |
| 1.9147 | 1.51 | 42600 | 1.2703 | 16.3048 | 2.1874 | 0.4084 | 0.2767 | 0.179 | 0.1233 |
| 1.9075 | 1.53 | 43200 | 1.7775 | 15.9545 | 2.1946 | 0.4109 | 0.2807 | 0.1857 | 0.1286 |
| 1.8996 | 1.55 | 43800 | 1.2485 | 15.6648 | 2.1924 | 0.4082 | 0.2749 | 0.1764 | 0.1196 |
| 1.9003 | 1.57 | 44400 | 1.1624 | 15.8041 | 2.1895 | 0.4093 | 0.2758 | 0.1766 | 0.1194 |
| 1.9048 | 1.59 | 45000 | 1.8167 | 16.2938 | 2.1843 | 0.407 | 0.2741 | 0.1779 | 0.1203 |
| 1.9017 | 1.61 | 45600 | 2.0689 | 15.3931 | 2.2073 | 0.4111 | 0.2795 | 0.1811 | 0.1246 |
| 1.8946 | 1.63 | 46200 | 1.7099 | 15.9917 | 2.1839 | 0.4095 | 0.2797 | 0.1826 | 0.1258 |
| 1.886 | 1.65 | 46800 | 1.8287 | 15.8276 | 2.1945 | 0.4051 | 0.2761 | 0.1799 | 0.1237 |
| 1.9068 | 1.68 | 47400 | 1.9476 | 15.3503 | 2.1926 | 0.4132 | 0.2819 | 0.1836 | 0.1262 |
| 1.9008 | 1.7 | 48000 | 1.3086 | 15.5931 | 2.1857 | 0.4167 | 0.2868 | 0.1893 | 0.1303 |
| 1.8965 | 1.72 | 48600 | 2.1687 | 15.8317 | 2.1781 | 0.402 | 0.2715 | 0.175 | 0.1197 |
| 1.8907 | 1.74 | 49200 | 2.3316 | 15.8952 | 2.1661 | 0.4035 | 0.2717 | 0.1746 | 0.1193 |
| 1.8938 | 1.76 | 49800 | 1.6839 | 15.6028 | 2.1736 | 0.4008 | 0.2693 | 0.1741 | 0.1184 |
| 1.8769 | 1.78 | 50400 | 1.1867 | 15.9393 | 2.1723 | 0.403 | 0.272 | 0.1761 | 0.1201 |
| 1.8813 | 1.8 | 51000 | 1.8509 | 16.2538 | 2.1454 | 0.4085 | 0.2773 | 0.1801 | 0.1227 |
| 1.8913 | 1.82 | 51600 | 1.9677 | 15.7503 | 2.1691 | 0.4052 | 0.2786 | 0.1836 | 0.1274 |
| 1.8785 | 1.85 | 52200 | 1.7 | 15.7559 | 2.1683 | 0.4132 | 0.2793 | 0.1796 | 0.1216 |
| 1.881 | 1.87 | 52800 | 1.2867 | 16.0345 | 2.1372 | 0.416 | 0.2824 | 0.1837 | 0.1264 |
| 1.8833 | 1.89 | 53400 | 1.761 | 16.0966 | 2.1501 | 0.4126 | 0.2808 | 0.1825 | 0.1253 |
| 1.8727 | 1.91 | 54000 | 1.9868 | 15.8221 | 2.1504 | 0.4165 | 0.2828 | 0.1826 | 0.1233 |
| 1.8901 | 1.93 | 54600 | 1.801 | 14.9393 | 2.2104 | 0.4151 | 0.2846 | 0.1848 | 0.1258 |
| 1.8802 | 1.95 | 55200 | 2.0887 | 15.8069 | 2.1555 | 0.407 | 0.2766 | 0.1794 | 0.1214 |
| 1.8827 | 1.97 | 55800 | 1.8323 | 15.8524 | 2.1510 | 0.4221 | 0.291 | 0.193 | 0.135 |
| 1.8673 | 1.99 | 56400 | 1.2667 | 15.4262 | 2.1620 | 0.4092 | 0.2795 | 0.1836 | 0.1275 |
| 1.6735 | 2.01 | 57000 | 1.821 | 15.8538 | 2.1836 | 0.4193 | 0.2875 | 0.189 | 0.1317 |
| 1.6367 | 2.04 | 57600 | 2.5547 | 15.8055 | 2.1941 | 0.415 | 0.2831 | 0.1849 | 0.1284 |
| 1.6326 | 2.06 | 58200 | 2.0999 | 15.9352 | 2.1743 | 0.4157 | 0.2829 | 0.1842 | 0.1267 |
| 1.6354 | 2.08 | 58800 | 2.3907 | 15.68 | 2.1879 | 0.4233 | 0.2921 | 0.1936 | 0.1361 |
| 1.6352 | 2.1 | 59400 | 1.979 | 16.1807 | 2.1735 | 0.4236 | 0.2907 | 0.193 | 0.1346 |
| 1.6428 | 2.12 | 60000 | 2.2266 | 15.8759 | 2.1858 | 0.4204 | 0.2881 | 0.1896 | 0.1308 |
| 1.6483 | 2.14 | 60600 | 1.9294 | 15.8469 | 2.1878 | 0.4237 | 0.2892 | 0.1901 | 0.1317 |
| 1.6502 | 2.16 | 61200 | 1.7967 | 15.7131 | 2.1814 | 0.4164 | 0.2835 | 0.1852 | 0.1275 |
| 1.6585 | 2.18 | 61800 | 1.1843 | 16.0579 | 2.1620 | 0.413 | 0.2828 | 0.1852 | 0.128 |
| 1.6457 | 2.21 | 62400 | 1.7951 | 15.9862 | 2.1873 | 0.4194 | 0.2885 | 0.1908 | 0.1341 |
| 1.6433 | 2.23 | 63000 | 1.6297 | 16.1324 | 2.1770 | 0.4039 | 0.2741 | 0.1773 | 0.1209 |
| 1.6493 | 2.25 | 63600 | 1.8762 | 15.5131 | 2.1702 | 0.414 | 0.2851 | 0.1883 | 0.1292 |
| 1.672 | 2.27 | 64200 | 2.1811 | 16.1945 | 2.1433 | 0.4198 | 0.2852 | 0.1854 | 0.1272 |
| 1.6411 | 2.29 | 64800 | 2.0637 | 16.1434 | 2.1661 | 0.4103 | 0.2809 | 0.1848 | 0.1275 |
| 1.6561 | 2.31 | 65400 | 2.452 | 15.5724 | 2.1761 | 0.4204 | 0.292 | 0.1935 | 0.135 |
| 1.6516 | 2.33 | 66000 | 2.216 | 15.7048 | 2.1836 | 0.4186 | 0.2887 | 0.1909 | 0.1326 |
| 1.6738 | 2.35 | 66600 | 1.7496 | 15.731 | 2.1452 | 0.4186 | 0.2904 | 0.1944 | 0.1364 |
| 1.672 | 2.38 | 67200 | 1.3179 | 15.7697 | 2.1412 | 0.4206 | 0.2898 | 0.1936 | 0.1358 |
| 1.6625 | 2.4 | 67800 | 2.3606 | 15.76 | 2.1412 | 0.4134 | 0.285 | 0.189 | 0.1315 |
| 1.6725 | 2.42 | 68400 | 2.3687 | 15.4745 | 2.1825 | 0.4165 | 0.2874 | 0.1883 | 0.1303 |
| 1.6588 | 2.44 | 69000 | 2.2056 | 15.8841 | 2.1307 | 0.4259 | 0.2952 | 0.1974 | 0.139 |
| 1.6629 | 2.46 | 69600 | 1.7605 | 15.469 | 2.1523 | 0.4149 | 0.2861 | 0.1901 | 0.1327 |
| 1.6716 | 2.48 | 70200 | 1.3733 | 15.3683 | 2.1546 | 0.4202 | 0.2889 | 0.1897 | 0.1314 |
| 1.6708 | 2.5 | 70800 | 2.6313 | 15.7214 | 2.1408 | 0.4236 | 0.2937 | 0.1972 | 0.1395 |
| 1.6637 | 2.52 | 71400 | 2.5112 | 15.909 | 2.1252 | 0.4203 | 0.2903 | 0.1935 | 0.1361 |
| 1.6743 | 2.55 | 72000 | 2.2902 | 15.749 | 2.1326 | 0.426 | 0.297 | 0.1989 | 0.1404 |
| 1.6681 | 2.57 | 72600 | 2.1003 | 16.3338 | 2.1120 | 0.4185 | 0.2876 | 0.1904 | 0.1342 |
| 1.6791 | 2.59 | 73200 | 1.7082 | 15.7283 | 2.1269 | 0.4268 | 0.2968 | 0.1988 | 0.1392 |
| 1.6643 | 2.61 | 73800 | 1.9914 | 16.0552 | 2.1166 | 0.4177 | 0.2886 | 0.1939 | 0.1369 |
| 1.6666 | 2.63 | 74400 | 1.8012 | 16.0276 | 2.1242 | 0.4174 | 0.2875 | 0.19 | 0.1328 |
| 1.67 | 2.65 | 75000 | 1.696 | 15.5559 | 2.1619 | 0.4196 | 0.2919 | 0.1939 | 0.136 |
| 1.6794 | 2.67 | 75600 | 2.0322 | 15.6221 | 2.1425 | 0.4166 | 0.2871 | 0.1891 | 0.1328 |
| 1.6753 | 2.69 | 76200 | 2.5736 | 15.7407 | 2.1432 | 0.4215 | 0.2928 | 0.1958 | 0.1388 |
| 1.6807 | 2.71 | 76800 | 2.3404 | 15.7186 | 2.1240 | 0.4181 | 0.2885 | 0.1917 | 0.1346 |
| 1.6707 | 2.74 | 77400 | 2.4439 | 15.5724 | 2.1246 | 0.4191 | 0.2906 | 0.1936 | 0.1359 |
| 1.6736 | 2.76 | 78000 | 2.0595 | 16.2731 | 2.1053 | 0.4158 | 0.2869 | 0.1902 | 0.1324 |
| 1.6651 | 2.78 | 78600 | 1.6489 | 15.6772 | 2.1365 | 0.4242 | 0.2924 | 0.1938 | 0.1346 |
| 1.6746 | 2.8 | 79200 | 1.1565 | 15.9062 | 2.1232 | 0.4161 | 0.2848 | 0.1872 | 0.1308 |
| 1.6666 | 2.82 | 79800 | 1.7445 | 15.9407 | 2.1417 | 0.414 | 0.2807 | 0.1817 | 0.1249 |
| 1.6687 | 2.84 | 80400 | 1.9425 | 15.8676 | 2.1240 | 0.4088 | 0.2786 | 0.1821 | 0.1269 |
| 1.6678 | 2.86 | 81000 | 1.6419 | 15.9214 | 2.1125 | 0.417 | 0.2873 | 0.188 | 0.1303 |
| 1.6609 | 2.88 | 81600 | 1.8123 | 15.8579 | 2.1227 | 0.4199 | 0.2904 | 0.1916 | 0.1323 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.9.0
- Tokenizers 0.10.3
|
adamlin/topicalchat-multiturn | c2ba752e071961714c0df373bb45964c0a84309b | 2021-07-02T16:00:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | false | adamlin | null | adamlin/topicalchat-multiturn | 3 | null | transformers | 21,022 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: topicalchat-multiturn
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topicalchat-multiturn
This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5260
## Model description
More information needed
## Intended uses & limitations
More information needed
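A minimal single-turn chat sketch (the user message is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Minimal sketch: generate one response with this DialoGPT-style checkpoint.
tokenizer = AutoTokenizer.from_pretrained("adamlin/topicalchat-multiturn")
model = AutoModelForCausalLM.from_pretrained("adamlin/topicalchat-multiturn")
user_input = "Do you like talking about movies?"  # illustrative user turn
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```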
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 73 | 4.2992 |
| No log | 2.0 | 146 | 3.4433 |
| No log | 3.0 | 219 | 3.1606 |
| No log | 4.0 | 292 | 3.0366 |
| No log | 5.0 | 365 | 2.9679 |
| No log | 6.0 | 438 | 2.9131 |
| 4.1401 | 7.0 | 511 | 2.8752 |
| 4.1401 | 8.0 | 584 | 2.8391 |
| 4.1401 | 9.0 | 657 | 2.8118 |
| 4.1401 | 10.0 | 730 | 2.7871 |
| 4.1401 | 11.0 | 803 | 2.7659 |
| 4.1401 | 12.0 | 876 | 2.7489 |
| 4.1401 | 13.0 | 949 | 2.7331 |
| 2.9768 | 14.0 | 1022 | 2.7196 |
| 2.9768 | 15.0 | 1095 | 2.7071 |
| 2.9768 | 16.0 | 1168 | 2.6940 |
| 2.9768 | 17.0 | 1241 | 2.6854 |
| 2.9768 | 18.0 | 1314 | 2.6728 |
| 2.9768 | 19.0 | 1387 | 2.6647 |
| 2.9768 | 20.0 | 1460 | 2.6562 |
| 2.7864 | 21.0 | 1533 | 2.6482 |
| 2.7864 | 22.0 | 1606 | 2.6439 |
| 2.7864 | 23.0 | 1679 | 2.6326 |
| 2.7864 | 24.0 | 1752 | 2.6107 |
| 2.7864 | 25.0 | 1825 | 2.6043 |
| 2.7864 | 26.0 | 1898 | 2.5970 |
| 2.7864 | 27.0 | 1971 | 2.5908 |
| 2.6568 | 28.0 | 2044 | 2.5862 |
| 2.6568 | 29.0 | 2117 | 2.5828 |
| 2.6568 | 30.0 | 2190 | 2.5765 |
| 2.6568 | 31.0 | 2263 | 2.5742 |
| 2.6568 | 32.0 | 2336 | 2.5682 |
| 2.6568 | 33.0 | 2409 | 2.5656 |
| 2.6568 | 34.0 | 2482 | 2.5614 |
| 2.5489 | 35.0 | 2555 | 2.5605 |
| 2.5489 | 36.0 | 2628 | 2.5552 |
| 2.5489 | 37.0 | 2701 | 2.5541 |
| 2.5489 | 38.0 | 2774 | 2.5494 |
| 2.5489 | 39.0 | 2847 | 2.5491 |
| 2.5489 | 40.0 | 2920 | 2.5455 |
| 2.5489 | 41.0 | 2993 | 2.5452 |
| 2.475 | 42.0 | 3066 | 2.5433 |
| 2.475 | 43.0 | 3139 | 2.5397 |
| 2.475 | 44.0 | 3212 | 2.5386 |
| 2.475 | 45.0 | 3285 | 2.5400 |
| 2.475 | 46.0 | 3358 | 2.5339 |
| 2.475 | 47.0 | 3431 | 2.5327 |
| 2.4144 | 48.0 | 3504 | 2.5327 |
| 2.4144 | 49.0 | 3577 | 2.5312 |
| 2.4144 | 50.0 | 3650 | 2.5338 |
| 2.4144 | 51.0 | 3723 | 2.5314 |
| 2.4144 | 52.0 | 3796 | 2.5309 |
| 2.4144 | 53.0 | 3869 | 2.5289 |
| 2.4144 | 54.0 | 3942 | 2.5290 |
| 2.3642 | 55.0 | 4015 | 2.5270 |
| 2.3642 | 56.0 | 4088 | 2.5270 |
| 2.3642 | 57.0 | 4161 | 2.5263 |
| 2.3642 | 58.0 | 4234 | 2.5267 |
| 2.3642 | 59.0 | 4307 | 2.5273 |
| 2.3642 | 60.0 | 4380 | 2.5258 |
| 2.3642 | 61.0 | 4453 | 2.5253 |
| 2.3216 | 62.0 | 4526 | 2.5244 |
| 2.3216 | 63.0 | 4599 | 2.5256 |
| 2.3216 | 64.0 | 4672 | 2.5227 |
| 2.3216 | 65.0 | 4745 | 2.5241 |
| 2.3216 | 66.0 | 4818 | 2.5244 |
| 2.3216 | 67.0 | 4891 | 2.5236 |
| 2.3216 | 68.0 | 4964 | 2.5251 |
| 2.2879 | 69.0 | 5037 | 2.5231 |
| 2.2879 | 70.0 | 5110 | 2.5254 |
| 2.2879 | 71.0 | 5183 | 2.5242 |
| 2.2879 | 72.0 | 5256 | 2.5254 |
| 2.2879 | 73.0 | 5329 | 2.5253 |
| 2.2879 | 74.0 | 5402 | 2.5228 |
| 2.2879 | 75.0 | 5475 | 2.5247 |
| 2.261 | 76.0 | 5548 | 2.5243 |
| 2.261 | 77.0 | 5621 | 2.5247 |
| 2.261 | 78.0 | 5694 | 2.5250 |
| 2.261 | 79.0 | 5767 | 2.5248 |
| 2.261 | 80.0 | 5840 | 2.5236 |
| 2.261 | 81.0 | 5913 | 2.5264 |
| 2.261 | 82.0 | 5986 | 2.5249 |
| 2.2396 | 83.0 | 6059 | 2.5256 |
| 2.2396 | 84.0 | 6132 | 2.5267 |
| 2.2396 | 85.0 | 6205 | 2.5258 |
| 2.2396 | 86.0 | 6278 | 2.5242 |
| 2.2396 | 87.0 | 6351 | 2.5233 |
| 2.2396 | 88.0 | 6424 | 2.5249 |
| 2.2396 | 89.0 | 6497 | 2.5253 |
| 2.2238 | 90.0 | 6570 | 2.5252 |
| 2.2238 | 91.0 | 6643 | 2.5255 |
| 2.2238 | 92.0 | 6716 | 2.5263 |
| 2.2238 | 93.0 | 6789 | 2.5261 |
| 2.2238 | 94.0 | 6862 | 2.5257 |
| 2.2238 | 95.0 | 6935 | 2.5253 |
| 2.213 | 96.0 | 7008 | 2.5267 |
| 2.213 | 97.0 | 7081 | 2.5258 |
| 2.213 | 98.0 | 7154 | 2.5258 |
| 2.213 | 99.0 | 7227 | 2.5259 |
| 2.213 | 100.0 | 7300 | 2.5260 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
adamlin/usr-topicalchat-ctx | c445e5b06f6c63ee12925f86f63aa38ff52db3c7 | 2021-06-28T12:54:23.000Z | [
"pytorch",
"transformers"
] | null | false | adamlin | null | adamlin/usr-topicalchat-ctx | 3 | null | transformers | 21,023 | Entry not found |
addy88/t5-base-finetuned-sn-to-en | c2c37741bcb516c7c2fa8b5991e31e47e7bbb0bf | 2022-01-02T15:49:39.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:itihasa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | addy88 | null | addy88/t5-base-finetuned-sn-to-en | 3 | null | transformers | 21,024 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- itihasa
model-index:
- name: t5-base-finetuned-sn-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-sn-to-en
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the itihasa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
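In the absence of documented usage, here is a minimal inference sketch. The Sanskrit input sentence is illustrative, and it is an assumption that no task prefix is required, since the card does not describe the expected input format.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "addy88/t5-base-finetuned-sn-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Sanskrit input (illustrative)
text = "धर्मक्षेत्रे कुरुक्षेत्रे समवेता युयुत्सवः"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```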
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
addy88/wav2vec2-malayalam-stt | f63c0adbba3885e78078bc87d4cb2ba517345f5b | 2021-12-19T16:36:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-malayalam-stt | 3 | null | transformers | 21,025 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-malayalam-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-malayalam-stt")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-rajsthani-stt | 71c4573cad5b2410fed2dce248f0ce4b1511bdf6 | 2021-12-19T15:52:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-rajsthani-stt | 3 | null | transformers | 21,026 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-rajsthani-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-rajsthani-stt")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
addy88/wav2vec2-tamil-stt | 4b1ba455ea88fbe9240cf2076e552e78b07c9d60 | 2021-12-19T15:43:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-tamil-stt | 3 | null | transformers | 21,027 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-tamil-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-tamil-stt")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
adeiMousa/dummy-model | 0ca4caa44cbe6743bf4ccca6d1e4bddc4c3511c8 | 2022-01-29T18:50:06.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adeiMousa | null | adeiMousa/dummy-model | 3 | null | transformers | 21,028 | Entry not found |
aditeyabaral/additionalpretrained-distilbert-base-cased | bfd207041c63f4f29b61746d3a22035ea88a9e81 | 2021-10-21T22:30:15.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | aditeyabaral | null | aditeyabaral/additionalpretrained-distilbert-base-cased | 3 | null | transformers | 21,029 | Entry not found |
aditeyabaral/additionalpretrained-roberta-hinglish-small | 9842006c3da1a56887cb4cbba3c2d340c9bf5642 | 2021-10-20T18:29:44.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | aditeyabaral | null | aditeyabaral/additionalpretrained-roberta-hinglish-small | 3 | null | transformers | 21,030 | Entry not found |
aditeyabaral/bert-hinglish-big | d32d01f7f4593f29f2727fb80e5b1ebf755a9fa9 | 2021-09-26T05:36:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aditeyabaral | null | aditeyabaral/bert-hinglish-big | 3 | null | transformers | 21,031 | Entry not found |
aditeyabaral/roberta-hinglish-small | fcf98659bdc0f547c700b38ac2b22dcad224ed20 | 2021-09-25T09:26:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aditeyabaral | null | aditeyabaral/roberta-hinglish-small | 3 | null | transformers | 21,032 | Entry not found |
aditeyabaral/sentencetransformer-contrastive-roberta-base | 59a1adc92ceb9d352310d9adf3254a41c16550a2 | 2021-11-13T13:29:45.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-contrastive-roberta-base | 3 | null | sentence-transformers | 21,033 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-contrastive-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-contrastive-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-contrastive-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-contrastive-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-contrastive-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aditya140/longformerNER_kaggle | 349cea49e9b3485914ddbabe97cda1d6ede8c86f | 2022-02-07T21:22:11.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"transformers"
] | feature-extraction | false | aditya140 | null | aditya140/longformerNER_kaggle | 3 | null | transformers | 21,034 | Entry not found |
ainize/GPT2-futurama-script | 7456f78b7d125727e78c42ef5a8a0b520ca1af67 | 2021-05-21T11:58:18.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ainize | null | ainize/GPT2-futurama-script | 3 | null | transformers | 21,035 | Entry not found |
airKlizz/bart-large-multi-combine-wiki-news | 9dd218a5836908f7be731ec5e2deb65a89a436a5 | 2020-06-11T10:57:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/bart-large-multi-combine-wiki-news | 3 | null | transformers | 21,036 | Entry not found |
airKlizz/bart-large-multi-en-wiki-news | 5d06d568e25bc7bf85de94122824663379c1b41b | 2020-06-09T14:41:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/bart-large-multi-en-wiki-news | 3 | null | transformers | 21,037 | Entry not found |
airKlizz/bert2bert-multi-de-wiki-news | bfe144348ccba967681de115b78115a5834b1a26 | 2020-06-10T08:36:47.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/bert2bert-multi-de-wiki-news | 3 | null | transformers | 21,038 | Entry not found |
airKlizz/distilbart-6-12-multi-combine-wiki-news | a1ee9285e4d194ca919d055c1bcce461c3972ea1 | 2020-08-22T07:50:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/distilbart-6-12-multi-combine-wiki-news | 3 | null | transformers | 21,039 | Entry not found |
airKlizz/distilbart-6-6-multi-combine-wiki-news | a749497a1cff1c8f9854e4d5dbe6a98f07bef028 | 2020-08-22T07:53:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/distilbart-6-6-multi-combine-wiki-news | 3 | null | transformers | 21,040 | Entry not found |
airKlizz/mt5-base-germeval21-toxic-with-data-augmentation | fd8e0edd6e6504aec9954ec351bd97d5e897bf28 | 2021-07-12T15:47:09.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/mt5-base-germeval21-toxic-with-data-augmentation | 3 | null | transformers | 21,041 | Entry not found |
airKlizz/mt5-base-wikinewssum-all-languages | 2e7134e501a5db76ea0a0cd9c7884ee378f2a2b3 | 2021-12-23T12:56:06.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-all-languages | 3 | null | transformers | 21,042 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-all-languages
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-all-languages
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2454
- Rouge1: 8.3826
- Rouge2: 3.5524
- Rougel: 6.8656
- Rougelsum: 7.8362
## Model description
More information needed
## Intended uses & limitations
More information needed
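A minimal summarization sketch is shown below; the input text, truncation length, and beam-search settings are illustrative rather than taken from this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "airKlizz/mt5-base-wikinewssum-all-languages"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Any news-style text in one of the covered languages (illustrative)
article = (
    "The city council approved a new public transport plan on Monday, "
    "adding two tram lines and extending night bus service to the suburbs."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```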
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 3467 | 2.4034 | 8.0363 | 3.2484 | 6.5409 | 7.477 |
| No log | 2.0 | 6934 | 2.3276 | 8.1054 | 3.2905 | 6.5765 | 7.5687 |
| No log | 3.0 | 10401 | 2.2976 | 8.169 | 3.4272 | 6.6597 | 7.6435 |
| No log | 4.0 | 13868 | 2.2795 | 8.2941 | 3.5353 | 6.7881 | 7.7664 |
| 2.8057 | 5.0 | 17335 | 2.2621 | 8.3302 | 3.5599 | 6.8238 | 7.7928 |
| 2.8057 | 6.0 | 20802 | 2.2547 | 8.3818 | 3.5886 | 6.8672 | 7.844 |
| 2.8057 | 7.0 | 24269 | 2.2472 | 8.3809 | 3.5696 | 6.8575 | 7.8327 |
| 2.8057 | 8.0 | 27736 | 2.2454 | 8.3826 | 3.5524 | 6.8656 | 7.8362 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
airKlizz/mt5-base-wikinewssum-english-100 | a0b77cd355f5a215061698ca44a0b01ab394a715 | 2021-12-31T12:02:27.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-english-100 | 3 | null | transformers | 21,043 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-100
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6225
- Rouge1: 3.909
- Rouge2: 0.9312
- Rougel: 3.3835
- Rougelsum: 3.7786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 0.96 | 12 | 14.4949 | 2.7398 | 0.7181 | 2.491 | 2.6561 |
| No log | 1.96 | 24 | 10.5056 | 4.4428 | 1.4293 | 3.8469 | 4.2869 |
| No log | 2.96 | 36 | 8.9856 | 4.1179 | 1.229 | 3.5726 | 3.9693 |
| No log | 3.96 | 48 | 7.7950 | 3.9217 | 1.1339 | 3.4256 | 3.7905 |
| No log | 4.96 | 60 | 7.0734 | 3.8004 | 1.0326 | 3.3246 | 3.6766 |
| No log | 5.96 | 72 | 6.7897 | 3.6351 | 0.9162 | 3.1839 | 3.5149 |
| No log | 6.96 | 84 | 6.6610 | 3.7486 | 0.8829 | 3.2583 | 3.6193 |
| No log | 7.96 | 96 | 6.6225 | 3.909 | 0.9312 | 3.3835 | 3.7786 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
airKlizz/t5-base-multi-combine-wiki-news | c4e499f32627f5c1eeded16eb8a8020dc6afce76 | 2021-06-23T10:50:02.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/t5-base-multi-combine-wiki-news | 3 | null | transformers | 21,044 | Entry not found |
airKlizz/t5-base-multi-en-wiki-news | ab66f89adfcfbf4237b4531f1a76107a7ae12711 | 2021-06-23T11:53:07.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/t5-base-multi-en-wiki-news | 3 | null | transformers | 21,045 | Entry not found |
airKlizz/t5-base-with-title-multi-fr-wiki-news | 4d539af1764660deb26c373804e7702cf40008a5 | 2021-10-17T20:20:45.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fr",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/t5-base-with-title-multi-fr-wiki-news | 3 | null | transformers | 21,046 | ---
language: fr
license: mit
---
|
akadriu/wav2vec2-large-xlsr-53-demo-colab | 01befea49814c4d3126f92733434904cf7495ec8 | 2022-01-18T22:07:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akadriu | null | akadriu/wav2vec2-large-xlsr-53-demo-colab | 3 | 1 | transformers | 21,047 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4170
- Wer: 0.4282
## Model description
More information needed
## Intended uses & limitations
More information needed
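Pending proper documentation, a minimal transcription sketch follows. It assumes a 16 kHz mono recording; the file path is a placeholder.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "akadriu/wav2vec2-large-xlsr-53-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Load a 16 kHz mono recording (placeholder path)
speech, sample_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```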
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.7049 | 0.8 | 200 | 3.0234 | 0.9683 |
| 2.9496 | 1.6 | 400 | 2.9348 | 0.9683 |
| 2.6582 | 2.4 | 600 | 1.2843 | 0.9818 |
| 1.0417 | 3.2 | 800 | 0.6061 | 0.5853 |
| 0.7853 | 4.0 | 1000 | 0.5113 | 0.5013 |
| 0.681 | 4.8 | 1200 | 0.4723 | 0.4695 |
| 0.6074 | 5.6 | 1400 | 0.4528 | 0.4491 |
| 0.5539 | 6.4 | 1600 | 0.4818 | 0.4555 |
| 0.5257 | 7.2 | 1800 | 0.4439 | 0.4298 |
| 0.5038 | 8.0 | 2000 | 0.4495 | 0.4398 |
| 0.4868 | 8.8 | 2200 | 0.4467 | 0.4392 |
| 0.4727 | 9.6 | 2400 | 0.4076 | 0.4045 |
| 0.4493 | 10.4 | 2600 | 0.4559 | 0.4452 |
| 0.4452 | 11.2 | 2800 | 0.4174 | 0.4124 |
| 0.4407 | 12.0 | 3000 | 0.4188 | 0.4098 |
| 0.4385 | 12.8 | 3200 | 0.4123 | 0.4098 |
| 0.4192 | 13.6 | 3400 | 0.4090 | 0.4199 |
| 0.4061 | 14.4 | 3600 | 0.4170 | 0.4282 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
akahana/gpt2-indonesia | ac358da3046380ff3db173f0e0d56ce97b95e025 | 2021-11-30T07:06:10.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"id",
"transformers"
] | text-generation | false | akahana | null | akahana/gpt2-indonesia | 3 | null | transformers | 21,048 | ---
language: "id"
widget:
- text: "dahulu kala ada sebuah"
---
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/gpt2-indonesia"
generator = pipeline('text-generation', model=path)
set_seed(42)
kalimat = "dahulu kala ada sebuah"
preds = generator(kalimat,
                  max_length=64,
                  num_return_sequences=3)
for data in preds:
    print(data)

# example output:
{'generated_text': 'dahulu kala ada sebuah perkampungan yang bernama pomere. namun kini kawasan ini sudah tidak dikembangkan lagi sebagai kawasan industri seperti perusahaan pupuk. sumber-sumber lain sudah sulit ditemukan karena belum adanya kilang pupuk milik indonesia yang sering di kembangkan sehingga belum ada satupun yang masih tersisa yang tersisa. kawasan ini juga memproduksi gula aren milik pt graha bina sarana'}
{'generated_text': 'dahulu kala ada sebuah desa kecil bernama desa. desa yang terkenal seperti halnya kota terdekat lainnya adalah desa tetangga yang bernama sama."\n"sebuah masjid merupakan suatu tempat suci yang digunakan umat islam untuk beribadah. beberapa masjid yang didaftarkan berikut memiliki suatu kehormatan tersendiri bagi masing-masing denominasi islam di dunia. sebuah masjid selain memiliki fungsi sebagai tempat'}
{'generated_text': 'dahulu kala ada sebuah peradaban yang dibangun di sebelah barat sungai mississippi di sekitar desa kecil desa yang bernama sama. penduduk asli di desa ini berasal dari etnis teweh yang berpindah agama menjadi kristen, namun kemudian pindah agama menjadi kristen. desa arawak mempunyai beberapa desa lain seperti adibei, deti, riuhut dan sa'}
``` |
akahana/wav2vec2-base-indonesia | db3f55c2111bd26f6a0a9716255a0bc5b02bfe99 | 2021-11-22T13:03:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | akahana | null | akahana/wav2vec2-base-indonesia | 3 | null | transformers | 21,049 | Entry not found |
akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab | 330883842b9abd6a11e8521e74194eea0b332965 | 2021-12-21T18:26:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akashsivanandan | null | akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab | 3 | null | transformers | 21,050 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tamil-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
- Wer: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0967 | 1.0 | 118 | 4.6437 | 1.0 |
| 3.4973 | 2.0 | 236 | 3.2588 | 1.0 |
| 3.1305 | 3.0 | 354 | 2.6566 | 1.0 |
| 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 |
| 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 |
| 0.525 | 6.0 | 708 | 0.6649 | 0.7995 |
| 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 |
| 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 |
| 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 |
| 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 |
| 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 |
| 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 |
| 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 |
| 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 |
| 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 |
| 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 |
| 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 |
| 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 |
| 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 |
| 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 |
| 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 |
| 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 |
| 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 |
| 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 |
| 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 |
| 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 |
| 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 |
| 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 |
| 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 |
| 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
akozlo/conserv_fulltext_1_18_22 | 3ee91e7f082f2e009a836fd723c41d778394ef0c | 2022-01-18T13:42:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | akozlo | null | akozlo/conserv_fulltext_1_18_22 | 3 | null | transformers | 21,051 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: conserv_fulltext_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conserv_fulltext_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
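As a placeholder until the card is filled in, here is a minimal text-generation sketch; the prompt and generation settings are illustrative.
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="akozlo/conserv_fulltext_1_18_22")
set_seed(42)

for out in generator("The senator argued that", max_length=60, num_return_sequences=2):
    print(out["generated_text"])
```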
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
unbalanced_texts gpt2
|
akrathi007/akk213text | e224031a07f3933804049c8837391b5cdeafb8ce | 2022-02-08T06:59:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | akrathi007 | null | akrathi007/akk213text | 3 | null | transformers | 21,052 | Entry not found |
alangganggang/transformer_exercise_01 | 7c7870ba71dc5ca28e9f25f143a1f2349aa1fdc7 | 2021-11-02T14:41:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | alangganggang | null | alangganggang/transformer_exercise_01 | 3 | null | transformers | 21,053 | Entry not found |
algolet/bert-large-chinese | cde4a45bb97c3cc68f1e31120ad00ef48d415834 | 2021-12-14T10:00:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | algolet | null | algolet/bert-large-chinese | 3 | null | transformers | 21,054 | <p>Chinese Bert Large Model</p>
<p>BERT-large Chinese pre-trained model</p>
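#### Usage
A minimal fill-mask sketch; the example sentence is illustrative.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="algolet/bert-large-chinese")
for prediction in fill_mask("北京是中国的[MASK]都。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```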
#### Training corpus
Chinese Wikipedia plus a large corpus of Chinese news articles from 2018-2020 |
ali2066/finetuned_token_2e-05_16_02_2022-14_37_42 | dc81b06d45ef12a49f3183d1fc9c53b1a1a53d26 | 2022-02-16T13:40:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_2e-05_16_02_2022-14_37_42 | 3 | null | transformers | 21,055 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_37_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_37_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
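Because the label set is not documented here, the sketch below simply prints whatever entity groups the model's config defines; the input sentence and the `aggregation_strategy` setting are illustrative.
```python
from transformers import pipeline

token_clf = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_2e-05_16_02_2022-14_37_42",
    aggregation_strategy="simple",
)
for entity in token_clf("The new update crashes every time I open the settings page."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```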
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-21_08_55 | 3ca859220c60e4555dad2bd744ec8b31515cc6e8 | 2022-02-16T20:11:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-21_08_55 | 3 | null | transformers | 21,056 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_all_16_02_2022-21_08_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_all_16_02_2022-21_08_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2853
- Precision: 0.1677
- Recall: 0.3106
- F1: 0.2178
- Accuracy: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3452 | 0.0526 | 0.1055 | 0.0702 | 0.8507 |
| No log | 2.0 | 60 | 0.2598 | 0.1575 | 0.2680 | 0.1984 | 0.8909 |
| No log | 3.0 | 90 | 0.2398 | 0.1866 | 0.2982 | 0.2295 | 0.9007 |
| No log | 4.0 | 120 | 0.2354 | 0.1949 | 0.3049 | 0.2378 | 0.9002 |
| No log | 5.0 | 150 | 0.2314 | 0.2026 | 0.3166 | 0.2471 | 0.9004 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
alireza7/ARMAN-MSR-persian-base-parsinlu-multiple-choice | ad2f4f1a207e799375d3d501606a2111923a750c | 2021-09-29T19:15:05.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-parsinlu-multiple-choice | 3 | null | transformers | 21,057 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-parsinlu-qqp | e5be71bc0f1e130e10056924feb40000c1a3bb3d | 2021-09-29T19:15:19.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-parsinlu-qqp | 3 | null | transformers | 21,058 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-parsinlu-textual-entailment | d2adba01c6334166bbdefa9364e48c31fcff3e8f | 2021-09-29T19:16:04.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-parsinlu-textual-entailment | 3 | null | transformers | 21,059 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-perkey-title | cd4344566fe4841b1082d28ad4b308b1f5fd2bdb | 2021-09-29T19:16:50.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-perkey-title | 3 | null | transformers | 21,060 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-MSR-persian-base-voa-title | 53e2867acdfe47431c50b8883c54e5e987935cda | 2021-09-29T19:17:05.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base-voa-title | 3 | null | transformers | 21,061 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-parsinlu-multiple-choice | 64dfd84cb9f9bc31fe29c5044150c8011886fc6f | 2021-09-29T19:18:05.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-parsinlu-multiple-choice | 3 | null | transformers | 21,062 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-perkey-summary | e3953b2aa589f30248dba6830d2206c031726800 | 2021-09-29T19:19:10.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-perkey-summary | 3 | null | transformers | 21,063 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-voa-title | 7c4e50bfd079609eb1245d970330e6316c1cbdb0 | 2021-09-29T19:19:31.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-voa-title | 3 | null | transformers | 21,064 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SH-persian-base-wiki-summary | 39d6ba44e554f4962582abdf933aaeb9fde2b910 | 2021-09-29T19:19:39.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SH-persian-base-wiki-summary | 3 | null | transformers | 21,065 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-parsinlu-qqp | b97597ab06205d3670dbfbfdf51dc4f9ef011448 | 2021-09-29T19:20:44.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-parsinlu-qqp | 3 | null | transformers | 21,066 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base-parsinlu-textual-entailment | f8724cc83164885223da6101d9af822745389fa9 | 2021-09-29T19:21:04.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-parsinlu-textual-entailment | 3 | null | transformers | 21,067 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-100-persian-base | 0da6458949655383098fd1efde957f0979dbe4a6 | 2021-09-29T19:22:36.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base | 3 | null | transformers | 21,068 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-parsinlu-qqp | 3b6f35168d9dfa6a1a7df37780dfe6c37b3802a6 | 2021-09-29T19:22:58.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-parsinlu-qqp | 3 | null | transformers | 21,069 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-parsinlu-sentiment-food | 867a782fae0da09f6f4f3d6ce694c717e004374e | 2021-09-29T19:23:05.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-parsinlu-sentiment-food | 3 | null | transformers | 21,070 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-parsinlu-sentiment-movie | 262c5fac7f908bf0075a675eedb01f28fd6b7b2e | 2021-09-29T19:23:12.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-parsinlu-sentiment-movie | 3 | null | transformers | 21,071 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-perkey-summary | 3a52f16158a7394a1357eb27b1806e02a7ac4aab | 2021-09-29T19:23:27.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-perkey-summary | 3 | null | transformers | 21,072 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/ARMAN-SS-80-persian-base-voa-title | 0fe1aeb103ed8a647ec2f2b1f420a3268628e7c5 | 2021-09-29T19:23:47.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-80-persian-base-voa-title | 3 | null | transformers | 21,073 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/PEGASUS-persian-base-parsinlu-multiple-choice | 37943905030d55ff050768ea56e0d0b0c0cfc021 | 2021-09-29T19:25:09.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-parsinlu-multiple-choice | 3 | null | transformers | 21,074 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/PEGASUS-persian-base-parsinlu-sentiment-movie | b490dce9c57397b176eac90ec9f907d4155576e6 | 2021-09-29T19:25:31.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-parsinlu-sentiment-movie | 3 | null | transformers | 21,075 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/t5-small-squad11 | 66e747ceac285ed6caf5c80dcea0b5de677c60af | 2021-06-23T11:14:57.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/t5-small-squad11 | 3 | 1 | transformers | 21,076 | SQuAD 1.1 question-answering based on T5-small.
Example use:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "allenai/t5-small-squad11"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def run_model(input_string, **generator_args):
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)
    return output

run_model("Who is the winner of 2009 olympics? \n Jack and Jill participated, but James won the games.")
```
which should result in the following:
```
['James']
```
|
alvinwatner/pegasus-large-qg-squad-alpha-interro | 891d4dcbf1dea5dca1ec9b30b7c094ad17eb5c73 | 2022-01-04T09:49:48.000Z | [
"pytorch",
"jax",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alvinwatner | null | alvinwatner/pegasus-large-qg-squad-alpha-interro | 3 | null | transformers | 21,077 | Entry not found |
am-shb/bert-base-multilingual-cased-finetuned | ce90a79d85de9ca706444eaded6ea2eb99fd7d81 | 2022-02-03T21:59:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | am-shb | null | am-shb/bert-base-multilingual-cased-finetuned | 3 | null | transformers | 21,078 | ---
tags:
- generated_from_trainer
model-index:
- name: '57426955'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57426955
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin0 | ee49e2b0ecf320c73b0a58061bb6f0dc407f1da1 | 2021-10-16T05:08:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin0 | 3 | null | transformers | 21,079 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0605
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.1566 | 1.07 | 5000 | 9.5587 | 1.0 |
| 3.149 | 2.13 | 10000 | 9.0950 | 1.0 |
| 3.1518 | 3.2 | 15000 | 9.7352 | 1.0 |
| 3.1716 | 4.27 | 20000 | 9.0866 | 1.0 |
| 3.1611 | 5.33 | 25000 | 9.6718 | 1.0 |
| 3.1308 | 6.4 | 30000 | 9.6227 | 1.0 |
| 3.1762 | 7.46 | 35000 | 9.4326 | 1.0 |
| 3.1503 | 8.53 | 40000 | 9.7609 | 1.0 |
| 3.1591 | 9.6 | 45000 | 9.3503 | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_multi-nithin6 | cc5a7880d74b53f5101cf3a152113c1fed3b8f83 | 2021-11-10T05:47:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_multi-nithin6 | 3 | null | transformers | 21,080 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_multi-nithin6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_multi-nithin6
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7654
- Wer: 0.4952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3222 | 4.31 | 2500 | 1.4875 | 0.5021 |
| 1.164 | 8.62 | 5000 | 1.4255 | 0.4816 |
| 1.0753 | 12.93 | 7500 | 1.4086 | 0.4717 |
| 0.9196 | 17.24 | 10000 | 1.4163 | 0.4695 |
| 0.8326 | 21.55 | 12500 | 1.5326 | 0.4650 |
| 0.7306 | 25.86 | 15000 | 1.5793 | 0.4670 |
| 0.5763 | 30.17 | 17500 | 1.7485 | 0.4728 |
| 0.4869 | 34.48 | 20000 | 1.9050 | 0.4797 |
| 0.4183 | 38.79 | 22500 | 2.1386 | 0.4835 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-base-ami_single-vumichien | 05143c5f62d1b815852173119e6c2293791892df | 2021-10-21T08:43:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:ami",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-base-ami_single-vumichien | 3 | null | transformers | 21,081 | ---
language:
- en
license: apache-2.0
datasets:
- ami
metrics:
- wer
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-base-ami_single-vumichien
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ami_single-vumichien
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 0.0 | 1.06 | 2500 | nan | 1.0 |
| 0.0 | 2.12 | 5000 | nan | 1.0 |
| 0.0 | 3.17 | 7500 | nan | 1.0 |
| 0.0 | 4.23 | 10000 | nan | 1.0 |
| 0.0 | 5.29 | 12500 | nan | 1.0 |
| 0.0 | 6.35 | 15000 | nan | 1.0 |
| 0.0 | 7.4 | 17500 | nan | 1.0 |
| 0.0 | 8.46 | 20000 | nan | 1.0 |
| 0.0 | 9.52 | 22500 | nan | 1.0 |
| 0.0 | 10.58 | 25000 | nan | 1.0 |
| 0.0 | 11.63 | 27500 | nan | 1.0 |
| 0.0 | 12.69 | 30000 | nan | 1.0 |
| 0.0 | 13.75 | 32500 | nan | 1.0 |
| 0.0 | 14.81 | 35000 | nan | 1.0 |
| 0.0 | 15.86 | 37500 | nan | 1.0 |
| 0.0 | 16.92 | 40000 | nan | 1.0 |
| 0.0 | 17.98 | 42500 | nan | 1.0 |
| 0.0 | 19.04 | 45000 | nan | 1.0 |
| 0.0 | 20.09 | 47500 | nan | 1.0 |
| 0.0 | 21.15 | 50000 | nan | 1.0 |
| 0.0 | 22.21 | 52500 | nan | 1.0 |
| 0.0 | 23.27 | 55000 | nan | 1.0 |
| 0.0 | 24.32 | 57500 | nan | 1.0 |
| 0.0 | 25.38 | 60000 | nan | 1.0 |
| 0.0 | 26.44 | 62500 | nan | 1.0 |
| 0.0 | 27.5 | 65000 | nan | 1.0 |
| 0.0 | 28.55 | 67500 | nan | 1.0 |
| 0.0 | 29.61 | 70000 | nan | 1.0 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ami-wav2vec2/wav2vec2-large-robust-ami_multi-nithin9 | 7771d7b0a90e21d8a4de8ad76c40c7779f4ed91a | 2021-12-03T08:22:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"ami",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ami-wav2vec2 | null | ami-wav2vec2/wav2vec2-large-robust-ami_multi-nithin9 | 3 | 1 | transformers | 21,082 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- ami
- generated_from_trainer
model-index:
- name: wav2vec2-large-robust-ami_multi-nithin9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-robust-ami_multi-nithin9
This model is a fine-tuned version of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) on the AMI-IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4380
- Wer: 0.4318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
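For readers who want to approximate this setup, the values above map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction, not the original training script; the output directory is a placeholder, and the Adam betas/epsilon listed above are the library defaults.
```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-robust-ami",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 4 * 8 = total train batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=40.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```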
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3421 | 2.16 | 2500 | 1.2730 | 0.4097 |
| 1.229 | 4.31 | 5000 | 1.2522 | 0.3908 |
| 1.1494 | 6.47 | 7500 | 1.1937 | 0.3857 |
| 1.0801 | 8.62 | 10000 | 1.1936 | 0.3838 |
| 1.0366 | 10.78 | 12500 | 1.1860 | 0.3936 |
| 1.0292 | 12.93 | 15000 | 1.2014 | 0.3819 |
| 0.9217 | 15.09 | 17500 | 1.2313 | 0.3857 |
| 0.9182 | 17.24 | 20000 | 1.2617 | 0.3923 |
| 0.8731 | 19.4 | 22500 | 1.2850 | 0.3940 |
| 0.8471 | 21.55 | 25000 | 1.3432 | 0.3912 |
| 0.8372 | 23.71 | 27500 | 1.3238 | 0.3888 |
| 0.7905 | 25.86 | 30000 | 1.3911 | 0.3962 |
| 0.7553 | 28.02 | 32500 | 1.4314 | 0.3974 |
| 0.7448 | 30.17 | 35000 | 1.4246 | 0.4007 |
| 0.7228 | 32.33 | 37500 | 1.4303 | 0.4006 |
| 0.6941 | 34.48 | 40000 | 1.5059 | 0.4006 |
| 0.6804 | 36.64 | 42500 | 1.5281 | 0.4008 |
| 0.6652 | 38.79 | 45000 | 1.5382 | 0.4004 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42 | 4c044a008229d8ef85c6b0b388d2d402e9ad2f4a | 2022-02-21T18:31:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42 | 3 | null | transformers | 21,083 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
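No usage snippet is given in the card. A minimal question-answering sketch is shown below; the question and context are illustrative placeholders, and because this checkpoint saw only k=16 training examples its answers should be treated as a few-shot baseline rather than a reliable QA system.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42",
)

# Placeholder inputs; expect noisy answers from a k=16 few-shot model.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```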
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
| Exact Match | F1 |
|:-----------------:|:-----------------:|
| 3.207190160832545 | 6.680463956037787 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-42 | 16b28ddaf4b2a1b33d39128b12e55acf2c3e25ec | 2022-02-21T20:54:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-42 | 3 | null | transformers | 21,084 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
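As an alternative to the high-level `pipeline` API, the sketch below loads this checkpoint directly with `AutoModelForQuestionAnswering`; the inputs and the greedy span decoding are the generic extractive-QA recipe, not code from this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-42"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# Placeholder question/context pair.
inputs = tokenizer(
    "What is the capital of France?",
    "Paris is the capital and largest city of France.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# Greedy decoding: take the most likely start and end token positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```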
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
| Exact Match | F1 |
|:-----------------:|:------------------:|
| 8.618732261116367 | 14.074017518582023 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-42 | 0a2ead2c74be2de1013a4ba7489e2a2bcc48a32d | 2022-02-21T23:04:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-42 | 3 | null | transformers | 21,085 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
| Exact Match | F1 |
|:------------------:|:------------------:|
| 12.573320719016083 | 22.855895753681814 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42 | 2f4e5ab10f026427cd4556ae1762f066f768fafa | 2022-02-21T22:04:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42 | 3 | null | transformers | 21,086 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
| Exact Match | F1 |
|:-----------------:|:-----------------:|
| 4.541154210028382 | 10.04181288563879 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat | 6a0488e722a821e9003858e1dd661fef072d00db | 2021-10-02T16:53:01.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat | 3 | null | transformers | 21,087 | ---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- conll2003
model_index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
args: conll2003
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pistherea-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat | a4736176372d5ea532e34a33a2df07f3d1d1d1a7 | 2021-10-02T10:20:06.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat | 3 | null | transformers | 21,088 | ---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- conll2003
model_index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
args: conll2003
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/distilbert-base-uncased-qa-with-ner | 7f5d3d397c179cf76822c1ceea8748e82f8618da | 2021-07-19T01:20:54.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-qa-with-ner | 3 | null | transformers | 21,089 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-qa-with-ner
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-qa-with-ner
This model is a fine-tuned version of [andi611/distilbert-base-uncased-qa](https://huggingface.co/andi611/distilbert-base-uncased-qa) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/distilbert-base-uncased-squad2-with-ner | d5c2739d1cd793e29b0a3fc03101a57699961480 | 2021-07-25T14:29:48.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | false | andi611 | null | andi611/distilbert-base-uncased-squad2-with-ner | 3 | null | transformers | 21,090 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: distilbert-base-uncased-squad2-with-ner
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-with-ner
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the conll2003 dataset.
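The base checkpoint is a SQuAD 2.0 model, so unanswerable questions are part of its training distribution; the hedged sketch below uses the question-answering pipeline's `handle_impossible_answer` flag to allow an empty answer. The question and context are placeholders, and how the CoNLL-2003 portion of the training data was cast into QA form is not documented in this card.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/distilbert-base-uncased-squad2-with-ner",
)

# SQuAD 2.0-style inference with placeholder inputs: the model may return an
# empty answer when the question cannot be answered from the context.
result = qa(
    question="Who acquired the company?",
    context="The company reported higher quarterly revenue and announced no acquisitions.",
    handle_impossible_answer=True,
)
print(result)
```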
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andrejmiscic/simcls-scorer-xsum | baf069137b9c604979671e234f423a382040238b | 2021-10-16T21:06:24.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"en",
"dataset:xsum",
"arxiv:2106.01890",
"arxiv:1808.08745",
"transformers",
"simcls"
] | feature-extraction | false | andrejmiscic | null | andrejmiscic/simcls-scorer-xsum | 3 | null | transformers | 21,091 | ---
language:
- en
tags:
- simcls
datasets:
- xsum
---
# SimCLS
SimCLS is a framework for abstractive summarization presented in [SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization](https://arxiv.org/abs/2106.01890).
It is a two-stage approach consisting of a *generator* and a *scorer*. In the first stage, a large pre-trained model for abstractive summarization (the *generator*) is used to generate candidate summaries, whereas, in the second stage, the *scorer* assigns a score to each candidate given the source document. The final summary is the highest-scoring candidate.
This model is the *scorer* trained for summarization of XSum ([paper](https://arxiv.org/abs/1808.08745), [datasets](https://huggingface.co/datasets/xsum)). It should be used in conjunction with [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum). See [our Github repository](https://github.com/andrejmiscic/simcls-pytorch) for details on training, evaluation, and usage.
## Usage
```bash
git clone https://github.com/andrejmiscic/simcls-pytorch.git
cd simcls-pytorch
pip3 install torch torchvision torchaudio transformers sentencepiece
```
```python
from src.model import SimCLS, GeneratorType
summarizer = SimCLS(generator_type=GeneratorType.Pegasus,
generator_path="google/pegasus-xsum",
scorer_path="andrejmiscic/simcls-scorer-xsum")
article = "This is a news article."
summary = summarizer(article)
print(summary)
```
### Results
All of our results are reported together with 95% confidence intervals computed with 10,000 bootstrap iterations. See the [SimCLS paper](https://arxiv.org/abs/2106.01890) for a description of the baselines.
| System | Rouge-1 | Rouge-2 | Rouge-L |
|------------------|----------------------:|----------------------:|----------------------:|
| Pegasus | 47.21 | 24.56 | 39.25 |
| **SimCLS paper** | --- | --- | --- |
| Origin | 47.10 | 24.53 | 39.23 |
| Min | 40.97 | 19.18 | 33.68 |
| Max | 52.45 | 28.28 | 43.36 |
| Random | 46.72 | 23.64 | 38.55 |
| **SimCLS** | 47.61 | 24.57 | 39.44 |
| **Our results** | --- | --- | --- |
| Origin | 47.16, [46.85, 47.48] | 24.59, [24.25, 24.92] | 39.30, [38.96, 39.62] |
| Min | 41.06, [40.76, 41.34] | 18.30, [18.03, 18.56] | 32.70, [32.42, 32.97] |
| Max | 51.83, [51.53, 52.14] | 28.92, [28.57, 29.26] | 44.02, [43.69, 44.36] |
| Random | 46.47, [46.17, 46.78] | 23.45, [23.13, 23.77] | 38.28, [37.96, 38.60] |
| **SimCLS** | 47.17, [46.87, 47.46] | 23.90, [23.59, 24.23] | 38.96, [38.64, 39.29] |
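The confidence intervals above can be reproduced with a plain percentile bootstrap over per-document scores; the snippet below is a generic sketch of that procedure, not the authors' evaluation code, and the per-document scores it consumes are placeholders.
```python
import numpy as np

def bootstrap_ci(scores, n_iter=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-document scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.empty(n_iter)
    for i in range(n_iter):
        resample = rng.choice(scores, size=scores.size, replace=True)
        means[i] = resample.mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), lo, hi

# Placeholder per-document Rouge-1 scores, just to show the output format.
fake_scores = np.random.default_rng(1).normal(47.0, 10.0, size=500)
mean, lo, hi = bootstrap_ci(fake_scores)
print(f"{mean:.2f}, [{lo:.2f}, {hi:.2f}]")
```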
### Citation of the original work
```bibtex
@inproceedings{liu-liu-2021-simcls,
title = "{S}im{CLS}: A Simple Framework for Contrastive Learning of Abstractive Summarization",
author = "Liu, Yixin and
Liu, Pengfei",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.135",
doi = "10.18653/v1/2021.acl-short.135",
pages = "1065--1072",
}
```
|
andrek/LAT2NOB | f86a78e4802757975e5e9a0ce098c4b1c87bb615 | 2021-09-23T13:06:22.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"no",
"transformers",
"translation",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | andrek | null | andrek/LAT2NOB | 3 | null | transformers | 21,092 | ---
language: no
license: cc-by-4.0
tags:
- translation
widget:
- text: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua.
---
|
andresestevez/bert-base-cased-finetuned-squad | d822ad6e875db38d03f0fd6cb00c1b37eea22cd7 | 2022-02-23T19:12:49.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | andresestevez | null | andresestevez/bert-base-cased-finetuned-squad | 3 | null | transformers | 21,093 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.13.3
- Tokenizers 0.10.3
|
angiquer/twitterko-cha-electra-base-generator | bb46d12a1386423bd50532ba7ac4aef76c8fd9ee | 2020-07-07T04:41:55.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | angiquer | null | angiquer/twitterko-cha-electra-base-generator | 3 | null | transformers | 21,094 | Entry not found |
angiquer/twitterko-electra-base-discriminator | bf1011d6fc00f0fb48ca63c39e962caef8a88a9d | 2020-07-10T01:39:01.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | angiquer | null | angiquer/twitterko-electra-base-discriminator | 3 | null | transformers | 21,095 | Entry not found |
ann101020/le2sbot-hp | 8c0bd616ad148729d9f8c7e030ccecb15bde4d24 | 2021-06-04T11:59:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ann101020 | null | ann101020/le2sbot-hp | 3 | null | transformers | 21,096 | ---
tags:
- conversational
---
# My Awesome Model |
annadmitrieva/old-church-slavonic-pos | 169ef25bc86330a98aecc4913bb7040a9ba402e3 | 2021-11-28T15:51:41.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | annadmitrieva | null | annadmitrieva/old-church-slavonic-pos | 3 | null | transformers | 21,097 | A POS-tagger for Old Church Slavonic trained on the Old Church Slavonic UD treebank (https://github.com/UniversalDependencies/UD_Old_Church_Slavonic-PROIEL). GitHub with api: https://github.com/annadmitrieva/chu-api |
anon-submission-mk/distilbert-base-macedonian-cased | 958821afc6bb8b43ea2221b304ead4641df21156 | 2021-05-19T11:46:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | null | false | anon-submission-mk | null | anon-submission-mk/distilbert-base-macedonian-cased | 3 | null | transformers | 21,098 | Entry not found |
anon-submission-mk/electra-base-macedonian-bulgarian-cased-discriminator | 7db4a03ffca652467a97f1b8c805dd933932e9f5 | 2020-06-17T21:40:34.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | anon-submission-mk | null | anon-submission-mk/electra-base-macedonian-bulgarian-cased-discriminator | 3 | null | transformers | 21,099 | Entry not found |