modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
anton-l/distilhubert-ft-keyword-spotting | a2c3a200d28ea8ddd3f1b8f098178ddc92805a74 | 2021-10-27T19:00:06.000Z | [
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | anton-l | null | anton-l/distilhubert-ft-keyword-spotting | 14 | null | transformers | 9,800 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: distilhubert-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-keyword-spotting
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1163
- Accuracy: 0.9706
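A minimal usage sketch for keyword spotting with this checkpoint (the audio path is a placeholder; any 16kHz speech clip works):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="anton-l/distilhubert-ft-keyword-spotting",
)

# Replace with a real 16kHz recording of one of the Speech Commands keywords.
print(classifier("sample.wav"))
```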
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8176 | 1.0 | 200 | 0.7718 | 0.8116 |
| 0.2364 | 2.0 | 400 | 0.2107 | 0.9662 |
| 0.1198 | 3.0 | 600 | 0.1374 | 0.9678 |
| 0.0891 | 4.0 | 800 | 0.1163 | 0.9706 |
| 0.085 | 5.0 | 1000 | 0.1180 | 0.9690 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-xls-r-1b-hi-with-lm | a1c61a357e267474d3f243982b8d98a453ad2aff | 2022-03-23T18:26:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-1b-hi-with-lm | 14 | 1 | transformers | 9,801 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: XLS-R-1B - Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 15.899
- name: Test CER
type: cer
value: 5.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-1B - Hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6921
- Wer: 0.3547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0674 | 2.07 | 400 | 1.3411 | 0.8835 |
| 1.324 | 4.15 | 800 | 0.9311 | 0.7142 |
| 1.2023 | 6.22 | 1200 | 0.8060 | 0.6170 |
| 1.1573 | 8.29 | 1600 | 0.7415 | 0.4972 |
| 1.1117 | 10.36 | 2000 | 0.7248 | 0.4588 |
| 1.0672 | 12.44 | 2400 | 0.6729 | 0.4350 |
| 1.0336 | 14.51 | 2800 | 0.7117 | 0.4346 |
| 1.0025 | 16.58 | 3200 | 0.7019 | 0.4272 |
| 0.9578 | 18.65 | 3600 | 0.6792 | 0.4118 |
| 0.9272 | 20.73 | 4000 | 0.6863 | 0.4156 |
| 0.9321 | 22.8 | 4400 | 0.6535 | 0.3972 |
| 0.8802 | 24.87 | 4800 | 0.6766 | 0.3906 |
| 0.844 | 26.94 | 5200 | 0.6782 | 0.3949 |
| 0.8387 | 29.02 | 5600 | 0.6916 | 0.3921 |
| 0.8042 | 31.09 | 6000 | 0.6806 | 0.3797 |
| 0.793 | 33.16 | 6400 | 0.7120 | 0.3831 |
| 0.7567 | 35.23 | 6800 | 0.6862 | 0.3808 |
| 0.7463 | 37.31 | 7200 | 0.6893 | 0.3709 |
| 0.7053 | 39.38 | 7600 | 0.7096 | 0.3701 |
| 0.6906 | 41.45 | 8000 | 0.6921 | 0.3676 |
| 0.6891 | 43.52 | 8400 | 0.7167 | 0.3663 |
| 0.658 | 45.6 | 8800 | 0.6833 | 0.3580 |
| 0.6576 | 47.67 | 9200 | 0.6914 | 0.3569 |
| 0.6358 | 49.74 | 9600 | 0.6922 | 0.3551 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-1b-hi-with-lm --dataset mozilla-foundation/common_voice_8_0 --config hi --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-1b-hi-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "hi", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "तुम्हारे पास तीन महीने बचे हैं"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 26.209 | 15.899 |
|
anzorq/t5-v1_1-small-ru_kbd-cased | f1b18523ddc629dbb3860faf5b80e3ee4f459586 | 2022-01-16T05:24:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"kbd",
"dataset:anzorq/kbd-ru-1.67M-temp",
"dataset:17753 Russian-Kabardian pairs of text",
"transformers",
"translation",
"autotrain_compatible"
]
| translation | false | anzorq | null | anzorq/t5-v1_1-small-ru_kbd-cased | 14 | null | transformers | 9,802 | ---
language:
- ru
- kbd
tags:
- translation
datasets:
- anzorq/kbd-ru-1.67M-temp
- 17753 Russian-Kabardian pairs of text
widget:
- text: "ru->kbd: Я иду домой."
example_title: "Я иду домой."
- text: "ru->kbd: Дети играют во дворе."
example_title: "Дети играют во дворе."
- text: "ru->kbd: Сколько тебе лет?"
example_title: "Сколько тебе лет?"
---
## [google/t5-v1_1-small](google/t5-v1_1-small) model
### pretrained on [anzorq/kbd-ru-1.67M-temp](https://huggingface.co/datasets/anzorq/kbd-ru-1.67M-temp)
### fine-tuned on **17753** Russian-Kabardian word/sentence pairs
The kbd text uses a custom Latin script for optimization reasons.
Translation input should start with '**ru->kbd:** '.
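A minimal translation sketch, assuming the standard T5 seq2seq API (the decoding settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "anzorq/t5-v1_1-small-ru_kbd-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The 'ru->kbd: ' prefix tells the model to translate Russian into Kabardian.
text = "ru->kbd: Я иду домой."
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)

# The output is Kabardian text in the custom Latin script mentioned above.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```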
**Tokenizer**: T5 sentencepiece, char, cased. |
artemis13fowl/bert-finetuned-ner | a43f3802988799a28b1faef7c1896c762a185237 | 2022-01-22T10:35:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | artemis13fowl | null | artemis13fowl/bert-finetuned-ner | 14 | null | transformers | 9,803 | Entry not found |
asapp/sew-d-mid-100k | 5fcfbf9cf6a7c46ef5ac99320bfc3fbb81dbed5c | 2021-10-28T13:56:56.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
]
| feature-extraction | false | asapp | null | asapp/sew-d-mid-100k | 14 | null | transformers | 9,804 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
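A minimal feature-extraction sketch, assuming the checkpoint ships a feature-extractor config (the zero waveform is a stand-in for real 16kHz speech):
```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "asapp/sew-d-mid-100k"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# One second of silence at 16kHz, standing in for a real speech waveform.
waveform = torch.zeros(16_000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```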
|
bayartsogt/structbert-large | 2cc408700d1f2c213e4fa7159f5c4fa66e76dc8a | 2021-07-26T21:15:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"arxiv:1908.04577",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | bayartsogt | null | bayartsogt/structbert-large | 14 | null | transformers | 9,805 | # StructBERT: Unofficial Copy
Official Repository Link: https://github.com/alibaba/AliceMind/tree/main/StructBERT
**Disclaimer**
* This model card is not produced by [AliceMind Team](https://github.com/alibaba/AliceMind/)
## Reproduce HFHub models:
Download model/tokenizer vocab
```bash
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/large_bert_config.json && mv large_bert_config.json config.json
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/vocab.txt
wget https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model && mv en_model pytorch_model.bin
```
```python
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer
config = AutoConfig.from_pretrained("./config.json")
model = AutoModelForMaskedLM.from_pretrained(".", config=config)
tokenizer = AutoTokenizer.from_pretrained(".", config=config)
model.push_to_hub("structbert-large")
tokenizer.push_to_hub("structbert-large")
```
[https://arxiv.org/abs/1908.04577](https://arxiv.org/abs/1908.04577)
# StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
## Introduction
We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training.
Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential
order of words and sentences, which leverage language structures at the word and sentence levels,
respectively.
## Pre-trained models
|Model | Description | #params | Download |
|------------------------|-------------------------------------------|------|------|
|structbert.en.large | StructBERT using the BERT-large architecture | 340M | [structbert.en.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model) |
|structroberta.en.large | StructRoBERTa continue training from RoBERTa | 355M | Coming soon |
|structbert.ch.large | Chinese StructBERT; BERT-large architecture | 330M | [structbert.ch.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/ch_model) |
## Results
The results of GLUE & CLUE tasks can be reproduced using the hyperparameters listed in the following "Example usage" section.
#### structbert.en.large
[GLUE benchmark](https://gluebenchmark.com/leaderboard)
|Model| MNLI | QNLIv2 | QQP | SST-2 | MRPC |
|--------------------|-------|-------|-------|-------|-------|
|structbert.en.large |86.86% |93.04% |91.67% |93.23% |86.51% |
#### structbert.ch.large
[CLUE benchmark](https://www.cluebenchmarks.com/)
|Model | CMNLI | OCNLI | TNEWS | AFQMC |
|--------------------|-------|-------|-------|-------|
|structbert.ch.large |84.47% |81.28% |68.67% |76.11% |
## Example usage
#### Requirements and Installation
* [PyTorch](https://pytorch.org/) version >= 1.0.1
* Install other libraries via
```
pip install -r requirements.txt
```
* For faster training install NVIDIA's [apex](https://github.com/NVIDIA/apex) library
#### Finetune MNLI
```
python run_classifier_multi_task.py \
--task_name MNLI \
--do_train \
--do_eval \
--do_test \
--amp_type O1 \
--lr_decay_factor 1 \
--dropout 0.1 \
--do_lower_case \
--detach_index -1 \
--core_encoder bert \
--data_dir path_to_glue_data \
--vocab_file config/vocab.txt \
--bert_config_file config/large_bert_config.json \
--init_checkpoint path_to_pretrained_model \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--fast_train \
--gradient_accumulation_steps 1 \
--output_dir path_to_output_dir
```
## Citation
If you use our work, please cite:
```
@article{wang2019structbert,
title={Structbert: Incorporating language structures into pre-training for deep language understanding},
author={Wang, Wei and Bi, Bin and Yan, Ming and Wu, Chen and Bao, Zuyi and Xia, Jiangnan and Peng, Liwei and Si, Luo},
journal={arXiv preprint arXiv:1908.04577},
year={2019}
}
``` |
beomi/beep-koelectra-base-v3-discriminator-hate | 9c3b5123609877a71181cca0d3201a174b7bdaf6 | 2021-10-23T06:06:51.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | beomi | null | beomi/beep-koelectra-base-v3-discriminator-hate | 14 | null | transformers | 9,806 | Entry not found |
bhavikardeshna/multilingual-bert-base-cased-german | 4a990b0764095ef074fcb3fa10243efa38eeb422 | 2021-12-21T11:43:10.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/multilingual-bert-base-cased-german | 14 | null | transformers | 9,807 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bioformers/bioformer-cased-v1.0-qnli | 6f1e92e54711c4dd603efab1feaceadf8c330abf | 2021-09-23T02:52:03.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:1804.07461",
"transformers"
]
| text-classification | false | bioformers | null | bioformers/bioformer-cased-v1.0-qnli | 14 | null | transformers | 9,808 | [bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.883397
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
(source: https://paperswithcode.com/dataset/qnli)
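A minimal sketch of sentence-pair classification with this checkpoint (the question/sentence pair is made up; check `model.config.id2label` for the actual label names):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bioformers/bioformer-cased-v1.0-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What enzyme does aspirin inhibit?"
sentence = "Aspirin irreversibly inhibits the cyclooxygenase (COX) enzymes."

inputs = tokenizer(question, sentence, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```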
Original GLUE paper: https://arxiv.org/abs/1804.07461 |
blanchefort/rubert-base-cased-sentiment-mokoron | 6473a1c7f0eb1912745d8501144c507f2b484cc8 | 2021-05-19T13:00:13.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"ru",
"dataset:RuTweetCorp",
"transformers",
"sentiment"
]
| text-classification | false | blanchefort | null | blanchefort/rubert-base-cased-sentiment-mokoron | 14 | null | transformers | 9,809 | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuTweetCorp
---
# RuBERT for Sentiment Analysis of Tweets
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/).
## Labels
0: POSITIVE
1: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True)
@torch.no_grad()
def predict(text):
    inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**inputs)
    predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(predicted, dim=1).numpy()
    return predicted
```
## Dataset used for model training
**[RuTweetCorp](https://study.mokoron.com/)**
> Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.
|
blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa | 7e208b19476da4116d5792935c54c4a4a5574794 | 2021-09-12T15:54:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | blizrys | null | blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa | 14 | null | transformers | 9,810 | ---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-base-cased-v1.1-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-finetuned-pubmedqa
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3182
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8591 | 0.58 |
| No log | 2.0 | 114 | 0.9120 | 0.58 |
| No log | 3.0 | 171 | 0.8159 | 0.62 |
| No log | 4.0 | 228 | 1.1651 | 0.54 |
| No log | 5.0 | 285 | 1.2350 | 0.6 |
| No log | 6.0 | 342 | 1.5563 | 0.68 |
| No log | 7.0 | 399 | 2.0233 | 0.58 |
| No log | 8.0 | 456 | 2.2054 | 0.5 |
| 0.4463 | 9.0 | 513 | 2.2434 | 0.5 |
| 0.4463 | 10.0 | 570 | 2.3182 | 0.5 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
boronbrown48/1_model_topic_classification_v2 | c826e738108c7332f29269c2a45a3917233ede46 | 2021-12-10T16:13:27.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
]
| text-classification | false | boronbrown48 | null | boronbrown48/1_model_topic_classification_v2 | 14 | null | transformers | 9,811 | Entry not found |
bypequeno/DialoGPT-small-michaelscott | 07b21d8ca02f98a228558c6630ae6ec5f164f27b | 2022-01-25T23:01:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | bypequeno | null | bypequeno/DialoGPT-small-michaelscott | 14 | null | transformers | 9,812 | ---
tags:
- conversational
---
# Michael Scott dialog model |
cahya/wav2vec2-large-xlsr-turkish-artificial | d93d3b7e1822eb1c31e59ceefef4a6a11843289d | 2021-07-06T00:04:36.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-turkish-artificial | 14 | 1 | transformers | 9,813 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish with Artificial Voices by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 66.98
---
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 66.98 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
carlosaguayo/distilbert-base-uncased-finetuned-emotion | 431d72b2184faba4ba6c4c76fb66427512c28bce | 2022-07-13T14:50:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | carlosaguayo | null | carlosaguayo/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,814 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9299984897610097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Accuracy: 0.9295
- F1: 0.9300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2853 | 1.0 | 250 | 0.1975 | 0.9235 | 0.9233 |
| 0.1568 | 2.0 | 500 | 0.1689 | 0.9295 | 0.9300 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
cemdenizsel/10k-finetuned-bert-model | 379b77d2ffc148caf5d6a59eb715a2925cd04f7d | 2021-05-28T15:09:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cemdenizsel | null | cemdenizsel/10k-finetuned-bert-model | 14 | null | transformers | 9,815 | Entry not found |
chgk13/tiny_russian_toxic_bert | e3e1b2b782bfb88e961f8d754931f57722fbcb15 | 2022-01-30T09:49:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | chgk13 | null | chgk13/tiny_russian_toxic_bert | 14 | null | transformers | 9,816 | Entry not found |
chinhon/pegasus-multi_news-malay_headlines_02 | dc21808d1bcdd6ba0f01419423842b9faaa543df | 2021-11-13T18:40:42.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | chinhon | null | chinhon/pegasus-multi_news-malay_headlines_02 | 14 | null | transformers | 9,817 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-multi_news-malay_headlines_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-malay_headlines_02
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9295
- Rouge1: 39.9859
- Rouge2: 20.1943
- Rougel: 36.1927
- Rougelsum: 36.2105
- Gen Len: 35.6062
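A minimal headline-generation sketch (the article variable is a placeholder for Malay news text; generation settings are illustrative):
```python
from transformers import pipeline

headline_writer = pipeline(
    "summarization",
    model="chinhon/pegasus-multi_news-malay_headlines_02",
)

article = "..."  # full Malay news article text goes here
print(headline_writer(article, max_length=40, truncation=True)[0]["summary_text"])
```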
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0943 | 1.0 | 53582 | 1.9295 | 39.9859 | 20.1943 | 36.1927 | 36.2105 | 35.6062 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
chrisliu298/arxiv_ai_gpt2 | 271fd3cbffcdcac20b87df6dc804fb6b9fcf7483 | 2021-05-21T14:59:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:https://github.com/staeiou/arxiv_archive/tree/v1.0.1",
"transformers",
"arxiv"
]
| text-generation | false | chrisliu298 | null | chrisliu298/arxiv_ai_gpt2 | 14 | null | transformers | 9,818 | ---
language: "en"
tags:
- gpt2
- arxiv
- transformers
datasets:
- https://github.com/staeiou/arxiv_archive/tree/v1.0.1
---
# ArXiv AI GPT-2
## Model description
This GPT-2 (774M) model is capable of generating abstracts given paper titles. It was trained using all research paper titles and abstracts under artificial intelligence (AI), machine learning (LG), computation and language (CL), and computer vision and pattern recognition (CV) on arXiv.
## Intended uses & limitations
#### How to use
To generate paper abstracts, use the provided `generate.py` [here](https://gist.github.com/chrisliu298/ccb8144888eace069da64ad3e6472d64). This is very similar to Hugging Face's `run_generation.py` [here](https://github.com/huggingface/transformers/tree/master/examples/text-generation). You can simply replace the model path with your own (line 89) and change the input string to your paper title (line 127). If you want to use your own script, make sure to prepend `<|startoftext|> ` at the front and append ` <|sep|>` at the end of the paper title.
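If you prefer not to use the gist, a minimal generation sketch along the same lines (the title and decoding settings are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chrisliu298/arxiv_ai_gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the paper title in the same markers used during training.
prompt = "<|startoftext|> A Survey of Transformer-Based Question Answering <|sep|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(
    input_ids,
    max_length=300,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```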
## Training data
I selected a subset of the [arXiv Archive](https://github.com/staeiou/arxiv_archive) dataset (Geiger, 2019) as the training and evaluation data to fine-tune GPT-2. The original arXiv Archive dataset contains a full archive of metadata about papers on arxiv.org, from the start of the site in 1993 to the end of 2019. The subset includes all the paper titles (query) and abstracts (context) under the Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Computation and Language (cs.CL), and Computer Vision and Pattern Recognition (cs.CV) categories. The sub-dataset statistics and the train/validation/test split are shown below.
| Splits | Count | Percentage (%) | BPE Token Count |
| :--------: | :--------: | :------------: | :-------------: |
| Train | 90,000 | 90.11 | 20,834,012 |
| Validation | 4,940 | 4.95 | 1,195,056 |
| Test | 4,940 | 4.95 | 1,218,754 |
| **Total** | **99,880** | **100** | **23,247,822** |
The original dataset is a tab-separated-value file, so I wrote a simple preprocessing script to convert it into a plain text file, which is the input format (a single document) expected when training GPT-2. An example of a paper's title and abstract is shown below.
```text
<|startoftext|> Some paper title <|sep|> Some paper abstract <|endoftext|>
```
Because there are many cross-domain papers in the dataset, I deduplicated it using the arXiv ID, which is unique for every paper. I also sorted the papers by submission date; by doing so, one can examine GPT-2's ability to use learned terminology when it is prompted with paper titles from the "future."
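A sketch of that preparation step, assuming the metadata is loaded into a pandas DataFrame (the column names are illustrative, not the actual arXiv Archive schema):
```python
import pandas as pd

df = pd.read_csv("arxiv_archive.tsv", sep="\t")
df = df.drop_duplicates(subset="arxiv_id")  # one row per paper, even if cross-listed
df = df.sort_values("created")              # chronological order by submission date
```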
## Training procedure
I used block size = 512, batch size = 1, gradient accumulation = 1, learning rate = 1e-5, epochs = 5, and everything else follows the default model configuration.
## Eval results
The resulting GPT-2 large model's perplexity score on the test set is **14.9413**.
## Reference
```bibtex
@dataset{r_stuart_geiger_2019_2533436,
author= {R. Stuart Geiger},
title={{ArXiV Archive: A tidy and complete archive of metadata for papers on arxiv.org, 1993-2019}},
month=jan,
year= 2019,
publisher={Zenodo},
version= {v1.0.1},
doi={10.5281/zenodo.2533436},
url={https://doi.org/10.5281/zenodo.2533436}
}
```
|
codegram/calbert-base-uncased | 39f73fa3b23a980fc195fb15fd8a108759ceb34e | 2020-12-11T21:36:11.000Z | [
"pytorch",
"albert",
"ca",
"transformers",
"masked-lm",
"catalan",
"exbert",
"license:mit"
]
| null | false | codegram | null | codegram/calbert-base-uncased | 14 | 1 | transformers | 9,819 | ---
language: "ca"
tags:
- masked-lm
- catalan
- exbert
license: mit
---
# Calbert: a Catalan Language Model
## Introduction
CALBERT is an open-source language model for Catalan pretrained on the ALBERT architecture.
It is now available on Hugging Face in its `tiny-uncased` version and `base-uncased` (the one you're looking at) as well, and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).
For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert)
## Pre-trained models
| Model | Arch. | Training data |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram` / `calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram` / `calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
## How to use Calbert with HuggingFace
#### Load Calbert and its tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")
model.eval()  # disable dropout (or leave in train mode to fine-tune)
```
#### Filling masks using pipeline
```python
from transformers import pipeline
calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-base-uncased", tokenizer="codegram/calbert-base-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.614592969417572, 'token': 61},
# {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.06058056280016899, 'token': 4867},
# {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.017195818945765495, 'token': 43},
# {'sequence': "[CLS] m'agrada llegir aixo[SEP]", 'score': 0.016321714967489243, 'token': 684},
# {'sequence': "[CLS] m'agrada escriure aixo[SEP]", 'score': 0.012185849249362946, 'token': 1306}]
```
#### Extract contextual embedding features from Calbert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: Can be done in one step: tokenizer.encode("M'és una mica igual")
# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 768])
embeddings.detach()
# tensor([[[-0.0261, 0.1166, -0.1075, ..., -0.0368, 0.0193, 0.0017],
# [ 0.1289, -0.2252, 0.9881, ..., -0.1353, 0.3534, 0.0734],
# [-0.0328, -1.2364, 0.9466, ..., 0.3455, 0.7010, -0.2085],
# ...,
# [ 0.0397, -1.0228, -0.2239, ..., 0.2932, 0.1248, 0.0813],
# [-0.0261, 0.1165, -0.1074, ..., -0.0368, 0.0193, 0.0017],
# [-0.1934, -0.2357, -0.2554, ..., 0.1831, 0.6085, 0.1421]]])
```
## Authors
CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.
<a href="https://huggingface.co/exbert/?model=codegram/calbert-base-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
coldfir3/distilbert-base-uncased-finetuned-emotion | 368cafd5954bd36d499d2902b782533bb63273fb | 2022-07-13T12:50:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | coldfir3 | null | coldfir3/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,820 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222116474112371
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8262 | 1.0 | 250 | 0.3073 | 0.904 | 0.9021 |
| 0.2484 | 2.0 | 500 | 0.2175 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
cvcio/roberta-el-news | 338812f7313c3dc0241e60c401b87026c43cd7ff | 2022-02-19T09:58:16.000Z | [
"pytorch",
"roberta",
"fill-mask",
"el",
"transformers",
"generated_from_trainer",
"Greek",
"news",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | cvcio | null | cvcio/roberta-el-news | 14 | null | transformers | 9,821 | ---
language: el
license: gpl-3.0
tags:
- generated_from_trainer
- roberta
- Greek
- news
- transformers
model-index:
- name: roberta-el-news
results: []
widget:
- text: "Η κυβέρνηση μουδιασμένη από τη <mask> της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια."
---
# RoBERTa Greek base model
Pretrained model on the Greek language with the Masked Language Modeling (MLM) objective using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is *NOT* case-sensitive and all Greek diacritics are retained.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
# example url
# https://www.news247.gr/politiki/misologa-maximoy-gia-tin-ekthesi-tsiodra-lytra-gia-ti-thnitotita-ektos-meth.9462425.html
# not present in train/eval set
from transformers import pipeline
pipe = pipeline('fill-mask', model='cvcio/roberta-el-news')
pipe(
'Η κυβέρνηση μουδιασμένη από τη <mask> της έκθεσης Τσιόδρα-Λύτρα, '
'επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.'
)
# outputs
[
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη δημοσιοποίηση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.5881184339523315, 'token': 20235, 'token_str': ' δημοσιοποίηση'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη δημοσίευση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.05952141433954239, 'token': 9696, 'token_str': ' δημοσίευση'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη διαχείριση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.029887061566114426, 'token': 4315, 'token_str': ' διαχείριση'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη διαρροή της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.022848669439554214, 'token': 24940, 'token_str': ' διαρροή'
},
{
'sequence': 'Η κυβέρνηση μουδιασμένη από τη ματαίωση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.',
'score': 0.01729060709476471, 'token': 46913, 'token_str': ' ματαίωση'
}
]
```
## Training data
The model was pretrained on 8 million unique news articles (approx. 160M sentences, 33GB of text), collected with [MediaWatch](https://mediawatch.io/), from October 2016 up to December 2021.
## Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,265. During preprocessing we only unescaped HTML entities to the corresponding Unicode characters (e.g. `&amp;` => `&`).
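That unescaping step amounts to standard HTML entity decoding, for example:
```python
import html

assert html.unescape("&amp;") == "&"
assert html.unescape("&quot;") == '"'
```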
## Pretraining
The model was pretrained using an NVIDIA A10 GPU for 3 epochs (approx. 760K steps, 182 hours) with a batch size of 14 (x2 gradient accumulation steps = 28) and a sequence length of 512 tokens. The optimizer used is Adam with a learning rate of 5e-5 and linear decay of the learning rate.
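A sketch of roughly equivalent `Trainer` settings for the hyperparameters above (an illustration only, not the actual training script; the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-el-news",
    per_device_train_batch_size=14,
    gradient_accumulation_steps=2,  # effective batch size of 28
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```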
### Training results
| epochs | steps | train/train_loss | train/loss | eval/loss |
|-------:|--------:|-----------------:|------------:|----------:|
| 3 | 765,414 | 0.3960 | 1.2356 | 0.9028 |
### Evaluation results
The model was fine-tuned on the NER task using the [elNER](https://github.com/nmpartzio/elner) dataset and achieved the following results:
| task | epochs | lr | batch | dataset | precision | recall | f1 | accuracy |
|-----:|-------:|-----:|------:|--------:|----------:|-------:|-------:|---------:|
| ner | 5 | 1e-5 | 16/16 | elNER4 | 0.8954 | 0.9280 | 0.9114 | 0.9872 |
| ner | 5 | 1e-4 | 16/16 | elNER18 | 0.9069 | 0.9268 | 0.9168 | 0.9823 |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 14
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.13.0
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
## Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
## About Us
[Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest. |
d42kw01f/Sinhala-RoBERTa | 5b3c8e5d1732e7894dff1c45f4c9ea5b03cfbf13 | 2021-11-06T20:09:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | d42kw01f | null | d42kw01f/Sinhala-RoBERTa | 14 | null | transformers | 9,822 | # Description:
This is a small RoBERTa model pre-trained on the Sinhala language using Masked Language Modeling (MLM). The model is trained on the OSCAR Sinhala dataset.
# How to Use:
The model can be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Sinhala-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Sinhala-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("මම ගෙදර <mask>.")
[{'score': 0.1822454035282135,
'sequence': 'මම ගෙදර ආව.',
'token': 701,
'token_str': ' ආව'},
{'score': 0.10513380169868469,
'sequence': 'මම ගෙදර ය.',
'token': 310,
'token_str': ' ය'},
{'score': 0.06417194753885269,
'sequence': 'මම ගෙදර එක.',
'token': 328,
'token_str': ' එක'},
{'score': 0.05026362091302872,
'sequence': 'මම ගෙදර ඇත.',
'token': 330,
'token_str': ' ඇත'},
{'score': 0.029960114508867264,
'sequence': 'මම ගෙදර යනව.',
'token': 834,
'token_str': ' යනව'}]
``` |
damien-ir/kosentelectra-discriminator-v2-mixed | 650222d672752e55620fd09acfd04366efa12e5c | 2020-10-06T03:22:29.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | damien-ir | null | damien-ir/kosentelectra-discriminator-v2-mixed | 14 | null | transformers | 9,823 | Entry not found |
danchang11/GPT2-TraditionalChat | 886855011bced5c8cf73e5290896901f556f7170 | 2021-12-04T13:14:24.000Z | [
"pytorch",
"gpt2",
"transformers",
"text-generation"
]
| text-generation | false | danchang11 | null | danchang11/GPT2-TraditionalChat | 14 | null | transformers | 9,824 | ---
tags:
- text-generation
---
#dialogue |
dbdmg/wav2vec2-xls-r-300m-italian | ea24a93313bd48aced4943013d19625e1add554b | 2022-03-23T18:28:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | dbdmg | null | dbdmg/wav2vec2-xls-r-300m-italian | 14 | 1 | transformers | 9,825 | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300m - Italian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: it
metrics:
- name: Test WER
type: wer
value: 19.44
- name: Test CER
type: cer
value: 4.47
- name: Test WER (+LM)
type: wer
value: 14.08
- name: Test CER (+LM)
type: cer
value: 3.67
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: it
metrics:
- name: Test WER
type: wer
value: 31.01
- name: Test CER
type: cer
value: 9.27
- name: Test WER (+LM)
type: wer
value: 22.09
- name: Test CER (+LM)
type: cer
value: 7.9
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: it
metrics:
- name: Test WER
type: wer
value: 38.07
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-italian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - IT dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.1710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 100 | inf | 1.0 |
| No log | 0.09 | 200 | inf | 0.9983 |
| No log | 0.13 | 300 | inf | 0.7672 |
| No log | 0.18 | 400 | inf | 0.6919 |
| 2.9929 | 0.22 | 500 | inf | 0.6266 |
| 2.9929 | 0.26 | 600 | inf | 0.5513 |
| 2.9929 | 0.31 | 700 | inf | 0.5081 |
| 2.9929 | 0.35 | 800 | inf | 0.4945 |
| 2.9929 | 0.39 | 900 | inf | 0.4720 |
| 0.5311 | 0.44 | 1000 | inf | 0.4387 |
| 0.5311 | 0.48 | 1100 | inf | 0.4411 |
| 0.5311 | 0.53 | 1200 | inf | 0.4429 |
| 0.5311 | 0.57 | 1300 | inf | 0.4322 |
| 0.5311 | 0.61 | 1400 | inf | 0.4532 |
| 0.4654 | 0.66 | 1500 | inf | 0.4492 |
| 0.4654 | 0.7 | 1600 | inf | 0.3879 |
| 0.4654 | 0.75 | 1700 | inf | 0.3836 |
| 0.4654 | 0.79 | 1800 | inf | 0.3743 |
| 0.4654 | 0.83 | 1900 | inf | 0.3687 |
| 0.4254 | 0.88 | 2000 | inf | 0.3793 |
| 0.4254 | 0.92 | 2100 | inf | 0.3766 |
| 0.4254 | 0.97 | 2200 | inf | 0.3705 |
| 0.4254 | 1.01 | 2300 | inf | 0.3272 |
| 0.4254 | 1.05 | 2400 | inf | 0.3185 |
| 0.3997 | 1.1 | 2500 | inf | 0.3244 |
| 0.3997 | 1.14 | 2600 | inf | 0.3082 |
| 0.3997 | 1.18 | 2700 | inf | 0.3040 |
| 0.3997 | 1.23 | 2800 | inf | 0.3028 |
| 0.3997 | 1.27 | 2900 | inf | 0.3112 |
| 0.3668 | 1.32 | 3000 | inf | 0.3110 |
| 0.3668 | 1.36 | 3100 | inf | 0.3067 |
| 0.3668 | 1.4 | 3200 | inf | 0.2961 |
| 0.3668 | 1.45 | 3300 | inf | 0.3081 |
| 0.3668 | 1.49 | 3400 | inf | 0.2936 |
| 0.3645 | 1.54 | 3500 | inf | 0.3037 |
| 0.3645 | 1.58 | 3600 | inf | 0.2974 |
| 0.3645 | 1.62 | 3700 | inf | 0.3010 |
| 0.3645 | 1.67 | 3800 | inf | 0.2985 |
| 0.3645 | 1.71 | 3900 | inf | 0.2976 |
| 0.3624 | 1.76 | 4000 | inf | 0.2928 |
| 0.3624 | 1.8 | 4100 | inf | 0.2860 |
| 0.3624 | 1.84 | 4200 | inf | 0.2922 |
| 0.3624 | 1.89 | 4300 | inf | 0.2866 |
| 0.3624 | 1.93 | 4400 | inf | 0.2776 |
| 0.3527 | 1.97 | 4500 | inf | 0.2792 |
| 0.3527 | 2.02 | 4600 | inf | 0.2858 |
| 0.3527 | 2.06 | 4700 | inf | 0.2767 |
| 0.3527 | 2.11 | 4800 | inf | 0.2824 |
| 0.3527 | 2.15 | 4900 | inf | 0.2799 |
| 0.3162 | 2.19 | 5000 | inf | 0.2673 |
| 0.3162 | 2.24 | 5100 | inf | 0.2962 |
| 0.3162 | 2.28 | 5200 | inf | 0.2736 |
| 0.3162 | 2.33 | 5300 | inf | 0.2652 |
| 0.3162 | 2.37 | 5400 | inf | 0.2551 |
| 0.3063 | 2.41 | 5500 | inf | 0.2680 |
| 0.3063 | 2.46 | 5600 | inf | 0.2558 |
| 0.3063 | 2.5 | 5700 | inf | 0.2598 |
| 0.3063 | 2.54 | 5800 | inf | 0.2518 |
| 0.3063 | 2.59 | 5900 | inf | 0.2541 |
| 0.2913 | 2.63 | 6000 | inf | 0.2507 |
| 0.2913 | 2.68 | 6100 | inf | 0.2500 |
| 0.2913 | 2.72 | 6200 | inf | 0.2435 |
| 0.2913 | 2.76 | 6300 | inf | 0.2376 |
| 0.2913 | 2.81 | 6400 | inf | 0.2348 |
| 0.2797 | 2.85 | 6500 | inf | 0.2512 |
| 0.2797 | 2.9 | 6600 | inf | 0.2382 |
| 0.2797 | 2.94 | 6700 | inf | 0.2523 |
| 0.2797 | 2.98 | 6800 | inf | 0.2522 |
| 0.2797 | 3.03 | 6900 | inf | 0.2409 |
| 0.2766 | 3.07 | 7000 | inf | 0.2453 |
| 0.2766 | 3.12 | 7100 | inf | 0.2326 |
| 0.2766 | 3.16 | 7200 | inf | 0.2286 |
| 0.2766 | 3.2 | 7300 | inf | 0.2342 |
| 0.2766 | 3.25 | 7400 | inf | 0.2305 |
| 0.2468 | 3.29 | 7500 | inf | 0.2238 |
| 0.2468 | 3.33 | 7600 | inf | 0.2321 |
| 0.2468 | 3.38 | 7700 | inf | 0.2305 |
| 0.2468 | 3.42 | 7800 | inf | 0.2174 |
| 0.2468 | 3.47 | 7900 | inf | 0.2201 |
| 0.2439 | 3.51 | 8000 | inf | 0.2133 |
| 0.2439 | 3.55 | 8100 | inf | 0.2217 |
| 0.2439 | 3.6 | 8200 | inf | 0.2189 |
| 0.2439 | 3.64 | 8300 | inf | 0.2105 |
| 0.2439 | 3.69 | 8400 | inf | 0.2118 |
| 0.2357 | 3.73 | 8500 | inf | 0.2093 |
| 0.2357 | 3.77 | 8600 | inf | 0.2103 |
| 0.2357 | 3.82 | 8700 | inf | 0.2035 |
| 0.2357 | 3.86 | 8800 | inf | 0.2019 |
| 0.2357 | 3.91 | 8900 | inf | 0.2032 |
| 0.2217 | 3.95 | 9000 | inf | 0.2056 |
| 0.2217 | 3.99 | 9100 | inf | 0.2022 |
| 0.2217 | 4.04 | 9200 | inf | 0.1932 |
| 0.2217 | 4.08 | 9300 | inf | 0.1935 |
| 0.2217 | 4.12 | 9400 | inf | 0.1906 |
| 0.2025 | 4.17 | 9500 | inf | 0.1879 |
| 0.2025 | 4.21 | 9600 | inf | 0.1882 |
| 0.2025 | 4.26 | 9700 | inf | 0.1854 |
| 0.2025 | 4.3 | 9800 | inf | 0.1865 |
| 0.2025 | 4.34 | 9900 | inf | 0.1844 |
| 0.1869 | 4.39 | 10000 | inf | 0.1822 |
| 0.1869 | 4.43 | 10100 | inf | 0.1815 |
| 0.1869 | 4.48 | 10200 | inf | 0.1812 |
| 0.1869 | 4.52 | 10300 | inf | 0.1792 |
| 0.1869 | 4.56 | 10400 | inf | 0.1797 |
| 0.1863 | 4.61 | 10500 | inf | 0.1774 |
| 0.1863 | 4.65 | 10600 | inf | 0.1767 |
| 0.1863 | 4.7 | 10700 | inf | 0.1765 |
| 0.1863 | 4.74 | 10800 | inf | 0.1753 |
| 0.1863 | 4.78 | 10900 | inf | 0.1731 |
| 0.178 | 4.83 | 11000 | inf | 0.1727 |
| 0.178 | 4.87 | 11100 | inf | 0.1724 |
| 0.178 | 4.91 | 11200 | inf | 0.1722 |
| 0.178 | 4.96 | 11300 | inf | 0.1712 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
deval/bert-base-NER-finetuned-ner | 80c7c1074771c24bf967824a1168fdea6dc6b459 | 2021-09-20T16:15:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:x_glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | deval | null | deval/bert-base-NER-finetuned-ner | 14 | null | transformers | 9,826 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- x_glue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-NER-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: x_glue
type: x_glue
args: ner
metrics:
- name: Precision
type: precision
value: 0.2273838630806846
- name: Recall
type: recall
value: 0.11185727172496743
- name: F1
type: f1
value: 0.14994961370507223
- name: Accuracy
type: accuracy
value: 0.8485324947589099
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the x_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4380
- Precision: 0.2274
- Recall: 0.1119
- F1: 0.1499
- Accuracy: 0.8485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0822 | 1.0 | 878 | 1.1648 | 0.2068 | 0.1101 | 0.1437 | 0.8471 |
| 0.0102 | 2.0 | 1756 | 1.2697 | 0.2073 | 0.1110 | 0.1445 | 0.8447 |
| 0.0049 | 3.0 | 2634 | 1.3945 | 0.2006 | 0.1073 | 0.1399 | 0.8368 |
| 0.0025 | 4.0 | 3512 | 1.3994 | 0.2243 | 0.1126 | 0.1499 | 0.8501 |
| 0.0011 | 5.0 | 4390 | 1.4380 | 0.2274 | 0.1119 | 0.1499 | 0.8485 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dhtocks/tunib-electra-stereotype-classifier | 5bad472a25ae47fb67af405224f01ae852ddf8dd | 2021-10-14T10:03:57.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | dhtocks | null | dhtocks/tunib-electra-stereotype-classifier | 14 | null | transformers | 9,827 | ### TUNiB-Electra Stereotype Detector
Finetuned TUNiB-Electra base with K-StereoSet.
Original Code: https://github.com/newfull5/Stereotype-Detector |
doc2query/reddit-t5-base-v1 | 89082475ea6139380955db2b764b28e7ea2c365c | 2021-10-27T09:56:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:datasets/sentence-transformers/reddit-title-body",
"arxiv:1904.08375",
"arxiv:2104.08663",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | doc2query | null | doc2query/reddit-t5-base-v1 | 14 | null | transformers | 9,828 | ---
language: en
datasets:
- datasets/sentence-transformers/reddit-title-body
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/reddit-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph (see the sketch after this list). In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
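As a minimal sketch of the document-expansion idea (the `paragraphs` list and the choice of 20 queries per paragraph are illustrative assumptions, not part of the model release): generate queries for each paragraph and index the concatenation of paragraph and queries instead of the raw paragraph.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/reddit-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

paragraphs = ["Python is an interpreted, high-level and general-purpose programming language."]

expanded_docs = []
for para in paragraphs:
    input_ids = tokenizer.encode(para, max_length=320, truncation=True, return_tensors='pt')
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=20)
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # Append the generated queries to the paragraph text; the expanded text is what gets indexed.
    expanded_docs.append(para + " " + " ".join(queries))

# `expanded_docs` can now be indexed in Elasticsearch/OpenSearch/Lucene instead of the raw paragraphs.
```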
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/reddit-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 533k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, body) pairs from Reddit.
|
doc2query/yahoo_answers-t5-base-v1 | 2be2bae3b0a21125c6e18eb79f38929f9539d002 | 2021-10-27T12:56:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:datasets/sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | doc2query | null | doc2query/yahoo_answers-t5-base-v1 | 14 | null | transformers | 9,829 | ---
language: en
datasets:
- datasets/sentence-transformers/embedding-training-data
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/yahoo_answers-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/yahoo_answers-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic with sampling enabled. It produces different queries each time you run it.
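If you need reproducible queries (for example in tests), you can seed PyTorch before sampling or switch to deterministic beam search. A small sketch, reusing `model` and `input_ids` from the snippet above:
```python
import torch

torch.manual_seed(42)  # makes the sampled queries reproducible across runs

# Alternatively, deterministic decoding with beam search instead of sampling:
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    num_beams=5,
    num_return_sequences=5)
```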
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 111k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, answer) pairs from [Yahoo Answers](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
|
elena-soare/t5-base-ecommerce | fe6e16ef96ae9813b71f4de8337dc6e38e822986 | 2022-02-22T18:19:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | elena-soare | null | elena-soare/t5-base-ecommerce | 14 | null | transformers | 9,830 | T5 pre-trained on e-commerce data |
eltoto1219/lxmert-gqa-untuned | 8065c198e534891e59c3865778e97bd819971481 | 2020-09-07T09:03:00.000Z | [
"pytorch",
"lxmert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | eltoto1219 | null | eltoto1219/lxmert-gqa-untuned | 14 | null | transformers | 9,831 | Entry not found |
emrecan/bert-base-multilingual-cased-snli_tr | 52a34181e34871ee4e344ec22024436f6b710670 | 2021-12-01T19:43:01.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/bert-base-multilingual-cased-snli_tr | 14 | null | transformers | 9,832 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
emrecan/convbert-base-turkish-mc4-cased-multinli_tr | f66dade2e8b4989ef6ed3700ddaaae438d2a29ad | 2021-12-01T19:44:01.000Z | [
"pytorch",
"convbert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/convbert-base-turkish-mc4-cased-multinli_tr | 14 | null | transformers | 9,833 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
ensamblador/gpt2_espanol_8hx512pos | d66fc91e911078ab71d8a74663668804f47d20b2 | 2021-05-21T15:57:50.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ensamblador | null | ensamblador/gpt2_espanol_8hx512pos | 14 | null | transformers | 9,834 | Entry not found |
ericzhou/tsundere_v1 | 15f1f88d1b6d1334410eaaac01e825bc95d22743 | 2022-02-05T03:36:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | ericzhou | null | ericzhou/tsundere_v1 | 14 | null | transformers | 9,835 | ---
tags:
- conversational
--- |
exafluence/BERT-ClinicalQA | ebf29b0dce258764921c2c7d1f6bdf82efd29a92 | 2021-10-19T01:56:17.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | exafluence | null | exafluence/BERT-ClinicalQA | 14 | null | transformers | 9,836 | Entry not found |
facebook/s2t-small-mustc-en-es-st | df68b5aec8a734d7f7012ca1b11759b8c2a4b4c3 | 2022-02-07T15:16:46.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"es",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"transformers",
"audio",
"speech-translation",
"license:mit"
]
| automatic-speech-recognition | false | facebook | null | facebook/s2t-small-mustc-en-es-st | 14 | null | transformers | 9,837 | ---
language:
- en
- es
datasets:
- mustc
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-MUSTC-EN-ES-ST
`s2t-small-mustc-en-es-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Spanish text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-es-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-es-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-es-st is trained on English-Spanish subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 8,000.
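As a rough illustration (not the exact fairseq pipeline used for this checkpoint), 80-channel log mel filter bank features with utterance-level CMVN can be computed with torchaudio along these lines; the audio file path is a placeholder:
```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

waveform, sample_rate = torchaudio.load("sample.flac")  # placeholder path

# Kaldi-style 80-channel log mel filter bank features
features = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=sample_rate)

# Utterance-level cepstral mean and variance normalization (CMVN)
features = (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-10)
```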
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-es (BLEU score): 27.2
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
federicopascual/finetuning-sentiment-analysis-model-3000-samples | 6665ee129a6307e613ff9efad1b1e6c1da8cfc3c | 2021-12-30T20:32:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | federicopascual | null | federicopascual/finetuning-sentiment-analysis-model-3000-samples | 14 | null | transformers | 9,838 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-analysis-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.88125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-analysis-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3130
- Accuracy: 0.8733
- F1: 0.8812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
flax-community/gpt-neo-1.3B-apps-all | 89ca5aa4a3511ff4db62a7030627f60058bd5106 | 2021-09-22T08:25:24.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"en",
"python",
"dataset:apps",
"arxiv:2107.03374",
"transformers",
"code_synthesis",
"license:mit"
]
| text-generation | false | flax-community | null | flax-community/gpt-neo-1.3B-apps-all | 14 | 2 | transformers | 9,839 | ---
language:
- en
- python
license: mit
tags:
- gpt_neo
- code_synthesis
datasets:
- apps
---
# GPT-Neo-1.3B-APPS-all
> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
## Model Description
GPT-Neo-1.3B-APPS-all is a GPT-Neo-1.3B model fine-tuned on the APPS dataset. This model is specialized to solve programming tasks.
## Training data
The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.
This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-1.3B-apps).
## Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).
Training is done for 5 epochs using the AdamW optimizer and a linear decay learning rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script:
```
python run_clm_apps.py \
--output_dir ./gpt-neo-1.3B-apps \
--model_name_or_path EleutherAI/gpt-neo-1.3B \
--dataset_name ./apps.py \
--dataset_config_name formatted \
--do_train --do_eval \
--block_size="1024" \
--per_device_train_batch_size="3" \
--per_device_eval_batch_size="3" \
--preprocessing_num_workers="16" \
--learning_rate="8e-5" \
--warmup_steps="800" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--weight_decay="0.1" \
--overwrite_output_dir \
--num_train_epochs="5" \
--logging_steps="50" \
--eval_steps="2000" \
--report_to="wandb" \
--dtype="bfloat16" \
--save_strategy epoch \
--gradient_accumulation_steps 1 \
--all_data true \
```
## Intended Use and Limitations
The model is finetuned to solve programming problems given a text description and optional starter code.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps-alldata")
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps-alldata")
prompt = """
A function to greet user. Given a user name it should say hello
def greet(name):
ANSWER:
"""
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
early_stopping=True, eos_token_id=tokenizer.eos_token_id, )
print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset.
GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon...
|
flax-community/gpt2-persian-question-answering | 253942e068bfda609ed0395439a8200fd0e195cf | 2021-07-16T22:27:57.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"fa",
"dataset:persian_qa",
"transformers"
]
| text-generation | false | flax-community | null | flax-community/gpt2-persian-question-answering | 14 | 1 | transformers | 9,840 | ---
language: fa
tags:
- text-generation
datasets:
- persian_qa
widget:
- text: "ناف جایی قرار گرفته که در واقع بندناف در داخل رحم در آنجا به شکم جنین وصل بودهاست. بندناف که جفت را به جنین متصل کرده بعد از تولد از نوزاد جدا میشود. برای جدا کردن بند ناف از دو پنس استفاده میکنند و بین آن دو را میبرند. پنس دیگری نزدیک شکم نوزاد قرار داده میشود که بعد از دو روز برداشته خواهد شد. بندناف باقیمانده طی ۱۵ روز خشک شده و میافتد و به جای آن اسکاری طبیعی به جای میماند. البته بر خلاف تصور عامه مردم شکل ناف در اثر بریدن بند ناف به وجود نمیآید و پیش از این در شکم مادر حالت ناف شکل گرفتهاست. شکل ناف در میان مردم مختلف متفاوت است و اندازه آن بین ۱.۵ تا ۲ سانتیمتر است. تمام پستانداران جفتزیست ناف دارند. ناف در انسانها به سادگی قابل مشاهدهاست. پرسش: بند ناف انسان به کجا وصل است؟ پاسخ:"
- text: "خوب، بد، زشت یک فیلم درژانر وسترن اسپاگتی حماسی است که توسط سرجو لئونه در سال ۱۹۶۶ در ایتالیا ساخته شد. زبانی که بازیگران این فیلم به آن تکلم میکنند مخلوطی از ایتالیایی و انگلیسی است. این فیلم سومین (و آخرین) فیلم از سهگانهٔ دلار (Dollars Trilogy) سرجو لئونه است. این فیلم در حال حاضر در فهرست ۲۵۰ فیلم برتر تاریخ سینما در وبگاه IMDB با امتیاز ۸٫۸ از ۱۰، رتبهٔ هشتم را به خود اختصاص دادهاست و به عنوان بهترین فیلم وسترن تاریخ سینمای جهان شناخته میشود. «خوب» (کلینت ایستوود، در فیلم، با نام «بلوندی») و «زشت» (ایلای والاک، در فیلم، با نام «توکو») با هم کار میکنند و با شگرد خاصی، به گول زدن کلانترهای مناطق مختلف و پول درآوردن از این راه میپردازند. «بد» (لی وان کلیف) آدمکشی حرفهای است که بهخاطر پول حاضر به انجام هر کاری است. «بد»، که در فیلم او را «اِنجل آیز (اِینجل آیز)» (به انگلیسی: Angel Eyes) صدا میکنند. بهدنبال گنجی است که در طی جنگهای داخلی آمریکا، به دست سربازی به نام «جکسون»، که بعدها به «کارسون» نامش را تغییر داده، مخفی شدهاست. پرسش: در فیلم خوب بد زشت شخصیت ها کجایی صحبت می کنند؟ پاسخ:"
- text: "چهارشنبهسوری یکی از جشنهای ایرانی است که از غروب آخرین سهشنبه ی ماه اسفند، تا پس از نیمهشب تا آخرین چهارشنبه ی سال، برگزار میشود و برافروختن و پریدن از روی آتش مشخصهٔ اصلی آن است. این جشن، نخستین جشن از مجموعهٔ جشنها و مناسبتهای نوروزی است که با برافروختن آتش و برخی رفتارهای نمادین دیگر، بهصورت جمعی در فضای باز برگزار میشود. بهگفتهٔ ابراهیم پورداوود چهارشنبهسوری ریشه در گاهنبارِ هَمَسْپَتْمَدَم زرتشتیان و نیز جشن نزول فروهرها دارد که شش روز پیش از فرارسیدن نوروز برگزار میشد. احتمال دیگر این است که چهارشنبهسوری بازمانده و شکل تحولیافتهای از جشن سده باشد، که احتمال بعیدی است. علاوه برافروختن آتش، آیینهای مختلف دیگری نیز در بخشهای گوناگون ایران در زمان این جشن انجام میشوند. برای نمونه، در تبریز، مردم به چهارشنبهبازار میروند که با چراغ و شمع، بهطرز زیبایی چراغانی شدهاست. هر خانواده یک آینه، دانههای اسفند، و یک کوزه برای سال نو خریداری میکنند. همهساله شهروندانی از ایران در اثر انفجارهای ناخوشایند مربوط به این جشن، کشته یا مصدوم میشوند. پرسش: نام جشن اخرین شنبه ی سال چیست؟ پاسخ:"
---
# Question-Answering Using GPT2 - Persian
> This is a side project of this thread
[Flax/Jax Community Week - GPT2 4 Persian](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-persian/7560), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team Members
- [Mehrdad Farahani](https://huggingface.co/m3hrdadfi)
## Dataset
We used [PersianQA](https://huggingface.co/datasets/SajjadAyoubi/persian_qa) dataset which is a reading comprehension dataset on Persian Wikipedia.
## How To Use TODO: Update
## Demo TODO: Update
## Evaluation TODO: Update |
flax-community/roberta-base-danish | 6f322ced15675ac85f89ae401bd1648b5236d81b | 2021-09-23T13:54:11.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"da",
"transformers",
"danish",
"license:cc-by-4.0",
"autotrain_compatible"
]
| fill-mask | false | flax-community | null | flax-community/roberta-base-danish | 14 | null | transformers | 9,841 | ---
language: da
license: cc-by-4.0
tags:
- danish
- roberta
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du låne en <mask>.
---
# RøBÆRTa - Danish Roberta Base
## Description
RøBÆRTa is a Danish pretrained RoBERTa base model. RøBÆRTa was pretrained on the Danish mC4 dataset during the Flax community week. This project was organized by Dansk Data Science Community (DDSC) 👇 <br><br>
https://www.linkedin.com/groups/9017904/
## Team RøBÆRTa:
- Dan Saattrup Nielsen (saattrupdan)
- Malte Højmark-Bertelsen (Maltehb)
- Morten Kloster Pedersen (MortenKP)
- Kasper Junge (Juunge)
- Per Egil Kummervold (pere)
- Birger Moëll (birgermoell)
---
|
flax-community/t5-base-dutch-demo | aa1060eea8f3c5f3151fcfc710ddcb45273afa37 | 2021-07-21T07:14:50.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"dutch",
"dataset:cnn_dailymail",
"dataset:xsum",
"transformers",
"summarization",
"seq2seq",
"text-generation",
"autotrain_compatible"
]
| text2text-generation | false | flax-community | null | flax-community/t5-base-dutch-demo | 14 | null | transformers | 9,842 | ---
language:
- dutch
tags:
- summarization
- seq2seq
- text-generation
datasets:
- cnn_dailymail
- xsum
pipeline_tag: text2text-generation
widget:
- text: "Onderzoekers ontdekten dat vier van de vijf kinderen in Engeland die op school lunches hadden gegeten, op school voedsel hadden geprobeerd dat ze thuis niet hadden geprobeerd.De helft van de ondervraagde ouders zei dat hun kinderen hadden gevraagd om voedsel dat ze op school hadden gegeten om thuis te worden gekookt.De enquête, van ongeveer 1.000 ouders, vond dat de meest populaire groenten wortelen, suikermaïs en erwten waren.Aubergine, kikkererwten en spinazie waren een van de minst populaire.Van de ondervraagde ouders, 628 hadden kinderen die lunches op school aten. (% duidt op een deel van de ouders die zeiden dat hun kind elke groente zou eten) England's School Food Trust gaf opdracht tot het onderzoek na een onderzoek door de Mumsnet-website suggereerde dat sommige ouders hun kinderen lunchpakket gaven omdat ze dachten dat ze te kieskeurig waren om iets anders te eten. \"Schoolmaaltijden kunnen een geweldige manier zijn om ouders te helpen hun kinderen aan te moedigen om nieuw voedsel te proberen en om de verscheidenheid van voedsel in hun dieet te verhogen. \"Mumsnet medeoprichter, Carrie Longton, zei: \"Het krijgen van kinderen om gezond te eten is de droom van elke ouder, maar maaltijdtijden thuis kan vaak een slagveld en emotioneel geladen zijn. \"Vanuit Mumsnetters' ervaring lijkt het erop dat eenmaal op school is er een verlangen om in te passen bij iedereen anders en zelfs een aantal positieve peer pressure om op te scheppen over de verscheidenheid van wat voedsel je kunt eten. \"Schoolmaaltijden zijn ook verplaatst op nogal een beetje van toen Mumsnetters op school waren, met gezondere opties en meer afwisseling. \"Schoolmaaltijden in Engeland moeten nu voldoen aan strenge voedingsrichtlijnen.Ongeveer vier op de tien basisschoolkinderen in Engeland eten nu schoollunches, iets meer dan op middelbare scholen.Meer kinderen in Schotland eten schoollunches - ongeveer 46%.Het onderzoek werd online uitgevoerd tussen 26 februari en 5 maart onder een panel van ouders die ten minste één kind op school hadden van 4-17 jaar oud."
- text: "Het Londense trio staat klaar voor de beste Britse act en beste album, evenals voor twee nominaties in de beste song categorie. \"We kregen te horen zoals vanmorgen 'Oh I think you're genomineerd',\" zei Dappy. \"En ik was als 'Oh yeah, what one?' En nu zijn we genomineerd voor vier awards. Ik bedoel, wow! \"Bandmate Fazer voegde eraan toe: \"We dachten dat het het beste van ons was om met iedereen naar beneden te komen en hallo te zeggen tegen de camera's.En nu vinden we dat we vier nominaties hebben. \"De band heeft twee shots bij de beste song prijs, het krijgen van het knikje voor hun Tyncy Stryder samenwerking nummer één, en single Strong Again.Their album Uncle B zal ook gaan tegen platen van Beyonce en Kany \"Aan het eind van de dag zijn we dankbaar om te zijn waar we zijn in onze carrières. \"Als het niet gebeurt dan gebeurt het niet - live om te vechten een andere dag en blijven maken albums en hits voor de fans. \"Dappy onthulde ook dat ze kunnen worden optreden live op de avond.De groep zal doen Nummer Een en ook een mogelijke uitlevering van de War Child single, I Got Soul.Het liefdadigheidslied is een re-working van The Killers' All These Things That I've Done en is ingesteld op artiesten als Chipmunk, Ironik en Pixie Lott.Dit jaar zal Mobos worden gehouden buiten Londen voor de eerste keer, in Glasgow op 30 september.N-Dubz zei dat ze op zoek waren naar optredens voor hun Schotse fans en bogen over hun recente shows ten noorden van de Londense We hebben Aberdeen ongeveer drie of vier maanden geleden gedaan - we hebben die show daar verbrijzeld! Overal waar we heen gaan slaan we hem in elkaar!\""
---
# t5-base-dutch-demo 📰
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) & [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
This model is based on [t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
and fine-tuned to create summaries of news articles.
For a demo of the model, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
## Dataset
`t5-base-dutch-demo` is fine-tuned on three mixed news sources:
1. **CNN DailyMail** translated to Dutch with MarianMT.
2. **XSUM** translated to Dutch with MarianMT.
3. News article summaries distilled from the nu.nl website.
The total number of training examples in this dataset is 1366592.
## Training
Training consisted of fine-tuning [t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) with
the following parameters:
* Constant learning rate 0.0005
* Batch size 8
* 1 epoch (170842 steps)
## Evaluation
The performance of the summarization model is measured with the Rouge metric from the
Huggingface Datasets library.
```
"rouge{n}" (e.g. `"rouge1"`, `"rouge2"`) where: {n} is the n-gram based scoring,
"rougeL": Longest common subsequence based scoring.
```
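A minimal sketch of how such an evaluation could be run with the `datasets` library (the prediction/reference strings are placeholders, not real model outputs); the scores obtained for this model are listed below:
```python
from datasets import load_metric

rouge = load_metric("rouge")

predictions = ["de gegenereerde samenvatting"]  # model output (placeholder)
references = ["de referentie samenvatting"]     # gold summary (placeholder)

scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v.mid.fmeasure * 100, 1) for k, v in scores.items()})
```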
* Rouge1: 23.8
* Rouge2: 6.9
* RougeL: 19.7
These scores are expected to improve if the model is trained with evaluation configured
for the CNN DM and XSUM datasets (translated to Dutch) individually. |
ghadeermobasher/BC4_Modified-biobert-v1.1 | 6f6ba75fbb3f94085b7faa9d8df3c833e6e43952 | 2022-02-22T20:23:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4_Modified-biobert-v1.1 | 14 | null | transformers | 9,843 | Entry not found |
ghadeermobasher/BC5CDR-Chem-Modified_bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 5c011fd0e99c626b7d655302f98b1be30528d87b | 2022-02-21T22:13:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified_bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 14 | null | transformers | 9,844 | Entry not found |
google/t5-efficient-tiny-nh1 | 2b2cdc23e8f0fa26ba471bd78880cc52bd8104d1 | 2022-02-15T10:57:15.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-tiny-nh1 | 14 | null | transformers | 9,845 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-TINY-NH1 (Deep-Narrow version)
T5-Efficient-TINY-NH1 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-tiny-nh1** - is of model type **Tiny** with the following variations:
- **nh** is **1**
It has **13.22** million parameters and thus requires *ca.* **52.88 MB** of memory in full precision (*fp32*)
or **26.44 MB** of memory in half precision (*fp16* or *bf16*).
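These memory figures follow directly from the parameter count (4 bytes per parameter in fp32, 2 bytes in fp16/bf16), which can be checked with a quick back-of-the-envelope calculation:
```python
params = 13.22e6  # parameters of t5-efficient-tiny-nh1

print(f"fp32: {params * 4 / 1e6:.2f} MB")       # ~52.88 MB
print(f"fp16/bf16: {params * 2 / 1e6:.2f} MB")  # ~26.44 MB
```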
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
gustavecortal/T0_3B-8bit | 91ebeda86dc1aa840c56c2238cad4fd241e0a44c | 2022-03-04T10:32:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fr",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"transformers",
"en",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | gustavecortal | null | gustavecortal/T0_3B-8bit | 14 | 4 | transformers | 9,846 | ---
language: fr
license: mit
tags:
- en
datasets:
- bigscience/P3
---
### Quantized BigScience's T0 3B with 8-bit weights
This is a version of [BigScience's T0](https://huggingface.co/bigscience/T0_3B) with 3 billion parameters that is modified so you can generate **and fine-tune the model on Colab or an equivalent desktop GPU (e.g. a single 1080Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
Here's how to run it: [](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)
This model can be easily loaded using the `T5ForConditionalGeneration` functionality:
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("gustavecortal/T0_3B-8bit")
```
Before loading, you have to Monkey-Patch T5:
```python
import transformers

# `convert_to_int8` is assumed to be the 8-bit quantization helper from the linked Colab notebook
# (adapted from the GPT-J 8bit code); it is not part of `transformers` itself.
class T5ForConditionalGeneration(transformers.models.t5.modeling_t5.T5ForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)
        convert_to_int8(self)

transformers.models.t5.modeling_t5.T5ForConditionalGeneration = T5ForConditionalGeneration
```
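Once the patch is applied and the checkpoint is loaded as above, generation works like with any other T5 model. A small usage sketch (the prompt is only an example, and loading the tokenizer from the original `bigscience/T0_3B` repository is an assumption):
```python
from transformers import AutoTokenizer

# Tokenizer assumed to come from the original T0 3B release
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")

prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```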
## Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
## Links
* [BigScience](https://bigscience.huggingface.co/)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal)
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
haji2438/bertweet-base-finetuned-SNS-brand-personality | 7490a99520656b122837e8b37311ec9a6fb58818 | 2022-01-09T03:24:39.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | haji2438 | null | haji2438/bertweet-base-finetuned-SNS-brand-personality | 14 | null | transformers | 9,847 | ---
tags:
- generated_from_trainer
model-index:
- name: bertweet-base-finetuned-SNS-brand-personality
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-SNS-brand-personality
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0757 | 1.0 | 1549 | 0.0723 |
| 0.0605 | 2.0 | 3098 | 0.0573 |
| 0.0498 | 3.0 | 4647 | 0.0498 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
heabeoun/DiabloGPT-small-nuon-conv | 1a07fefac21f48e2bfe0f3d0f6d90e668c4e9aab | 2022-02-09T02:31:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | heabeoun | null | heabeoun/DiabloGPT-small-nuon-conv | 14 | null | transformers | 9,848 | ---
tags:
- conversational
---
# diablo GPT random |
hf-test/xls-r-300m-sv | 93e2b8ad1e01b2ed26d08abb46add72d6ceee748 | 2022-03-28T20:07:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"hello",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"sv",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | hf-test | null | hf-test/xls-r-300m-sv | 14 | 2 | transformers | 9,849 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hello
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- sv
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Swedish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 16.98
- name: Test CER
type: cer
value: 5.66
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 27.01
- name: Test CER
type: cer
value: 13.14
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300m-SV
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3171
- Wer: 0.2468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3349 | 1.45 | 500 | 3.2858 | 1.0 |
| 2.9298 | 2.91 | 1000 | 2.9225 | 1.0000 |
| 2.0839 | 4.36 | 1500 | 1.1546 | 0.8295 |
| 1.7093 | 5.81 | 2000 | 0.6827 | 0.5701 |
| 1.5855 | 7.27 | 2500 | 0.5597 | 0.4947 |
| 1.4831 | 8.72 | 3000 | 0.4923 | 0.4527 |
| 1.4416 | 10.17 | 3500 | 0.4670 | 0.4270 |
| 1.3848 | 11.63 | 4000 | 0.4341 | 0.3980 |
| 1.3749 | 13.08 | 4500 | 0.4203 | 0.4011 |
| 1.3311 | 14.53 | 5000 | 0.4310 | 0.3961 |
| 1.317 | 15.99 | 5500 | 0.3898 | 0.4322 |
| 1.2799 | 17.44 | 6000 | 0.3806 | 0.3572 |
| 1.2771 | 18.89 | 6500 | 0.3828 | 0.3427 |
| 1.2451 | 20.35 | 7000 | 0.3702 | 0.3359 |
| 1.2182 | 21.8 | 7500 | 0.3685 | 0.3270 |
| 1.2152 | 23.26 | 8000 | 0.3650 | 0.3308 |
| 1.1837 | 24.71 | 8500 | 0.3568 | 0.3187 |
| 1.1721 | 26.16 | 9000 | 0.3659 | 0.3249 |
| 1.1764 | 27.61 | 9500 | 0.3547 | 0.3145 |
| 1.1606 | 29.07 | 10000 | 0.3514 | 0.3104 |
| 1.1431 | 30.52 | 10500 | 0.3469 | 0.3062 |
| 1.1047 | 31.97 | 11000 | 0.3313 | 0.2979 |
| 1.1315 | 33.43 | 11500 | 0.3298 | 0.2992 |
| 1.1022 | 34.88 | 12000 | 0.3296 | 0.2973 |
| 1.0935 | 36.34 | 12500 | 0.3278 | 0.2926 |
| 1.0676 | 37.79 | 13000 | 0.3208 | 0.2868 |
| 1.0571 | 39.24 | 13500 | 0.3322 | 0.2885 |
| 1.0536 | 40.7 | 14000 | 0.3245 | 0.2831 |
| 1.0525 | 42.15 | 14500 | 0.3285 | 0.2826 |
| 1.0464 | 43.6 | 15000 | 0.3223 | 0.2796 |
| 1.0415 | 45.06 | 15500 | 0.3166 | 0.2774 |
| 1.0356 | 46.51 | 16000 | 0.3177 | 0.2746 |
| 1.04 | 47.96 | 16500 | 0.3150 | 0.2735 |
| 1.0209 | 49.42 | 17000 | 0.3175 | 0.2731 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id hf-test/xls-r-300m-sv --dataset mozilla-foundation/common_voice_7_0 --config sv-SE --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id hf-test/xls-r-300m-sv --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "hf-test/xls-r-300m-sv"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "sv-SE", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "jag lämnade grovjobbet åt honom"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 24.68 | 16.98 |
|
hiiamsid/BETO_es_binary_classification | 83699a5e8e265d5248eab86e59b1a96fc0888f73 | 2021-09-23T11:16:37.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"dataset:self made to classify whether text is related to technology or not.",
"transformers",
"ticket classification",
"license:apache-2.0"
]
| text-classification | false | hiiamsid | null | hiiamsid/BETO_es_binary_classification | 14 | 2 | transformers | 9,850 | ---
language:
- es
tags:
- es
- ticket classification
license: "apache-2.0"
datasets:
- self made to classify whether text is related to technology or not.
metrics:
- fscore
- accuracy
- precision
- recall
---
# BETO(cased)
This model was built using PyTorch.
## Model description
Input for the model: any Spanish text
Output for the model: sentiment (0 - negative, 1 - positive, i.e. technology-related)
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
# Load the fine-tuned classifier and its tokenizer
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification")
model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
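To turn the raw model output into the 0/1 sentiment described above, apply a softmax and take the argmax over the logits (a minimal sketch, reusing `output` from the snippet above):
```python
import torch

probs = torch.softmax(output.logits, dim=-1)
predicted_class = int(torch.argmax(probs, dim=-1))  # 0 = negative, 1 = positive (technology-related)
print(predicted_class, probs.tolist())
```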
## Training procedure
I fine-tuned [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the dataset described above.
|
hoanhkhoa/bert-base-uncased-finetuned-ner | 2d02d3e486b52bb9ab7180a7daa0e27667a67e96 | 2021-08-17T03:17:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | hoanhkhoa | null | hoanhkhoa/bert-base-uncased-finetuned-ner | 14 | null | transformers | 9,851 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
metric:
name: Accuracy
type: accuracy
value: 0.9853695435592783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9247
- Recall: 0.9343
- F1: 0.9295
- Accuracy: 0.9854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2082 | 1.0 | 753 | 0.0657 | 0.8996 | 0.9256 | 0.9125 | 0.9821 |
| 0.0428 | 2.0 | 1506 | 0.0595 | 0.9268 | 0.9343 | 0.9305 | 0.9848 |
| 0.0268 | 3.0 | 2259 | 0.0604 | 0.9247 | 0.9343 | 0.9295 | 0.9854 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
huggingartists/melanie-martinez | c06dc0052b57ab966c0dc64c0a269f6a794ee001 | 2021-09-19T17:22:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/melanie-martinez",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/melanie-martinez | 14 | null | transformers | 9,852 | ---
language: en
datasets:
- huggingartists/melanie-martinez
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/917de5970c2afbbf03a7705f18eb6951.811x811x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Melanie Martinez</div>
<a href="https://genius.com/artists/melanie-martinez">
<div style="text-align: center; font-size: 14px;">@melanie-martinez</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Melanie Martinez.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/melanie-martinez).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/melanie-martinez")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/lb3ks0y5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Melanie Martinez's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2rvs9wvc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2rvs9wvc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/melanie-martinez')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/melanie-martinez")
model = AutoModelWithLMHead.from_pretrained("huggingartists/melanie-martinez")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingface/funnel-small-base | ac5132872928a3a38977b6a644bddbd564edd5ff | 2020-08-31T23:51:41.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"transformers"
]
| feature-extraction | false | huggingface | null | huggingface/funnel-small-base | 14 | null | transformers | 9,853 | Entry not found |
huggingface-course/bert-finetuned-ner-accelerate | 7f4db82b7b29a428b52ab0a65e25b1d240f1a63d | 2021-10-07T14:07:48.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | huggingface-course | null | huggingface-course/bert-finetuned-ner-accelerate | 14 | null | transformers | 9,854 | Entry not found |
huggingtweets/afm_marketing | aa48f49db5b0dc1104088189e5fa9952522d1acb | 2021-12-02T01:51:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/afm_marketing | 14 | null | transformers | 9,855 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1216156392/afm-marketing_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AFM Marketing</div>
<div style="text-align: center; font-size: 14px;">@afm_marketing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AFM Marketing.
| Data | AFM Marketing |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 1051 |
| Short tweets | 64 |
| Tweets kept | 2123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6tgdc3wa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afm_marketing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36mudapr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36mudapr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/afm_marketing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cocacola | 8324a6aec25f3220f8f2a001d383c75b99462eec | 2021-06-25T16:35:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/cocacola | 14 | null | transformers | 9,856 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1234873883850952704/JQhv0G7n_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coca-Cola</div>
<div style="text-align: center; font-size: 14px;">@cocacola</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coca-Cola.
| Data | Coca-Cola |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 101 |
| Tweets kept | 3149 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7oxqhbkd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cocacola's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3l65cvcu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3l65cvcu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cocacola')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/joebiden-potus | 9cf7391362d72de85f405450d6eb2dbea773d7f5 | 2021-06-09T15:51:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/joebiden-potus | 14 | null | transformers | 9,857 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1380530524779859970/TfwVAbyX_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308769664240160770/AfgzWVE7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">President Biden & Joe Biden</div>
<div style="text-align: center; font-size: 14px;">@joebiden-potus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from President Biden & Joe Biden.
| Data | President Biden | Joe Biden |
| --- | --- | --- |
| Tweets downloaded | 872 | 3250 |
| Retweets | 32 | 384 |
| Short tweets | 3 | 38 |
| Tweets kept | 837 | 2828 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1c3s9vhj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joebiden-potus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tcstvtkt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tcstvtkt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joebiden-potus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/johnowhitaker | 1a1aec15cb476379db862450b9c0bbdd0ac45c7e | 2021-08-11T10:36:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/johnowhitaker | 14 | null | transformers | 9,858 | ---
language: en
thumbnail: https://www.huggingtweets.com/johnowhitaker/1628678191103/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1165660747504005120/5nA4Go6i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jonathan Whitaker</div>
<div style="text-align: center; font-size: 14px;">@johnowhitaker</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jonathan Whitaker.
| Data | Jonathan Whitaker |
| --- | --- |
| Tweets downloaded | 508 |
| Retweets | 45 |
| Short tweets | 13 |
| Tweets kept | 450 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2iuk80nc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @johnowhitaker's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xsei074) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xsei074/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/johnowhitaker')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
iarfmoose/roberta-base-bulgarian-pos | 3465e385a9a61045c61bc48b963fef1f2be991eb | 2021-05-20T16:49:07.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"bg",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
]
| token-classification | false | iarfmoose | null | iarfmoose/roberta-base-bulgarian-pos | 14 | null | transformers | 9,859 | ---
language: bg
---
# RoBERTa-base-bulgarian-POS
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).
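Below is a minimal sketch of that last-token strategy, assuming a fast tokenizer so that `word_ids()` is available; the example sentence and variable names are illustrative only:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("iarfmoose/roberta-base-bulgarian-pos")
model = AutoModelForTokenClassification.from_pretrained("iarfmoose/roberta-base-bulgarian-pos")

text = "Това е примерно изречение."
encoded = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoded).logits[0]

# Keep only the prediction for the last sub-token of each word
word_ids = encoded.word_ids(0)
last_token_per_word = {w: i for i, w in enumerate(word_ids) if w is not None}
for word_idx, token_idx in last_token_per_word.items():
    tag = model.config.id2label[int(logits[token_idx].argmax())]
    print(word_idx, tag)
```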
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set.
|
iarfmoose/roberta-small-bulgarian-pos | 47b521b0fcd0275a142c92268fd628e96541441f | 2021-05-20T16:52:10.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"bg",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
]
| token-classification | false | iarfmoose | null | iarfmoose/roberta-small-bulgarian-pos | 14 | 1 | transformers | 9,860 | ---
language: bg
---
# RoBERTa-small-bulgarian-POS
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-small-Bulgarian](https://huggingface.co/iarfmoose/roberta-small-bulgarian) fine-tuned for part-of-speech tagging.
## Intended uses
The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token.
An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py).
## Limitations and bias
The pretraining data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
In addition to the pretraining data used in [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB).
## Training procedure
The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 98% on the test set.
|
ielab/TILDEv2-TILDE200-exp | 390e5de269cfbef944423cd375f1b8e46384645c | 2021-10-31T13:50:55.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | ielab | null | ielab/TILDEv2-TILDE200-exp | 14 | null | transformers | 9,861 | TILDEv2 trained with passages expanded with TILDE (m=200) |
ietz/comment-linking-distilbert-base-german-cased | 901728b6dfde80b7e8b9cdb8c8609e2f7e82568d | 2020-10-22T17:41:09.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | ietz | null | ietz/comment-linking-distilbert-base-german-cased | 14 | null | transformers | 9,862 | Entry not found |
infinitejoy/wav2vec2-large-xls-r-300m-armenian | 700595bd88ddafe6f8bbfda84c7627b0621812fb | 2022-03-24T11:55:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hy-AM",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-armenian | 14 | null | transformers | 9,863 | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Armenian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: hy-AM
metrics:
- name: Test WER
type: wer
value: 101.627
- name: Test CER
type: cer
value: 158.767
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-armenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9669
- Wer: 0.6942
## Model description
More information needed
## Intended uses & limitations
More information needed
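As a rough sketch of how the checkpoint could be used for transcription (the audio file below is hypothetical and assumed to be a 16 kHz mono recording, which this card does not specify):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("infinitejoy/wav2vec2-large-xls-r-300m-armenian")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/wav2vec2-large-xls-r-300m-armenian")

speech, sample_rate = torchaudio.load("example.wav")  # hypothetical 16 kHz mono recording
inputs = processor(speech.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
```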
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.7294 | 27.78 | 500 | 0.8540 | 0.9944 |
| 0.8863 | 55.56 | 1000 | 0.7282 | 0.7312 |
| 0.5789 | 83.33 | 1500 | 0.8178 | 0.8102 |
| 0.3899 | 111.11 | 2000 | 0.8034 | 0.7701 |
| 0.2869 | 138.89 | 2500 | 0.9061 | 0.6999 |
| 0.1934 | 166.67 | 3000 | 0.9400 | 0.7105 |
| 0.1551 | 194.44 | 3500 | 0.9667 | 0.6955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-bulgarian | fa202d261efd557145f4d957759e4e2f119add7c | 2022-03-24T11:47:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-bulgarian | 14 | 1 | transformers | 9,864 | ---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- bg
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Bulgarian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: bg
metrics:
- name: Test WER
type: wer
value: 46.68
- name: Test CER
type: cer
value: 10.75
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 63.68
- name: Test CER
type: cer
value: 19.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 64.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4487
- Wer: 0.4674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9774 | 6.33 | 500 | 2.9769 | 1.0 |
| 1.3453 | 12.66 | 1000 | 0.6523 | 0.6980 |
| 1.1658 | 18.99 | 1500 | 0.5636 | 0.6359 |
| 1.0797 | 25.32 | 2000 | 0.5004 | 0.5759 |
| 1.044 | 31.65 | 2500 | 0.4958 | 0.5569 |
| 0.9915 | 37.97 | 3000 | 0.4971 | 0.5350 |
| 0.9429 | 44.3 | 3500 | 0.4829 | 0.5229 |
| 0.9266 | 50.63 | 4000 | 0.4515 | 0.5074 |
| 0.8965 | 56.96 | 4500 | 0.4599 | 0.5039 |
| 0.878 | 63.29 | 5000 | 0.4735 | 0.4954 |
| 0.8494 | 69.62 | 5500 | 0.4460 | 0.4878 |
| 0.8343 | 75.95 | 6000 | 0.4510 | 0.4795 |
| 0.8236 | 82.28 | 6500 | 0.4538 | 0.4789 |
| 0.8069 | 88.61 | 7000 | 0.4526 | 0.4748 |
| 0.7958 | 94.94 | 7500 | 0.4496 | 0.4700 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
it5/mt5-base-news-summarization | fc828f00595bf8c1b0148aa8018c3934b3d05a04 | 2022-03-09T07:51:55.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| summarization | false | it5 | null | it5/mt5-base-news-summarization | 14 | null | transformers | 9,865 | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
model-index:
- name: mt5-base-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.340
name: "Test Rouge1 IlPost"
- type: rouge2
value: 0.164
name: "Test Rouge2 IlPost"
- type: rougeL
value: 0.275
name: "Test RougeL IlPost"
- type: bertscore
value: 0.399
name: "Test BERTScore IlPost"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: rouge1
value: 0.341
name: "Test Rouge1 Fanpage"
- type: rouge2
value: 0.158
name: "Test Rouge2 Fanpage"
- type: rougeL
value: 0.249
name: "Test RougeL Fanpage"
- type: bertscore
value: 0.387
name: "Test BERTScore Fanpage"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Base for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/mt5-base-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jannesg/takalane_afr_roberta | 728703455b883cc6b7f5001f306c460b286b3fe0 | 2021-09-22T08:51:59.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"af",
"transformers",
"masked-lm",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | jannesg | null | jannesg/takalane_afr_roberta | 14 | null | transformers | 9,866 | ---
language:
- af
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- af
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Salie - Afrikaans 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_afr_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_afr_roberta")
```
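For a quick check of the masked-language-modelling head, a fill-mask pipeline could be used as sketched below; the Afrikaans example sentence is illustrative only:
```python
from transformers import pipeline

# RoBERTa-style models use the <mask> token
fill_mask = pipeline("fill-mask", model="jannesg/takalane_afr_roberta")
print(fill_mask("Ek hou van <mask>."))
```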
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 2.8M
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jcblaise/electra-tagalog-base-cased-generator | e58b2d50c2693be2dc48ddc4fb8330c93a8b549e | 2021-11-11T06:19:45.000Z | [
"pytorch",
"electra",
"fill-mask",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"autotrain_compatible"
]
| fill-mask | false | jcblaise | null | jcblaise/electra-tagalog-base-cased-generator | 14 | null | transformers | 9,867 | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Cased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
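A minimal mask-filling sketch is shown below; the Tagalog example sentence is an illustrative assumption:
```python
from transformers import pipeline

# ELECTRA generators use a BERT-style [MASK] token
fill_mask = pipeline("fill-mask", model="jcblaise/electra-tagalog-base-cased-generator")
print(fill_mask("Maganda ang [MASK] ngayon."))
```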
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
junnyu/roformer_small_generator | ab9a762f22c2f2c8071cf1e50880e9522eb5eb33 | 2021-09-22T08:54:25.000Z | [
"pytorch",
"roformer",
"fill-mask",
"en",
"dataset:openwebtext",
"transformers",
"electra",
"masked-lm",
"rotary position embedding",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | junnyu | null | junnyu/roformer_small_generator | 14 | null | transformers | 9,868 | ---
language: en
thumbnail: https://github.com/junnyu
tags:
- pytorch
- electra
- masked-lm
- rotary position embedding
widget:
- text: Paris is the [MASK] of France.
license: mit
datasets:
- openwebtext
---
# 1. An ELECTRA-small model I trained on the openwebtext dataset with rotary position embedding added
# 2. Reproduced results (dev dataset)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-RoFormer-Small-OWT (this)**|55.76|90.45|87.3|86.64|89.61|81.17|88.85|62.71|80.31|
# 3. Training details
- Dataset: openwebtext
- Training batch_size: 256
- Learning rate (lr): 5e-4
- Max sentence length (max_seqlen): 128
- Total training steps: 500k
- GPU: RTX 3090
- Total training time: about 55h
# 4. W&B logs
- [**Pretraining logs**](https://wandb.ai/junyu/electra_rotary_small_pretrain?workspace=user-junyu)
- [**GLUE fine-tuning logs**](https://wandb.ai/junyu/electra_rotary_glue_100?workspace=user-junyu)
# 5. Usage
```python
import torch
from transformers import ElectraTokenizer,RoFormerForMaskedLM
text = "Beijing is the capital of [MASK]."
tokenizer = ElectraTokenizer.from_pretrained("junnyu/roformer_small_generator")
pt_model = RoFormerForMaskedLM.from_pretrained(
"junnyu/roformer_small_generator")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))+" "
print(pt_outputs_sentence)
# pytorch: beijing is the capital of [china||beijing||taiwan||india||shanghai].
``` |
kingabzpro/wav2vec2-urdu | 3782bfddee2a0d155414e1da9c4217c1a65b4f95 | 2022-03-23T18:27:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-urdu | 14 | null | transformers | 9,869 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-urdu
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice ur
args: ur
metrics:
- type: wer
value: 52.4
name: Test WER
args:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
- mixed_precision_training: Native AMP
- type: cer
value: 26.46
name: Test CER
args:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
- mixed_precision_training: Native AMP
- type: wer
value: 45.63
name: Test WER LM CV8
- type: cer
value: 20.45
name: Test CER LM CV8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Urdu
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.5747
- Cer: 0.3268
## Model description
The training and validation data together amount to only 0.58 hours of audio. It was hard to train any model on such a small amount of data, so I decided to take the vakyansh-wav2vec2-urdu-urm-60 checkpoint and fine-tune the wav2vec2 model from it.
## Training procedure
Fine-tuned from Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 due to the small number of training samples.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 4.3054 | 16.67 | 50 | 9.0055 | 0.8306 | 0.4869 |
| 2.0629 | 33.33 | 100 | 9.5849 | 0.6061 | 0.3414 |
| 0.8966 | 50.0 | 150 | 4.8686 | 0.6052 | 0.3426 |
| 0.4197 | 66.67 | 200 | 12.3261 | 0.5817 | 0.3370 |
| 0.294 | 83.33 | 250 | 11.9653 | 0.5712 | 0.3328 |
| 0.2329 | 100.0 | 300 | 7.6846 | 0.5747 | 0.3268 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
krlng/sts-GBERT-bi-encoder | 80f9a1ffa1dd01aac389f59883f0156fa6bc5dc3 | 2021-09-07T15:02:07.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | krlng | null | krlng/sts-GBERT-bi-encoder | 14 | null | sentence-transformers | 9,870 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sts-GBERT-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('krlng/sts-GBERT-bi-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('krlng/sts-GBERT-bi-encoder')
model = AutoModel.from_pretrained('krlng/sts-GBERT-bi-encoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sts-GBERT-bi-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 859 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 344,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
laboro-ai/distilbert-base-japanese-finetuned-ddqa | baf532fa7bd1fa6ea44ab069db5b5351d59277b6 | 2020-12-18T03:10:13.000Z | [
"pytorch",
"distilbert",
"question-answering",
"ja",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
]
| question-answering | false | laboro-ai | null | laboro-ai/distilbert-base-japanese-finetuned-ddqa | 14 | 1 | transformers | 9,871 | ---
language: ja
tags:
- distilbert
license: cc-by-nc-4.0
---
|
lgris/bp500-xlsr | 8defc1df867be01167ab65ec6ad3c55c6576914d | 2022-04-01T20:33:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"arxiv:2012.03411",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | lgris | null | lgris/bp500-xlsr | 14 | 1 | transformers | 9,872 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp400-xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 13.6
license: apache-2.0
---
# bp500-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus;
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the [oficial site](https://commonvoice.mozilla.org/pt);
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control;
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers;
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation/test respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 93.9h | -- | 5.4h |
| Common Voice | 37.6h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h |
| SID | 5.0h | -- | 1.0h |
| VoxForge | 2.8h | -- | 0.1h |
| Total | 437.2h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/1J8aR1ltDLQFe-dVrGuyxoRm2uyJjCWgf/view?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_500 (demonstration below) | 0.051 | 0.136 | 0.032 | 0.118 | 0.095 | 0.248 | 0.082 | 0.108 |
| bp\_500 + 4-gram (demonstration below) | 0.032 | 0.097 | 0.022 | 0.114 | 0.125 | 0.246 | 0.065 | 0.100 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|não há um departamento de mediadores independente das federações e das agremiações|não há um **dearamento** de mediadores independente das federações e das **agrebiações**|
|mas que bodega|**masque** bodega|
|a cortina abriu o show começou|a cortina abriu o **chô** começou|
|por sorte havia uma passadeira|**busote avinhoa** **passadeiro**|
|estou maravilhada está tudo pronto|**stou** estou maravilhada está tudo pronto|
## Demonstration
```python
MODEL_NAME = "lgris/bp500-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05159097808687998
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.13659981509705973
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.03196969696969697
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1178481066463896
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.09544588416964224
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24868046340420813
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.08246076839826841
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
### Cetuc
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.03222801788375573
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09713866021093655
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.022310606060606065
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.11408590958696524
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.12502797252979136
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24603179403904793
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.06542207792207791
|
liaad/srl-pt_bertimbau-large | ae2c786be53e69d02fbd923273bbfa92148c6184 | 2021-09-22T08:56:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"multilingual",
"pt",
"dataset:PropBank.Br",
"arxiv:2101.01213",
"transformers",
"bert-large-portuguese-cased",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
]
| feature-extraction | false | liaad | null | liaad/srl-pt_bertimbau-large | 14 | 1 | transformers | 9,873 | ---
language:
- multilingual
- pt
tags:
- bert-large-portuguese-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
metrics:
- F1 Measure
---
# BERTimbau large fine-tuned on Portuguese semantic role labeling
## Model description
This model is [`neuralmind/bert-large-portuguese-cased`](https://huggingface.co/neuralmind/bert-large-portuguese-cased) fine-tuned on Portuguese semantic role labeling data. It is part of a project that produced the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_bertimbau-large")
model = AutoModel.from_pretrained("liaad/srl-pt_bertimbau-large")
```
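For example, a minimal sketch of running the encoder on a sentence (the sentence and printed shape are purely illustrative):
```python
import torch

# Encode one Portuguese sentence with the fine-tuned encoder loaded above.
inputs = tokenizer("O menino jogou a bola na rua.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
token_embeddings = outputs.last_hidden_state  # shape: (1, sequence_length, 1024)
print(token_embeddings.shape)
```
The SRL decoding layer described in the repository operates on top of such contextual embeddings.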
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Training procedure
The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda | 514f79fae0ced7d927aebfdb20804cbfe75ca9c5 | 2021-11-25T09:04:05.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"rw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda | 14 | null | transformers | 9,874 | ---
language:
- rw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
---
# xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-kinyarwanda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Kinyarwanda part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
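For rough reference, that setup corresponds approximately to the following `TrainingArguments` (a sketch only; the actual training scripts, data loading and label alignment live in the main Github repository, and the output directory name is made up):
```
from transformers import TrainingArguments

# Sketch of the fine-tuning hyperparameters described above, not the exact script.
training_args = TrainingArguments(
    output_dir="xlmr-kin-ner",        # hypothetical output path
    num_train_epochs=50,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    seed=1,                           # 5 different seeds were tried; the best-performing run was uploaded
)
# The maximum sequence length of 200 is enforced during tokenization, not via TrainingArguments.
```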
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were longer than 3 words and with entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda) (This model) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | kin | 79.55 | 75.56 | 83.99 | 69.00 | 79.00 | 77.00 | 90.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-kinyarwanda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | kin | 76.31 | 72.64 | 80.37 | 70.00 | 76.00 | 75.00 | 84.00 |
| [xlm-roberta-base-finetuned-ner-kinyarwanda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-kinyarwanda) | [base](https://huggingface.co/xlm-roberta-base) | kin | 74.59 | 72.17 | 77.17 | 70.00 | 75.00 | 70.00 | 82.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-kinyarwanda'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ambasaderi wa EU mu Rwanda , Nicola Bellomo yagize ati “ Inkunga yacu ni imwe mu nkunga yagutse yiswe # TeamEurope ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili | b5779a26afab96ea7d567cbad1019f8ae17d4b10 | 2021-11-25T09:04:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili | 14 | null | transformers | 9,875 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-naija-finetuned-ner-swahili
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-naija](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were longer than 3 words and with entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
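The aggregate F1 reported below is the standard entity-level (CoNLL-style) score; one common way to compute it is with `seqeval`, sketched here on made-up label sequences:
```
from seqeval.metrics import f1_score, precision_score, recall_score

# Illustrative gold and predicted tag sequences using the scheme listed under "Model Structure".
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]
print("F1:", f1_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:", recall_score(y_true, y_pred))
```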
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) (This model) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
miguelvictor/python-gpt2-medium | 5f70aa97c3637b414a4fdc9469435e7c4d494a70 | 2021-05-23T09:34:26.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | miguelvictor | null | miguelvictor/python-gpt2-medium | 14 | null | transformers | 9,876 | Entry not found |
nikunjbjj/jd-resume-model | f04e1548d969e3cf05c97093f36835bb82a2dc1d | 2021-05-20T01:50:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | nikunjbjj | null | nikunjbjj/jd-resume-model | 14 | null | transformers | 9,877 | # Sentiment Analysis in Spanish
## beto-sentiment-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with the TASS 2020 corpus (around 5k tweets) covering several dialects of Spanish. The base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.
Uses `POS`, `NEG`, `NEU` labels.
**Coming soon**: a brief paper describing the model and training.
Enjoy! 🤗
|
orendar/language_model | be83c4a7278ea793c995a1689ce37d1ea247d00f | 2021-06-09T06:42:58.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | orendar | null | orendar/language_model | 14 | null | transformers | 9,878 | Entry not found |
patrickvonplaten/phoneme_test_5_sv | bd3fb524c36cc5695f3bee7e158600e0842e4908 | 2021-12-08T17:13:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:multilingual_librispeech",
"transformers",
"multilingual_librispeech",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/phoneme_test_5_sv | 14 | null | transformers | 9,879 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- multilingual_librispeech
- generated_from_trainer
datasets:
- multilingual_librispeech
model-index:
- name: wav2vec2-300m-mls-german-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-300m-mls-german-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MULTILINGUAL_LIBRISPEECH - GERMAN 10h dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2398
- Wer: 0.1520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.0132 | 7.25 | 500 | 2.9393 | 1.0 |
| 2.9241 | 14.49 | 1000 | 2.8734 | 1.0 |
| 1.0766 | 21.74 | 1500 | 0.2773 | 0.2488 |
| 0.8416 | 28.99 | 2000 | 0.2224 | 0.1990 |
| 0.8048 | 36.23 | 2500 | 0.2063 | 0.1792 |
| 0.7664 | 43.48 | 3000 | 0.2088 | 0.1748 |
| 0.6571 | 50.72 | 3500 | 0.2042 | 0.1668 |
| 0.7014 | 57.97 | 4000 | 0.2136 | 0.1649 |
| 0.6171 | 65.22 | 4500 | 0.2139 | 0.1641 |
| 0.6609 | 72.46 | 5000 | 0.2144 | 0.1621 |
| 0.6318 | 79.71 | 5500 | 0.2129 | 0.1600 |
| 0.6222 | 86.96 | 6000 | 0.2124 | 0.1582 |
| 0.608 | 94.2 | 6500 | 0.2255 | 0.1639 |
| 0.6099 | 101.45 | 7000 | 0.2265 | 0.1622 |
| 0.6069 | 108.7 | 7500 | 0.2246 | 0.1593 |
| 0.5929 | 115.94 | 8000 | 0.2323 | 0.1617 |
| 0.6218 | 123.19 | 8500 | 0.2287 | 0.1566 |
| 0.5751 | 130.43 | 9000 | 0.2275 | 0.1563 |
| 0.5181 | 137.68 | 9500 | 0.2316 | 0.1579 |
| 0.6306 | 144.93 | 10000 | 0.2372 | 0.1556 |
| 0.5874 | 152.17 | 10500 | 0.2362 | 0.1533 |
| 0.5546 | 159.42 | 11000 | 0.2342 | 0.1543 |
| 0.6294 | 166.67 | 11500 | 0.2381 | 0.1536 |
| 0.5989 | 173.91 | 12000 | 0.2360 | 0.1527 |
| 0.5697 | 181.16 | 12500 | 0.2399 | 0.1526 |
| 0.5379 | 188.41 | 13000 | 0.2375 | 0.1523 |
| 0.5022 | 195.65 | 13500 | 0.2395 | 0.1519 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/reformer-random | 3f9291d6b99e6fd1e324b6125af81631bcde37e5 | 2021-05-20T02:18:08.000Z | [
"pytorch",
"bert",
"text-generation",
"transformers"
]
| text-generation | false | patrickvonplaten | null | patrickvonplaten/reformer-random | 14 | null | transformers | 9,880 | Entry not found |
philschmid/BERT-tweet-eval-emotion | d2088dc6099afd0074be9b10c388ef82fd123263 | 2021-10-07T13:19:11.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:tweet_eval",
"transformers",
"autonlp",
"model-index"
]
| text-classification | false | philschmid | null | philschmid/BERT-tweet-eval-emotion | 14 | null | transformers | 9,881 | ---
tags: autonlp
language: en
widget:
- text: "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"
datasets:
- tweet_eval
model-index:
- name: BERT-tweet-eval-emotion
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "tweeteval"
type: tweet-eval
metrics:
- name: Accuracy
type: accuracy
value: 81.00
- name: Macro F1
type: macro-f1
value: 77.37
- name: Weighted F1
type: weighted-f1
value: 80.63
---
# `BERT-tweet-eval-emotion` trained using autoNLP
- Problem type: Multi-class Classification
## Validation Metrics
- Loss: 0.5408923625946045
- Accuracy: 0.8099929627023223
- Macro F1: 0.7737195387641751
- Micro F1: 0.8099929627023222
- Weighted F1: 0.8063100677512649
- Macro Precision: 0.8083955817268176
- Micro Precision: 0.8099929627023223
- Weighted Precision: 0.8104009668394634
- Macro Recall: 0.7529197049888299
- Micro Recall: 0.8099929627023223
- Weighted Recall: 0.8099929627023223
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"}' https://api-inference.huggingface.co/models/philschmid/BERT-tweet-eval-emotion
```
Or Python API:
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/BERT-tweet-eval-emotion'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier("Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry")
``` |
pszemraj/t5-large-for-lexical-analysis | dfcc0b9ace293cf0a3f5e70625de890f41923e4f | 2022-02-22T23:16:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"arxiv:2105.08209",
"transformers",
"analysis",
"book",
"notes",
"autotrain_compatible"
]
| text2text-generation | false | pszemraj | null | pszemraj/t5-large-for-lexical-analysis | 14 | null | transformers | 9,882 | ---
language:
- en
tags:
- t5
- analysis
- book
- notes
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: "I'm just a girl standing in front of a boy asking him to love her."
example_title: "Notting Hill"
- text: "Son, your ego is writing checks your body can't cash."
example_title: "top gun"
- text: "I really love to eat beans."
example_title: "beans"
- text: "The ledge, where I placed my candle, had a few mildewed books piled up in one corner; and it was covered with writing scratched on the paint. This writing, however, was nothing but a name repeated in all kinds of characters, large and small—Catherine Earnshaw, here and there varied to Catherine Heathcliff, and then again to Catherine Linton. In vapid listlessness I leant my head against the window, and continued spelling over Catherine Earnshaw—Heathcliff—Linton, till my eyes closed; but they had not rested five minutes when a glare of white letters started from the dark, as vivid as spectres—the air swarmed with Catherines; and rousing myself to dispel the obtrusive name, I discovered my candle wick reclining on one of the antique volumes, and perfuming the place with an odour of roasted calf-skin."
example_title: "Wuthering Heights"
- text: "Did you ever hear the tragedy of Darth Plagueis The Wise? I thought not. It’s not a story the Jedi would tell you. It’s a Sith legend. Darth Plagueis was a Dark Lord of the Sith, so powerful and so wise he could use the Force to influence the midichlorians to create life… He had such a knowledge of the dark side that he could even keep the ones he cared about from dying. The dark side of the Force is a pathway to many abilities some consider to be unnatural. He became so powerful… the only thing he was afraid of was losing his power, which eventually, of course, he did. Unfortunately, he taught his apprentice everything he knew, then his apprentice killed him in his sleep. Ironic. He could save others from death, but not himself."
example_title: "Darth Plagueis The Wise"
inference:
parameters:
no_repeat_ngram_size: 2
max_length: 64
early_stopping: True
---
# T5-large for Lexical Analysis
- This model was trained on a text-to-text task using the [booksum](https://arxiv.org/abs/2105.08209) dataset, with the summary of a chapter as the input text and the analysis of that chapter as the output text.
- It has, to some extent, learned how to produce literary analysis for arbitrary input text.
- **NOTE: this is fairly computationally intensive and recommended to be run on a GPU. Please see example usage in [this demo notebook](https://colab.research.google.com/gist/pszemraj/8e9cc5bee5cac7916ef9241b66e01b05/demo-t5-large-for-lexical-analysis.ipynb)**
- The API is set to return at most 64 tokens to avoid timeouts on CPU.
## Example
- PROMPT:
> Carmen: We're kids, not monsters. Dr. Romero: What's the difference?
- OUTPUT:
> Commentary on Act IV, scenes i-ii In these scenes, we see Dracula's transformation of the children into "monstrous" creatures. Doctor Romero says, "We're kidnapped, but not monsters." This is an important question for the audience to ask: Is there a difference between childhood and adulthood?
_Longer examples are available both in [the demo notebook](https://colab.research.google.com/gist/pszemraj/8e9cc5bee5cac7916ef9241b66e01b05/demo-t5-large-for-lexical-analysis.ipynb) and at the bottom of this README._
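For local use, a minimal sketch that mirrors the inference parameters above is shown here (beam search and the longer `max_length` are choices made for this example, not settings prescribed by the model card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/t5-large-for-lexical-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "I'm just a girl standing in front of a boy asking him to love her."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=128,           # the hosted API caps output at 64 tokens; locally a longer limit is fine
    no_repeat_ngram_size=2,
    num_beams=4,              # an assumption for this sketch; greedy decoding also works
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```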
# Checkpoints
- a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
- 16 epochs of training total
## Model description
- automatic literary analysis on arbitrary text
- `booksum` is a dataset created primarily for long-range summarization. It's excellent at that, and usually, this is done with the `chapter` and `summary_text` columns. However, there is a `summary_analysis` column as well, containing literary analysis on the passage in question :eyes:
- this model was trained on text-to-text with `summary_text` as the input and `summary_analysis` as the output, so it will analyze arbitrary input text for _deeper meaning_.
## Intended uses & limitations
- given the absence of more specific info, the analysis will reference authors/titles from the training data in the context of the analysis.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
## Extended Examples
the below responses from the model are unedited.
### A prose passage from _The Lord of the Rings_
> Far above the Ephel Duath in the West the night-sky was still dim and pale. There, peering among the cloud-wrack above a dark tor high up in the mountains, Sam saw a white star twinkle for a while. The beauty of it smote his heart, as he looked up out of the forsaken land, and hope returned to him. For like a shaft, clear and cold, the thought pierced him that in the end the Shadow was only a small and passing thing: there was light and high beauty for ever beyond its reach.
analysis:
> Notes Sam's encounter with the white star is one of the most memorable scenes in the novel. It was a moment of great relief for him to see that there was light beyond his reach, and he had been longing for it all along. He looked up into the heavens at the sight of it: "It was as if I were looking out through the window of my own soul" . In this chapter, however, we are reminded of another scene from the Book of Eliot, when Sam saw the White Star twinkled for the first time since the book began. The night-sky still shrouded by clouds and cloud-wreck on the mountain tops, but now the darkness has begun to darken the whole world. When Sam looks up above the dusk over the mountains, which pierces his heart, Sam realizes that the Shadow will not last forever.
### the rick and morty copypasta
> Notes This is a very long post, but it's important to remember that this is the first of many posts on this site. It will take some time for me to get over the fact that I don't have an incredibly high iq to appreciate all of the humor in the show. In other words, people who dislike the series are idiots- they have no idea what these jokes mean or why they're funny at all. The only way to truly appreciate them is to be one of their diehard fans. If you do not like them, you'll end up making fools out of your own stupidity. For example, Dan Harmon writes: "I hated him because I loved him so much" . He was also extremely intelligent, and he would never have been able to fully appreciate the depth of his humour. That said, however, is that if you did not love him, then you should go ahead and make fun of him. And yet another reason why Morty dislikes him is partly due to his lack of narcissism rather than any kind of self-delusion. But there is something special about Mr. Moriarty himself- despite his lowly wittedness, which makes him seem almost superstitious. His attitude towards life seems to stem from his belief that nothing can ever be good enough to save the world. However, as noted above, Dickens says, "Life is full of paradoxes and contradictions... Life is more complex than anything else." Indeed, most critics have pointed out that even those with lower IQ points could possibly be seen as being subversive; indeed, readers might find it hard to sympathize with such simpletons. Of course, Stevenson has made it clear that we need to look beyond the surface level of normalcy in order to understand the absurdity of modern society. There are several examples of this sort of hypocrisy going on in contemporary literature. One of my favorite books is Fathers Sons, written by Alexander Nevsky, published in 1897. These books were published around 18 years before the novel was published. They were serialised in serial format, meaning that they were produced in 1921. Their publication dates back to 1864, when they appeared in London during the late eighteenth century England. At the time of its publication date, it was released in November 1793. When it came out in December, the book had already been published after 1859. |
sagorsarker/codeswitch-spaeng-ner-lince | c872aad5c3c72de6fc7ddbb01f26e45b8d0d7b85 | 2021-05-19T01:16:32.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"en",
"dataset:lince",
"transformers",
"codeswitching",
"spanish-english",
"ner",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | sagorsarker | null | sagorsarker/codeswitch-spaeng-ner-lince | 14 | null | transformers | 9,883 | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- ner
---
# codeswitch-spaeng-ner-lince
This is a pretrained model for **Named Entity Recognition** of `spanish-english` code-mixed data from [LinCE](https://ritual.uh.edu/lince/home)
This model was trained for the repository below.
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Named Entity Recognition of Spanish-English Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-ner-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-spaeng-ner-lince")
ner_model = pipeline('ner', model=model, tokenizer=tokenizer)
ner_model("put any spanish english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import NER
ner = NER('spa-eng')
text = "" # your mixed sentence
result = ner.tag(text)
print(result)
```
|
salti/arabic-t5-small-question-paraphrasing | 430cdeea380c069b79229b9b7dcb0ae77b1c4332 | 2021-07-31T04:44:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ar",
"transformers",
"question-paraphrasing",
"autotrain_compatible"
]
| text2text-generation | false | salti | null | salti/arabic-t5-small-question-paraphrasing | 14 | 1 | transformers | 9,884 | ---
language:
- ar
tags:
- question-paraphrasing
widget:
- text: "أعد صياغة: ما عدد حروف اللغة العربية؟"
metrics:
- sacrebleu
- rouge
- meteor
---
# Arabic T5v1.1 for question paraphrasing
This is [arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) fine-tuned on the task of question paraphrasing.
A demo of the trained model using HF Spaces can be found [here](https://huggingface.co/spaces/salti/arabic-question-paraphrasing)
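A minimal usage sketch is shown below. It assumes the standard T5 text-to-text interface and the Arabic paraphrasing prefix from the widget example above; decoding settings are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "salti/arabic-t5-small-question-paraphrasing"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "أعد صياغة: ما عدد حروف اللغة العربية؟"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```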
## Training data
The model was fine-tuned using the [Semantic Question Similarity in Arabic](https://www.kaggle.com/c/nsurl-2019-task8/data) data on kaggle.
Only the rows of the dataset where the label is `True` (the two questions have the same meaning) were taken.
The training data was then also mirrored; so if `q1` and `q2` were two questions with the same meaning, then `(q1, q2)` and `(q2, q1)` were both present in the training set. The evaluation set was kept unmirrored of course.
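A tiny sketch of that mirroring step (the column names and rows are illustrative, not the actual schema of the Kaggle files):
```python
import pandas as pd

# Illustrative duplicate-question pairs (rows labelled True in the original data).
pairs = pd.DataFrame({"q1": ["سؤال أول"], "q2": ["سؤال ثانٍ"]})
# Keep (q1, q2) and also add (q2, q1) so both directions appear in the training set.
mirrored = pd.concat([pairs, pairs.rename(columns={"q1": "q2", "q2": "q1"})], ignore_index=True)
```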
## Training config
| | |
| :-------------: | :------: |
| `batch size` | 128 |
| `dropout rate` | 0.1 |
| `learning rate` | 0.001 |
| `lr schedule` | constant |
| `weight decay` | 1e-7 |
| `epochs` | 3 |
## Results
| | |
| :---------------: | :----: |
| `training loss` | 0.7086 |
| `evaluation loss` | 0.9819 |
| `meteor` | 49.277 |
| `sacreBLEU-1` | 57.088 |
| `sacreBLEU-2` | 39.846 |
| `sacreBLEU-3` | 29.444 |
| `sacreBLEU-4` | 22.601 |
| `Rouge F1 max` | 1.299 |
|
satyaalmasian/temporal_tagger_bert2bert | a2ee0420d1dea8b2ea41112281fbf48f8be5767e | 2021-09-21T11:23:36.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | satyaalmasian | null | satyaalmasian/temporal_tagger_bert2bert | 14 | null | transformers | 9,885 | # BERT2BERT temporal tagger
Seq2seq model for temporal tagging of plain text using the BERT language model. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and released in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
A RoBERTa version of the same model is also available [here](https://huggingface.co/satyaalmasian/temporal_tagger_roberta2roberta) and has better performance.
# Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use BERT in an encoder-decoder architecture for text generation, where the input is raw text and the output is the temporally annotated text. The model is pre-trained on a weakly annotated dataset from a rule-based system (HeidelTime) and fine-tuned on the temporal benchmark datasets (Wikiwars, Tweets, Tempeval-3).
# Intended uses & limitations
This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher; the repository provides cleaning functions that post-process the output and insert the temporal tags from the generated text into the input text. If you have temporally annotated data, you can fine-tune this model.
# How to use
You can load the model as follows:
```
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_bert2bert")
model = EncoderDecoderModel.from_pretrained("satyaalmasian/temporal_tagger_bert2bert")
```
for inference use:
```
model_inputs = tokenizer(input_text, truncation=True, return_tensors="pt")
out = model.generate(**model_inputs)
decoded_preds = tokenizer.batch_decode(out, skip_special_tokens=True)
```
for an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
to further fine-tune, use the `Seq2SeqTrainer` from hugginface. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_seq2seq_bert_roberta.py).
```
trainer = Seq2SeqTrainer(
model=model2model,
tokenizer=tokenizer,
args=training_args,
compute_metrics=metrics.compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
)
train_result=trainer.train()
```
where `training_args` is an instance of `Seq2SeqTrainingArguments`.
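For example, a sketch of such `training_args`, roughly matching the fine-tuning hyperparameters described in the training procedure below (the output directory is a placeholder):
```
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="temporal_tagger_bert2bert_finetuned",  # placeholder path
    per_device_train_batch_size=12,
    learning_rate=5e-5,
    num_train_epochs=8,
    warmup_steps=100,
    predict_with_generate=True,  # generate during evaluation, useful for seq2seq metrics
)
```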
# Training data
We use four data sources:
For pre-training: 1 million weakly annotated samples from HeidelTime. The samples are from news articles published between the 1st of January 2019 and the 30th of July.
For fine-tuning: the [Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars, and Tweets datasets. For the correct data versions, please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Training procedure
The model is pre-trained on the weakly labeled data for 3 epochs on the train set, starting from publicly available checkpoints on Hugging Face (`bert-base-uncased`), with a batch size of 12. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
Additionally, we use 2000 warmup steps.
We fine-tune on the 3 benchmark datasets for 8 epochs with 5 different random seeds; this version of the model corresponds to seed=4.
The batch size and the learning rate are the same as in the pre-training setup, but the warm-up steps are reduced to 100.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
For inference in seq2seq models, we use Greedy decoding, since beam search had sub-optimal results.
|
sebastian-hofstaetter/prettr-distilbert-split_at_3-margin_mse-T2-msmarco | b872cbc5e3e3223ab78159b9a309458d08686e75 | 2021-07-10T10:14:14.000Z | [
"pytorch",
"distilbert",
"en",
"dataset:ms_marco",
"arxiv:2004.14255",
"arxiv:2010.02666",
"transformers",
"knowledge-distillation"
]
| null | false | sebastian-hofstaetter | null | sebastian-hofstaetter/prettr-distilbert-split_at_3-margin_mse-T2-msmarco | 14 | null | transformers | 9,886 | ---
language: "en"
tags:
- knowledge-distillation
datasets:
- ms_marco
---
# Margin-MSE Trained PreTTR
We provide a retrieval-trained DistilBERT-based PreTTR model (https://arxiv.org/abs/2004.14255). Our model is trained with Margin-MSE using a 3-teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage.
This instance can be used to **re-rank a candidate set**. The architecture is a 6-layer DistilBERT, split at layer 3, with an additional single linear layer at the end for scoring the CLS token.
If you want to know more about our simple, yet effective knowledge distillation method for efficient information retrieval models for a variety of student architectures that is used for this model instance, check out our paper: https://arxiv.org/abs/2010.02666 🎉
For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd
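As a rough orientation before the full example in that repository, here is a small re-ranking sketch. It assumes the `PreTTR` class defined in the Model Code section further below and that the published checkpoint stores the scoring head under the parameter names used there; the query and passage are illustrative:
```python
from transformers import AutoTokenizer

model_id = "sebastian-hofstaetter/prettr-distilbert-split_at_3-margin_mse-T2-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = PreTTR.from_pretrained(model_id)  # PreTTR class from the "Model Code" section below

query = tokenizer("what is the capital of france", return_tensors="pt")
passage = tokenizer("Paris is the capital and largest city of France.", return_tensors="pt")

score = model(query, passage)  # one relevance score per query-passage pair; higher is more relevant
print(score)
```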
## Configuration
- We split the DistilBERT in half at layer 3
## Model Code
````python
from transformers import DistilBertModel, AutoTokenizer
from transformers.models.distilbert.modeling_distilbert import *
import copy  # needed for copy.deepcopy in SplitTransformer
import math
import torch
from torch import nn as nn


class PreTTRConfig(DistilBertConfig):
    join_layer_idx = 3


class PreTTR(DistilBertModel):
    '''
    PreTTR changes the distilbert model from huggingface to be able to split query and document until a set layer,
    we skipped compression present in the original
    from: Efficient Document Re-Ranking for Transformers by Precomputing Term Representations
    MacAvaney, et al. https://arxiv.org/abs/2004.14255
    '''
    config_class = PreTTRConfig

    def __init__(self, config):
        super().__init__(config)
        self.transformer = SplitTransformer(config)    # Encoder; we override the classes, but the names stay the same -> so it gets properly initialized
        self.embeddings = PosOffsetEmbeddings(config)  # Embeddings
        self._classification_layer = torch.nn.Linear(self.config.hidden_size, 1, bias=False)
        self.join_layer_idx = config.join_layer_idx

    def forward(self,
                query,
                document,
                use_fp16: bool = False) -> torch.Tensor:
        with torch.cuda.amp.autocast(enabled=use_fp16):
            query_input_ids = query["input_ids"]
            query_attention_mask = query["attention_mask"]

            document_input_ids = document["input_ids"][:, 1:]  # drop the document's leading [CLS] token
            document_attention_mask = document["attention_mask"][:, 1:]

            query_embs = self.embeddings(query_input_ids)                                   # (bs, seq_length, dim)
            document_embs = self.embeddings(document_input_ids, query_input_ids.shape[-1])  # (bs, seq_length, dim)

            tfmr_output = self.transformer(
                query_embs=query_embs,
                query_mask=query_attention_mask,
                doc_embs=document_embs,
                doc_mask=document_attention_mask,
                join_layer_idx=self.join_layer_idx
            )
            hidden_state = tfmr_output[0]

            score = self._classification_layer(hidden_state[:, 0, :]).squeeze()
            return score


class PosOffsetEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.dim, padding_idx=config.pad_token_id)
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.dim)
        if config.sinusoidal_pos_embds:
            create_sinusoidal_embeddings(
                n_pos=config.max_position_embeddings, dim=config.dim, out=self.position_embeddings.weight
            )
        self.LayerNorm = nn.LayerNorm(config.dim, eps=1e-12)
        self.dropout = nn.Dropout(config.dropout)

    def forward(self, input_ids, pos_offset=0):
        """
        Parameters
        ----------
        input_ids: torch.tensor(bs, max_seq_length)
            The token ids to embed.
        pos_offset: int
            Offset added to the position ids, so that document positions continue after the query positions.

        Outputs
        -------
        embeddings: torch.tensor(bs, max_seq_length, dim)
            The embedded tokens (plus position embeddings, no token_type embeddings)
        """
        seq_length = input_ids.size(1)
        position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)  # (max_seq_length)
        position_ids = position_ids.unsqueeze(0).expand_as(input_ids) + pos_offset          # (bs, max_seq_length)

        word_embeddings = self.word_embeddings(input_ids)             # (bs, max_seq_length, dim)
        position_embeddings = self.position_embeddings(position_ids)  # (bs, max_seq_length, dim)

        embeddings = word_embeddings + position_embeddings  # (bs, max_seq_length, dim)
        embeddings = self.LayerNorm(embeddings)             # (bs, max_seq_length, dim)
        embeddings = self.dropout(embeddings)               # (bs, max_seq_length, dim)
        return embeddings


class SplitTransformer(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.n_layers = config.n_layers

        layer = TransformerBlock(config)
        self.layer = nn.ModuleList([copy.deepcopy(layer) for _ in range(config.n_layers)])

    def forward(self, query_embs, query_mask, doc_embs, doc_mask, join_layer_idx, output_attentions=False, output_hidden_states=False):
        """
        Parameters
        ----------
        x: torch.tensor(bs, seq_length, dim)
            Input sequence embedded.
        attn_mask: torch.tensor(bs, seq_length)
            Attention mask on the sequence.

        Outputs
        -------
        hidden_state: torch.tensor(bs, seq_length, dim)
            Sequence of hidden states in the last (top) layer
        all_hidden_states: Tuple[torch.tensor(bs, seq_length, dim)]
            Tuple of length n_layers with the hidden states from each layer.
            Optional: only if output_hidden_states=True
        all_attentions: Tuple[torch.tensor(bs, n_heads, seq_length, seq_length)]
            Tuple of length n_layers with the attention weights from each layer
            Optional: only if output_attentions=True
        """
        all_hidden_states = ()
        all_attentions = ()

        #
        # query / doc separate
        #
        hidden_state_q = query_embs
        hidden_state_d = doc_embs
        for layer_module in self.layer[:join_layer_idx]:
            layer_outputs_q = layer_module(
                x=hidden_state_q, attn_mask=query_mask, head_mask=None, output_attentions=output_attentions
            )
            hidden_state_q = layer_outputs_q[-1]

            layer_outputs_d = layer_module(
                x=hidden_state_d, attn_mask=doc_mask, head_mask=None, output_attentions=output_attentions
            )
            hidden_state_d = layer_outputs_d[-1]

        #
        # combine query and document sequences
        #
        x = torch.cat([hidden_state_q, hidden_state_d], dim=1)
        attn_mask = torch.cat([query_mask, doc_mask], dim=1)

        #
        # combined
        #
        hidden_state = x
        for layer_module in self.layer[join_layer_idx:]:
            layer_outputs = layer_module(
                x=hidden_state, attn_mask=attn_mask, head_mask=None, output_attentions=output_attentions
            )
            hidden_state = layer_outputs[-1]

        # Add last layer
        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_state,)

        outputs = (hidden_state,)
        if output_hidden_states:
            outputs = outputs + (all_hidden_states,)
        if output_attentions:
            outputs = outputs + (all_attentions,)
        return outputs  # last-layer hidden state, (all hidden states), (all attentions)


#
# init the model & tokenizer (using the distilbert tokenizer)
#
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # honestly not sure if that is the best way to go, but it works :)
model = PreTTR.from_pretrained("sebastian-hofstaetter/prettr-distilbert-split_at_3-margin_mse-T2-msmarco")
````
## Effectiveness on MSMARCO Passage
We trained our model with knowledge distillation on the MSMARCO standard ("small", 400K-query) training triples, using a batch size of 32 on a single consumer-grade GPU (11GB memory).
For re-ranking we used the top-1000 BM25 results.
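As a minimal re-ranking sketch with the classes defined above (the query and passage texts are made-up examples), the model scores one tokenized query-passage pair like this:
```python
query = tokenizer("what is knowledge distillation", return_tensors="pt")
passage = tokenizer("Knowledge distillation transfers knowledge from a large teacher model to a smaller student model.", return_tensors="pt")

with torch.no_grad():
    score = model(query, passage)  # one relevance score per query-passage pair

print(float(score))
```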
### MSMARCO-DEV
Here, we use the larger 49K-query DEV set (results are in the same range as on the smaller 7K DEV set; only minimal changes are to be expected)
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .194 | .241 |
| **Margin-MSE PreTTR** (Re-ranking) | .386 | .447 |
For more metrics, baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666
## Limitations & Bias
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@misc{hofstaetter2020_crossarchitecture_kd,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury},
year={2020},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
seduerr/pai-tl | 3c6b463c383a1da5ae94ac471a5c3cfbb1de4e88 | 2021-04-06T05:37:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"transformers",
"summarization",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | seduerr | null | seduerr/pai-tl | 14 | null | transformers | 9,887 | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
sentence-transformers/nli-bert-large-max-pooling | 0d18f120af805907d3bb96df53da45297d1a9bfc | 2022-06-16T00:46:45.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-bert-large-max-pooling | 14 | null | sentence-transformers | 9,888 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-bert-large-max-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-bert-large-max-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    token_embeddings[input_mask_expanded == 0] = -1e9  # Set padding tokens to large negative value
    return torch.max(token_embeddings, 1)[0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-bert-large-max-pooling')
model = AutoModel.from_pretrained('sentence-transformers/nli-bert-large-max-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-bert-large-max-pooling)
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
shoarora/alectra-small-owt | 59978c5e61c0d36a392f9cb0396e9f547eba3644 | 2020-12-11T22:01:54.000Z | [
"pytorch",
"albert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | shoarora | null | shoarora/alectra-small-owt | 14 | null | transformers | 9,889 | # ALECTRA-small-OWT
This is an extension of the [ELECTRA](https://openreview.net/forum?id=r1xMH1BtvB) small model, trained on the
[OpenWebText corpus](https://skylion007.github.io/OpenWebTextCorpus/).
The training task (discriminative LM / replaced-token-detection) can be generalized to any transformer type. Here, we train an ALBERT model under the same scheme.
## Pretraining task

(figure from [Clark et al. 2020](https://openreview.net/pdf?id=r1xMH1BtvB))
ELECTRA uses discriminative LM / replaced-token-detection for pretraining.
This involves a generator (a Masked LM model) creating examples for a discriminator
to classify as original or replaced for each token.
The generator generalizes to any `*ForMaskedLM` model and the discriminator could be
any `*ForTokenClassification` model. Therefore, we can extend the task to ALBERT models,
not just BERT as in the original paper.
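Schematically, one training step pairs the two models roughly as follows (an illustrative sketch, not the repository's actual implementation; function and variable names are made up, and greedy sampling stands in for the real sampling strategy):
```python
import torch
import torch.nn.functional as F

def rtd_step(generator, discriminator, input_ids, masked_input_ids, mask_positions):
    # 1. The generator (a masked LM) proposes tokens for the masked positions.
    gen_logits = generator(input_ids=masked_input_ids).logits
    sampled_ids = gen_logits.argmax(dim=-1)  # greedy for simplicity

    # 2. Corrupt the sequence: swap in sampled tokens at the masked positions.
    corrupted_ids = torch.where(mask_positions, sampled_ids, input_ids)

    # 3. The discriminator classifies every token: 1 = replaced, 0 = original.
    labels = (corrupted_ids != input_ids).float()
    disc_logits = discriminator(input_ids=corrupted_ids).logits.squeeze(-1)
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```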
## Usage
```python
from transformers import AlbertForSequenceClassification, BertTokenizer
# Both models use the bert-base-uncased tokenizer and vocab.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
alectra = AlbertForSequenceClassification.from_pretrained('shoarora/alectra-small-owt')
```
NOTE: this ALBERT model uses a BERT WordPiece tokenizer.
## Code
The pytorch module that implements this task is available [here](https://github.com/shoarora/lmtuners/blob/master/lmtuners/lightning_modules/discriminative_lm.py).
Further implementation information [here](https://github.com/shoarora/lmtuners/tree/master/experiments/disc_lm_small),
and [here](https://github.com/shoarora/lmtuners/blob/master/experiments/disc_lm_small/train_alectra_small.py) is the script that created this model.
This specific model was trained with the following params:
- `batch_size: 512`
- `training_steps: 5e5`
- `warmup_steps: 4e4`
- `learning_rate: 2e-3`
## Downstream tasks
#### GLUE Dev results
| Model | # Params | CoLA | SST | MRPC | STS | QQP | MNLI | QNLI | RTE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ELECTRA-Small++ | 14M | 57.0 | 91. | 88.0 | 87.5 | 89.0 | 81.3 | 88.4 | 66.7|
| ELECTRA-Small-OWT | 14M | 56.8 | 88.3| 87.4 | 86.8 | 88.3 | 78.9 | 87.9 | 68.5|
| ELECTRA-Small-OWT (ours) | 17M | 56.3 | 88.4| 75.0 | 86.1 | 89.1 | 77.9 | 83.0 | 67.1|
| ALECTRA-Small-OWT (ours) | 4M | 50.6 | 89.1| 86.3 | 87.2 | 89.1 | 78.2 | 85.9 | 69.6|
#### GLUE Test results
| Model | # Params | CoLA | SST | MRPC | STS | QQP | MNLI | QNLI | RTE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-Base | 110M | 52.1 | 93.5| 84.8 | 85.9 | 89.2 | 84.6 | 90.5 | 66.4|
| GPT | 117M | 45.4 | 91.3| 75.7 | 80.0 | 88.5 | 82.1 | 88.1 | 56.0|
| ELECTRA-Small++ | 14M | 57.0 | 91.2| 88.0 | 87.5 | 89.0 | 81.3 | 88.4 | 66.7|
| ELECTRA-Small-OWT (ours) | 17M | 57.4 | 89.3| 76.2 | 81.9 | 87.5 | 78.1 | 82.4 | 68.1|
| ALECTRA-Small-OWT (ours) | 4M | 43.9 | 87.9| 82.1 | 82.0 | 87.6 | 77.9 | 85.8 | 67.5|
|
shtoshni/gpt2-chess-uci | 8c22eb8796d7570aa05a50feb3666bf9f0ee6073 | 2021-05-23T12:53:34.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | shtoshni | null | shtoshni/gpt2-chess-uci | 14 | null | transformers | 9,890 | GPT2 language model for chess in UCI notation
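A minimal generation sketch (the space-separated UCI prompt format is an assumption; the exact input formatting expected by this model is not documented here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shtoshni/gpt2-chess-uci")
model = AutoModelForCausalLM.from_pretrained("shtoshni/gpt2-chess-uci")

# Prompt with an opening in UCI notation and let the model continue the game.
inputs = tokenizer("e2e4 e7e5 g1f3", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(outputs[0]))
```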
|
stefan-it/electra-base-gc4-64k-1000000-cased-generator | 1e4d8f0692845e4fcac21aa3eea256a0f1ceb944 | 2021-05-01T11:24:59.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-1000000-cased-generator | 14 | null | transformers | 9,891 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
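Since this checkpoint is the ELECTRA generator (a masked language model), it can be tried out with the fill-mask pipeline, reusing the widget example from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="stefan-it/electra-base-gc4-64k-1000000-cased-generator")
print(fill_mask("Heute ist ein [MASK] Tag"))
```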
|
sureshs/distilbert-large-sms-spam | 8ad11555103f850d676e627fc47eab3316f71faf | 2021-08-14T14:10:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | sureshs | null | sureshs/distilbert-large-sms-spam | 14 | 1 | transformers | 9,892 | # SMS Classifier
Fine-tuned `distilbert-large` model for classifying SMS messages. Look at the SMS dataset on this hub to train your own version.
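A minimal usage sketch (the label names come from the checkpoint's config and are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sureshs/distilbert-large-sms-spam")
print(classifier("WINNER!! You have been selected for a free prize. Call now!"))
```
|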
test123/autonlp-ingredient_pseudo_label_training_ner-29576765 | 78806e96ad4804848edc1e9f15037b902cd0f720 | 2021-11-05T07:40:28.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:test123/autonlp-data-ingredient_pseudo_label_training_ner",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| token-classification | false | test123 | null | test123/autonlp-ingredient_pseudo_label_training_ner-29576765 | 14 | null | transformers | 9,893 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- test123/autonlp-data-ingredient_pseudo_label_training_ner
co2_eq_emissions: 129.63722838909717
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 29576765
- CO2 Emissions (in grams): 129.63722838909717
## Validation Metrics
- Loss: 0.0062578353099524975
- Accuracy: 0.9982143458254896
- Precision: 0.9832763577033642
- Recall: 0.9849215922798552
- F1: 0.9840982873583328
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/test123/autonlp-ingredient_pseudo_label_training_ner-29576765
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("test123/autonlp-ingredient_pseudo_label_training_ner-29576765", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("test123/autonlp-ingredient_pseudo_label_training_ner-29576765", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
tiennvcs/bert-large-uncased-finetuned-infovqa | 7362c88b502b4a5e63105a2163e745d65d0adabf | 2021-10-23T06:01:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | tiennvcs | null | tiennvcs/bert-large-uncased-finetuned-infovqa | 14 | null | transformers | 9,894 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-infovqa
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-infovqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.7861 | 0.12 | 1000 | 3.2778 |
| 3.2186 | 0.23 | 2000 | 3.0658 |
| 2.8504 | 0.35 | 3000 | 3.0456 |
| 2.8621 | 0.46 | 4000 | 2.8758 |
| 2.7851 | 0.58 | 5000 | 2.8680 |
| 2.8016 | 0.69 | 6000 | 2.9244 |
| 2.7592 | 0.81 | 7000 | 2.7735 |
| 2.5737 | 0.93 | 8000 | 2.7640 |
| 2.3493 | 1.04 | 9000 | 2.7257 |
| 2.1041 | 1.16 | 10000 | 2.8442 |
| 2.1713 | 1.27 | 11000 | 2.7723 |
| 2.0594 | 1.39 | 12000 | 2.9982 |
| 2.1825 | 1.5 | 13000 | 2.8272 |
| 2.2486 | 1.62 | 14000 | 2.8897 |
| 2.097 | 1.74 | 15000 | 2.8557 |
| 2.1645 | 1.85 | 16000 | 2.6342 |
| 2.15 | 1.97 | 17000 | 2.8680 |
| 1.5662 | 2.08 | 18000 | 3.2126 |
| 1.6168 | 2.2 | 19000 | 3.1646 |
| 1.5886 | 2.32 | 20000 | 3.3139 |
| 1.6539 | 2.43 | 21000 | 3.2610 |
| 1.6486 | 2.55 | 22000 | 3.3144 |
| 1.637 | 2.66 | 23000 | 3.0437 |
| 1.7186 | 2.78 | 24000 | 2.9936 |
| 1.7543 | 2.89 | 25000 | 3.1641 |
| 1.5301 | 3.01 | 26000 | 4.0560 |
| 1.1436 | 3.13 | 27000 | 4.0116 |
| 1.1902 | 3.24 | 28000 | 4.0240 |
| 1.2728 | 3.36 | 29000 | 4.3068 |
| 1.2586 | 3.47 | 30000 | 3.7894 |
| 1.3164 | 3.59 | 31000 | 3.9242 |
| 1.3093 | 3.7 | 32000 | 4.0444 |
| 1.2812 | 3.82 | 33000 | 4.1779 |
| 1.3165 | 3.94 | 34000 | 3.6633 |
| 0.8357 | 4.05 | 35000 | 5.8137 |
| 0.9583 | 4.17 | 36000 | 5.3305 |
| 0.9135 | 4.28 | 37000 | 5.4973 |
| 1.0011 | 4.4 | 38000 | 5.0349 |
| 0.9553 | 4.51 | 39000 | 5.2086 |
| 1.0182 | 4.63 | 40000 | 5.1197 |
| 0.9569 | 4.75 | 41000 | 5.4579 |
| 0.9437 | 4.86 | 42000 | 5.4467 |
| 0.9791 | 4.98 | 43000 | 4.7657 |
| 0.648 | 5.09 | 44000 | 6.5780 |
| 0.7528 | 5.21 | 45000 | 6.2827 |
| 0.7247 | 5.33 | 46000 | 6.8500 |
| 0.702 | 5.44 | 47000 | 6.4572 |
| 0.6786 | 5.56 | 48000 | 6.5462 |
| 0.7272 | 5.67 | 49000 | 6.2406 |
| 0.6778 | 5.79 | 50000 | 6.4727 |
| 0.6446 | 5.9 | 51000 | 6.3170 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.8.0+cu101
- Datasets 1.11.0
- Tokenizers 0.10.3
|
tog/gpt-j-6B-8bit | 6d6d00abc27670929496266c17ca8d5189ccf88f | 2022-01-25T20:12:21.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | tog | null | tog/gpt-j-6B-8bit | 14 | null | transformers | 9,895 | Entry not found |
transformersbook/xlm-roberta-base-finetuned-panx-all | 39aaafd06a441010978aa3b101af7c48cd37c26a | 2022-06-25T09:44:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | transformersbook | null | transformersbook/xlm-roberta-base-finetuned-panx-all | 14 | 2 | transformers | 9,896 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
datasets:
- wikiann
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: wikiann
type: wikiann
config: en
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.843189280620875
verified: true
- name: Precision
type: precision
value: 0.8410061269097046
verified: true
- name: Recall
type: recall
value: 0.8568527450211155
verified: true
- name: F1
type: f1
value: 0.8488554853827908
verified: true
- name: loss
type: loss
value: 0.6632214784622192
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1739
- F1: 0.8581
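For quick inference, the checkpoint works with the token-classification pipeline (a minimal sketch; the example sentence is illustrative, and the label set is the WikiANN scheme: LOC, PER, ORG):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean ist ein Informatiker bei Google in Kalifornien"))
```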
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2912 | 1.0 | 835 | 0.1883 | 0.8238 |
| 0.1548 | 2.0 | 1670 | 0.1738 | 0.8480 |
| 0.101 | 3.0 | 2505 | 0.1739 | 0.8581 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
vitouphy/wav2vec2-xls-r-1b-khmer | dece104b52fb83d85cb945364c477e7a08dd9d06 | 2022-05-16T16:04:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"km",
"dataset:openslr",
"transformers",
"openslr",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | vitouphy | null | vitouphy/wav2vec2-xls-r-1b-khmer | 14 | 1 | transformers | 9,897 | ---
language:
- km
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr
- robust-speech-event
- km
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- openslr
model-index:
- name: wav2vec2-xls-r-1b-km
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR km
type: openslr
args: km
metrics:
- name: Test WER
type: wer
value: 32.13
- name: Test CER
type: cer
value: 9.35
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: km
metrics:
- name: Test WER
type: wer
value: 32.13
- name: Test CER
type: cer
value: 9.35
---
# wav2vec2-xls-r-1b-km
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the openslr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4239
- Wer: 0.4221
# Evaluation results on OpenSLR "test" (self-split 10%) (Running ./eval.py):
- WER: 0.4490281634272114
- CER: 0.12198285179047481
# Evaluation results on OpenSLR "test" with LM ngram (self-split 10%) (Running ./eval.py):
- WER: 0.32130107100357
- CER: 0.09345053678218891
# Note
- Since this dataset is small (4 hours of voice recordings), we decided not to train for too long, to avoid overfitting and under-generalization.
- This model performs worse than its 300M-parameter variant; perhaps we did not explore the hyperparameters enough.
## Installation
Install the following libraries on top of Hugging Face Transformers for language model support.
```
pip install pyctcdecode
pip install https://github.com/kpu/kenlm/archive/master.zip
```
## Usage
**Approach 1:** Using HuggingFace's pipeline, this will cover everything end-to-end from raw audio input to text output.
```python
from transformers import pipeline
# Load the model
pipe = pipeline(model="vitouphy/wav2vec2-xls-r-300m-khmer")
# Process raw audio
output = pipe("sound_file.wav", chunk_length_s=10, stride_length_s=(4, 2))
```
**Approach 2:** A more customizable way to predict phonemes.
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-khmer")
model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-khmer")
# Read and process the input
speech_array, sampling_rate = librosa.load("sound_file.wav", sr=16_000)
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, axis=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
print(predicted_sentences)
```
## Intended uses & limitations
The data used for this model is only around 4 hours of recordings.
- We split it 80/10/10; hence, the training portion is only 3.2 hours, which is very small.
- Yet, its performance is not too bad; quite interesting for such a small dataset, actually. You can try it out.
- Its limitations are:
  - Rare characters, e.g. ឬស្សី ឪឡឹក
  - Speech needs to be clear and articulate.
- More data covering more vocabulary and characters may help improve this system.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 75
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5671 | 5.47 | 400 | 12.0218 | 1.0 |
| 3.5159 | 10.95 | 800 | 10.6337 | 1.0 |
| 2.4543 | 16.43 | 1200 | 1.8256 | 0.9839 |
| 1.9437 | 21.91 | 1600 | 1.1237 | 0.9173 |
| 1.696 | 27.39 | 2000 | 0.8246 | 0.7700 |
| 1.5342 | 32.87 | 2400 | 0.6433 | 0.6594 |
| 1.4509 | 38.35 | 2800 | 0.5500 | 0.5787 |
| 1.3478 | 43.83 | 3200 | 0.5070 | 0.4907 |
| 1.3096 | 49.31 | 3600 | 0.4692 | 0.4726 |
| 1.2532 | 54.79 | 4000 | 0.4448 | 0.4479 |
| 1.2291 | 60.27 | 4400 | 0.4374 | 0.4366 |
| 1.196 | 65.75 | 4800 | 0.4314 | 0.4310 |
| 1.1862 | 71.23 | 5200 | 0.4239 | 0.4221 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
wicharnkeisei/thai-xlm-roberta-base-squad2 | b2bff9dd3cf84f54ac057e37d1e6830dc91dad44 | 2021-11-07T08:32:46.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"th",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | wicharnkeisei | null | wicharnkeisei/thai-xlm-roberta-base-squad2 | 14 | null | transformers | 9,898 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
language: th
model-index:
- name: thai-xlm-roberta-base-squad2
results: []
widget:
- text: "สราวุธ มาตรทอง เข้าสู่วงการบันเทิงเมื่อปีอะไร"
context: "สราวุธ มาตรทอง (ชื่อเล่น: อ้น เกิดเมื่อวันที่ 2 ตุลาคม พ.ศ. 2519) เป็นนักแสดงชาวไทย จบการศึกษาจากมหาวิทยาลัยราชภัฏพระนค เข้าสู่วงการบันเทิงเมื่อปี พ.ศ. 2538 จากการ ชักชวนของ กมล ภู่วัฒนวนิชย์ แห่งบริษัทบรอดคาซท์ ไทยเทเลวิชั่น มีผลงานแสดงชิ้นแรกจาก ใส่ไข่ อะไรเอ่ย, 6/16 ร้ายบริสุทธิ์ และมีผลงานสร้างชื่อคือละครเรื่อง ฉลุย และ น้ำใสใจจริง นอกจากนี้ยังได้ทำอัลบั้มประกอบละคร ฉลุย คู่กับ ทีน สราวุฒิ พุ่มทอง มีผลงานภาพยนตร์เรื่อง ความรักครั้งสุดท้าย (2546) เคยได้รับการเสนอชื่อเข้าชิงรางวัลภาพยนตร์ไทย ชมรมวิจารณ์บันเทิง ครั้งที่ 12 สาขานักแสดงสมทบยอดเยี่ยมจากภาพยนตร์เรื่องนี้ และยังมีละครซิตคอมเรื่อง เทวดาสาธุ นอกจากนี้ยังเคยเป็นดีเจให้กับ สถานีวิทยุ เรดิโอโหวต แซตเทิลไลท์ 93.5 MHz และยังเป็นพิธกร รายการเวเอฟเวอร์ ออกอากาศทางช่อง 3 ในวันเสาร์ เวลา 07.55-08.20 น. ในเดือนตุลาคม พ.ศ. 2551 เจ้าตัวได้ยอมรับว่าคลิปหลุดทางอินเทอร์เน็ต ที่มีเพศสัมพันธ์กับหญิงสาวเป็นเจ้าตัวจริง คนที่เอาไปลงน่าจะเป็นคนที่พบโทรศัพท์ของตนเอง"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thai-squad
This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) on a Thai dataset from [iApp Technology Co., Ltd.](https://github.com/iapp-technology/iapp-wiki-qa-dataset).
## Intended uses & limitations
This model is intended for Thai question answering tasks.
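A minimal usage sketch with the question-answering pipeline, reusing the widget example from this card (the context is shortened here):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="wicharnkeisei/thai-xlm-roberta-base-squad2")
result = qa(
    question="สราวุธ มาตรทอง เข้าสู่วงการบันเทิงเมื่อปีอะไร",
    context="สราวุธ มาตรทอง เป็นนักแสดงชาวไทย เข้าสู่วงการบันเทิงเมื่อปี พ.ศ. 2538",  # shortened context
)
print(result["answer"])
```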
## Training and evaluation data
Trained and evaluated on the [iApp Technology Co., Ltd. wiki QA](https://github.com/iapp-technology/iapp-wiki-qa-dataset) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
## Performance
Evaluated on the SQuAD 1.0 test dataset
```
"exact": 62.51728907330567
"f1": 73.62388955749958
"total": 723
```
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
widyanto/indobert-base-uncased-qa-evaluator | d616a047f94d69a278f0d25b0546a604c4a93938 | 2021-08-24T00:51:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | widyanto | null | widyanto/indobert-base-uncased-qa-evaluator | 14 | null | transformers | 9,899 | Entry not found |