modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
imadd/segformer-b0-finetuned-segments-water-2 | e6efa638af3065dc1e10dc74934aa0855a2dce01 | 2022-07-07T18:05:48.000Z | [
"pytorch",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | imadd | null | imadd/segformer-b0-finetuned-segments-water-2 | 30 | null | transformers | 7,200 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-water-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-water-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the imadd/water_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5845
- Mean Iou: nan
- Mean Accuracy: nan
- Overall Accuracy: nan
- Per Category Iou: [nan, nan]
- Per Category Accuracy: [nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
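A minimal usage sketch, assuming the standard 🤗 Transformers image-segmentation pipeline and that the repository ships the preprocessor configuration; the image path below is a placeholder:
```python
from transformers import pipeline
# Load the fine-tuned SegFormer checkpoint through the generic
# image-segmentation pipeline (assumes a preprocessor config is present).
segmenter = pipeline(
    "image-segmentation",
    model="imadd/segformer-b0-finetuned-segments-water-2",
)
# "example.jpg" is a placeholder for any RGB image containing water.
for segment in segmenter("example.jpg"):
    print(segment["label"], segment["score"])
```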
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 0.5241 | 6.67 | 20 | 0.5845 | nan | nan | nan | [nan, nan] | [nan, nan] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
xyma/PROP-wiki | 0e685864e4a5efd3b000f87e8746c0d60b48f28f | 2022-07-12T13:49:52.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:wikipedia",
"arxiv:2010.10137",
"transformers",
"PROP",
"Pretrain4IR",
"fill-mask",
"license:apache-2.0"
] | fill-mask | false | xyma | null | xyma/PROP-wiki | 30 | null | transformers | 7,201 | ---
language: en
tags:
- PROP
- Pretrain4IR
- fill-mask
license: apache-2.0
datasets:
- wikipedia
---
# PROP-wiki
**PROP**, **P**re-training with **R**epresentative w**O**rds **P**rediction, is a new pre-training method tailored for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the “ideal” document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. The full paper can be found [here](https://arxiv.org/pdf/2010.10137.pdf).
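A minimal loading sketch, assuming the checkpoint behaves like a standard BERT encoder that is then fine-tuned for downstream ad-hoc retrieval; the query/document pair below is illustrative only:
```python
import torch
from transformers import AutoTokenizer, AutoModel
# Load the PROP-pretrained encoder like any BERT checkpoint.
tokenizer = AutoTokenizer.from_pretrained("xyma/PROP-wiki")
model = AutoModel.from_pretrained("xyma/PROP-wiki")
# Encode an illustrative query-document pair; a ranking head would normally
# be added on top of the [CLS] representation during fine-tuning.
inputs = tokenizer(
    "what is the query likelihood model",
    "The query likelihood model ranks documents by the probability of generating the query.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    cls_embedding = model(**inputs).last_hidden_state[:, 0]
print(cls_embedding.shape)  # e.g. torch.Size([1, 768]) for a base-sized encoder
```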
# Citation
If you find our work useful, please consider citing our paper:
```bibtex
@inproceedings{DBLP:conf/wsdm/MaGZFJC21,
author = {Xinyu Ma and
Jiafeng Guo and
Ruqing Zhang and
Yixing Fan and
Xiang Ji and
Xueqi Cheng},
editor = {Liane Lewin{-}Eytan and
David Carmel and
Elad Yom{-}Tov and
Eugene Agichtein and
Evgeniy Gabrilovich},
title = {{PROP:} Pre-training with Representative Words Prediction for Ad-hoc
Retrieval},
booktitle = {{WSDM} '21, The Fourteenth {ACM} International Conference on Web Search
and Data Mining, Virtual Event, Israel, March 8-12, 2021},
pages = {283--291},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3437963.3441777},
doi = {10.1145/3437963.3441777},
timestamp = {Wed, 07 Apr 2021 16:17:44 +0200},
biburl = {https://dblp.org/rec/conf/wsdm/MaGZFJC21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
gaochang/tbsz-picard | b623151f4828c4be73f13c922e58c0f16b548dca | 2022-07-14T09:36:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | gaochang | null | gaochang/tbsz-picard | 30 | null | transformers | 7,202 | Entry not found |
Duplets/distilbert-base-uncased-finetuned-squad | 4747c44a9e4d9ef40951a6875c7bad19792de301 | 2022-07-21T00:02:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Duplets | null | Duplets/distilbert-base-uncased-finetuned-squad | 30 | null | transformers | 7,203 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1487
## Model description
F1 score: 85.1341
## Intended uses & limitations
More information needed
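A minimal extractive question-answering sketch, assuming the standard Transformers pipeline; the question/context pair is illustrative only:
```python
from transformers import pipeline
# Load the fine-tuned DistilBERT QA checkpoint.
qa = pipeline(
    "question-answering",
    model="Duplets/distilbert-base-uncased-finetuned-squad",
)
# The answer span is extracted from the provided context.
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```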
## Training and evaluation data
SQUAD 2.0
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2118 | 1.0 | 5533 | 1.1484 |
| 0.9424 | 2.0 | 11066 | 1.1121 |
| 0.7441 | 3.0 | 16599 | 1.1487 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
erickdp/sentiment-analysisi-distillbert-es | e569ee3f6a396bbf41cf024067d50a902167d38f | 2022-07-21T23:28:50.000Z | [
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:erickdp/autotrain-data-sentiment-analysis-distillbert-es",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | erickdp | null | erickdp/sentiment-analysisi-distillbert-es | 30 | null | transformers | 7,204 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- erickdp/autotrain-data-sentiment-analysis-distillbert-es
co2_eq_emissions: 4.070674106910222
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1164342966
- CO2 Emissions (in grams): 4.070674106910222
## Validation Metrics
- Loss: 0.5068035125732422
- Accuracy: 0.8406482106684673
- Macro F1: 0.8355269443836222
- Micro F1: 0.8406482106684673
- Weighted F1: 0.8423675674232264
- Macro Precision: 0.8364960686615248
- Micro Precision: 0.8406482106684673
- Weighted Precision: 0.8455742631643787
- Macro Recall: 0.8361938729437037
- Micro Recall: 0.8406482106684673
- Weighted Recall: 0.8406482106684673
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/erickdp/autotrain-sentiment-analysis-distillbert-es-1164342966
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("erickdp/autotrain-sentiment-analysis-distillbert-es-1164342966", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("erickdp/autotrain-sentiment-analysis-distillbert-es-1164342966", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
BoxCrab/DialoGPT-unk-AR | 6044b9496ce7249329c4a1479d75d838365bb10a | 2022-07-23T08:13:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BoxCrab | null | BoxCrab/DialoGPT-unk-AR | 30 | null | transformers | 7,205 | ---
tags:
- conversational
---
# homestuck DialoGPT Model |
Giuliano/vit-lung-cancer | 452db03c5e434110f82cd92cee390f310147251e | 2022-07-26T05:02:06.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | Giuliano | null | Giuliano/vit-lung-cancer | 30 | null | transformers | 7,206 | Entry not found |
arminmehrabian/all-MiniLM-L6-v2-all-MiniLM-L6-v2-agu | 0e521a4dec7004de7542daed622d95bd7d18d6e1 | 2022-07-29T07:33:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | arminmehrabian | null | arminmehrabian/all-MiniLM-L6-v2-all-MiniLM-L6-v2-agu | 30 | null | transformers | 7,207 | Entry not found |
Elluran/Hate_speech_detector | be14317cb136a9133e6e60e838a7d859656fb7b6 | 2021-05-20T11:49:13.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Elluran | null | Elluran/Hate_speech_detector | 29 | null | transformers | 7,208 | Entry not found |
Graphcore/bert-large-uncased | a9c367bbc13c994d658ac8f6e82ab27a4a03d2f2 | 2022-05-25T18:30:21.000Z | [
"pytorch",
"bert",
"dataset:Graphcore/wikipedia-bert-128",
"dataset:Graphcore/wikipedia-bert-512",
"arxiv:1904.00962",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | Graphcore | null | Graphcore/bert-large-uncased | 29 | 4 | transformers | 7,209 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Graphcore/wikipedia-bert-128
- Graphcore/wikipedia-bert-512
model-index:
- name: Graphcore/bert-large-uncased
results: []
---
# Graphcore/bert-large-uncased
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset, and it allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and Masked LM.
It was trained with two pretraining objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Pre-trained representations reduce the need for heavily engineered task-specific architectures, and BERT achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a pre-trained BERT-Large trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets.
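A minimal loading sketch for further fine-tuning, assuming the stock `bert-large-uncased` tokenizer (as used in the pretraining commands below) and plain PyTorch execution rather than IPU-specific execution via `optimum-graphcore`:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# The pretraining commands below pass --tokenizer_name bert-large-uncased,
# so the stock tokenizer is assumed here.
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForMaskedLM.from_pretrained("Graphcore/bert-large-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)
```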
## Training and evaluation data
Trained on wikipedia datasets:
- [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128)
- [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512)
## Training procedure
Trained with the MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962).
Trained on 64 Graphcore Mk2 IPUs using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore)
Command lines:
Phase 1:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-large-uncased \
--tokenizer_name bert-large-uncased \
--ipu_config_name Graphcore/bert-large-ipu \
--dataset_name Graphcore/wikipedia-bert-128 \
--do_train \
--logging_steps 5 \
--max_seq_length 128 \
--max_steps 10550 \
--is_already_preprocessed \
--dataloader_num_workers 64 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 512 \
--pod_type pod64 \
--learning_rate 0.006 \
--lr_scheduler_type linear \
--loss_scaling 32768 \
--weight_decay 0.01 \
--warmup_ratio 0.28 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
--output_dir output-pretrain-bert-large-phase1
```
Phase 2:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-large-uncased \
--tokenizer_name bert-large-uncased \
--model_name_or_path ./output-pretrain-bert-large-phase1 \
--ipu_config_name Graphcore/bert-large-ipu \
--dataset_name Graphcore/wikipedia-bert-512 \
--do_train \
--logging_steps 5 \
--max_seq_length 512 \
--max_steps 2038 \
--is_already_preprocessed \
--dataloader_num_workers 96 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 512 \
--pod_type pod64 \
--learning_rate 0.002828 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.128 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
--output_dir output-pretrain-bert-large-phase2
```
### Training hyperparameters
The following hyperparameters were used during phase 1 training:
- learning_rate: 0.006
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 65536
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.28
- training_steps: 10550
- training precision: Mixed Precision
The following hyperparameters were used during phase 2 training:
- learning_rate: 0.002828
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 16384
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.128
- training_steps: 2038
- training precision: Mixed Precision
### Training results
```
train/epoch: 2.04
train/global_step: 2038
train/loss: 1.2002
train/train_runtime: 12022.3897
train/train_steps_per_second: 0.17
train/train_samples_per_second: 2777.367
```
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-ee-en | a69e3d990dc8b84d8d727b9502c20511a50233ed | 2021-09-09T21:33:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ee",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ee-en | 29 | null | transformers | 7,210 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-en
* source languages: ee
* target languages: en
* OPUS readme: [ee-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-en/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.en | 39.3 | 0.556 |
| Tatoeba.ee.en | 21.2 | 0.569 |
|
Helsinki-NLP/opus-mt-en-sg | d8ca39bde6ae7caa48ac06aaa90045cfdf24f8fd | 2021-09-09T21:39:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sg | 29 | null | transformers | 7,211 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sg
* source languages: en
* target languages: sg
* OPUS readme: [en-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sg | 37.0 | 0.544 |
|
Helsinki-NLP/opus-mt-es-af | dbd402381410acd3c00ab076e11402f2b2bb176a | 2021-01-18T08:21:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"af",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-af | 29 | null | transformers | 7,212 | ---
language:
- es
- af
tags:
- translation
license: apache-2.0
---
### spa-afr
* source group: Spanish
* target group: Afrikaans
* OPUS readme: [spa-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-afr/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.afr | 55.0 | 0.718 |
### System Info:
- hf_name: spa-afr
- source_languages: spa
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'af']
- src_constituents: {'spa'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: afr
- short_pair: es-af
- chrF2_score: 0.718
- bleu: 55.0
- brevity_penalty: 0.9740000000000001
- ref_len: 3044.0
- src_name: Spanish
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: af
- prefer_old: False
- long_pair: spa-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-csn | e98aef098fb92058c3dce5d306020228ad0f4280 | 2021-09-09T21:41:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"csn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-csn | 29 | null | transformers | 7,213 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-csn
* source languages: es
* target languages: csn
* OPUS readme: [es-csn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-csn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.csn | 87.8 | 0.901 |
|
Helsinki-NLP/opus-mt-itc-itc | 43c984c16f7004fa69db29b2937644c0178c9568 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-itc-itc | 29 | null | transformers | 7,214 | ---
language:
- it
- ca
- rm
- es
- ro
- gl
- sc
- co
- wa
- pt
- oc
- an
- id
- fr
- ht
- itc
tags:
- translation
license: apache-2.0
---
### itc-itc
* source group: Italic languages
* target group: Italic languages
* OPUS readme: [itc-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md)
* model: transformer
* source language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn
* target language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip)
* test set translations: [opus-2020-07-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt)
* test set scores: [opus-2020-07-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.eval.txt)
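A minimal usage sketch with the MarianMT classes, prepending the required target-language token (here `>>spa<<` for Spanish; the Italian example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-itc-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# The target language is selected by the sentence-initial >>id<< token.
src = [">>spa<< La vita è bella."]  # Italian source, Spanish target
batch = tokenizer(src, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```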
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.arg-fra.arg.fra | 40.8 | 0.501 |
| Tatoeba-test.arg-spa.arg.spa | 59.9 | 0.739 |
| Tatoeba-test.ast-fra.ast.fra | 45.4 | 0.628 |
| Tatoeba-test.ast-por.ast.por | 100.0 | 1.000 |
| Tatoeba-test.ast-spa.ast.spa | 46.8 | 0.636 |
| Tatoeba-test.cat-fra.cat.fra | 51.6 | 0.689 |
| Tatoeba-test.cat-ita.cat.ita | 49.2 | 0.699 |
| Tatoeba-test.cat-por.cat.por | 48.0 | 0.688 |
| Tatoeba-test.cat-ron.cat.ron | 35.4 | 0.719 |
| Tatoeba-test.cat-spa.cat.spa | 69.0 | 0.826 |
| Tatoeba-test.cos-fra.cos.fra | 22.3 | 0.383 |
| Tatoeba-test.cos-pms.cos.pms | 3.4 | 0.199 |
| Tatoeba-test.egl-fra.egl.fra | 9.5 | 0.283 |
| Tatoeba-test.egl-ita.egl.ita | 3.0 | 0.206 |
| Tatoeba-test.egl-spa.egl.spa | 3.7 | 0.194 |
| Tatoeba-test.fra-arg.fra.arg | 3.8 | 0.090 |
| Tatoeba-test.fra-ast.fra.ast | 25.9 | 0.457 |
| Tatoeba-test.fra-cat.fra.cat | 42.2 | 0.637 |
| Tatoeba-test.fra-cos.fra.cos | 3.3 | 0.185 |
| Tatoeba-test.fra-egl.fra.egl | 2.2 | 0.120 |
| Tatoeba-test.fra-frm.fra.frm | 1.0 | 0.191 |
| Tatoeba-test.fra-gcf.fra.gcf | 0.2 | 0.099 |
| Tatoeba-test.fra-glg.fra.glg | 40.5 | 0.625 |
| Tatoeba-test.fra-hat.fra.hat | 22.6 | 0.472 |
| Tatoeba-test.fra-ita.fra.ita | 46.7 | 0.679 |
| Tatoeba-test.fra-lad.fra.lad | 15.9 | 0.345 |
| Tatoeba-test.fra-lat.fra.lat | 2.9 | 0.247 |
| Tatoeba-test.fra-lij.fra.lij | 1.0 | 0.201 |
| Tatoeba-test.fra-lld.fra.lld | 1.1 | 0.257 |
| Tatoeba-test.fra-lmo.fra.lmo | 1.2 | 0.241 |
| Tatoeba-test.fra-msa.fra.msa | 0.4 | 0.111 |
| Tatoeba-test.fra-oci.fra.oci | 7.3 | 0.322 |
| Tatoeba-test.fra-pap.fra.pap | 69.8 | 0.912 |
| Tatoeba-test.fra-pcd.fra.pcd | 0.6 | 0.144 |
| Tatoeba-test.fra-pms.fra.pms | 1.0 | 0.181 |
| Tatoeba-test.fra-por.fra.por | 39.7 | 0.619 |
| Tatoeba-test.fra-roh.fra.roh | 5.7 | 0.286 |
| Tatoeba-test.fra-ron.fra.ron | 36.4 | 0.591 |
| Tatoeba-test.fra-scn.fra.scn | 2.1 | 0.101 |
| Tatoeba-test.fra-spa.fra.spa | 47.5 | 0.670 |
| Tatoeba-test.fra-srd.fra.srd | 2.8 | 0.306 |
| Tatoeba-test.fra-vec.fra.vec | 3.0 | 0.345 |
| Tatoeba-test.fra-wln.fra.wln | 3.5 | 0.212 |
| Tatoeba-test.frm-fra.frm.fra | 11.4 | 0.472 |
| Tatoeba-test.gcf-fra.gcf.fra | 7.1 | 0.267 |
| Tatoeba-test.gcf-lad.gcf.lad | 0.0 | 0.170 |
| Tatoeba-test.gcf-por.gcf.por | 0.0 | 0.230 |
| Tatoeba-test.gcf-spa.gcf.spa | 13.4 | 0.314 |
| Tatoeba-test.glg-fra.glg.fra | 54.7 | 0.702 |
| Tatoeba-test.glg-ita.glg.ita | 40.1 | 0.661 |
| Tatoeba-test.glg-por.glg.por | 57.6 | 0.748 |
| Tatoeba-test.glg-spa.glg.spa | 70.0 | 0.817 |
| Tatoeba-test.hat-fra.hat.fra | 14.2 | 0.419 |
| Tatoeba-test.hat-spa.hat.spa | 17.9 | 0.449 |
| Tatoeba-test.ita-cat.ita.cat | 51.0 | 0.693 |
| Tatoeba-test.ita-egl.ita.egl | 1.1 | 0.114 |
| Tatoeba-test.ita-fra.ita.fra | 58.2 | 0.727 |
| Tatoeba-test.ita-glg.ita.glg | 41.7 | 0.652 |
| Tatoeba-test.ita-lad.ita.lad | 17.5 | 0.419 |
| Tatoeba-test.ita-lat.ita.lat | 7.1 | 0.294 |
| Tatoeba-test.ita-lij.ita.lij | 1.0 | 0.208 |
| Tatoeba-test.ita-msa.ita.msa | 0.9 | 0.115 |
| Tatoeba-test.ita-oci.ita.oci | 12.3 | 0.378 |
| Tatoeba-test.ita-pms.ita.pms | 1.6 | 0.182 |
| Tatoeba-test.ita-por.ita.por | 44.8 | 0.665 |
| Tatoeba-test.ita-ron.ita.ron | 43.3 | 0.653 |
| Tatoeba-test.ita-spa.ita.spa | 56.6 | 0.733 |
| Tatoeba-test.ita-vec.ita.vec | 2.0 | 0.187 |
| Tatoeba-test.lad-fra.lad.fra | 30.4 | 0.458 |
| Tatoeba-test.lad-gcf.lad.gcf | 0.0 | 0.163 |
| Tatoeba-test.lad-ita.lad.ita | 12.3 | 0.426 |
| Tatoeba-test.lad-lat.lad.lat | 1.6 | 0.178 |
| Tatoeba-test.lad-por.lad.por | 8.8 | 0.394 |
| Tatoeba-test.lad-ron.lad.ron | 78.3 | 0.717 |
| Tatoeba-test.lad-spa.lad.spa | 28.3 | 0.531 |
| Tatoeba-test.lat-fra.lat.fra | 9.4 | 0.300 |
| Tatoeba-test.lat-ita.lat.ita | 20.0 | 0.421 |
| Tatoeba-test.lat-lad.lat.lad | 3.8 | 0.173 |
| Tatoeba-test.lat-por.lat.por | 13.0 | 0.354 |
| Tatoeba-test.lat-ron.lat.ron | 14.0 | 0.358 |
| Tatoeba-test.lat-spa.lat.spa | 21.8 | 0.436 |
| Tatoeba-test.lij-fra.lij.fra | 13.8 | 0.346 |
| Tatoeba-test.lij-ita.lij.ita | 14.7 | 0.442 |
| Tatoeba-test.lld-fra.lld.fra | 18.8 | 0.428 |
| Tatoeba-test.lld-spa.lld.spa | 11.1 | 0.377 |
| Tatoeba-test.lmo-fra.lmo.fra | 11.0 | 0.329 |
| Tatoeba-test.msa-fra.msa.fra | 0.8 | 0.129 |
| Tatoeba-test.msa-ita.msa.ita | 1.1 | 0.138 |
| Tatoeba-test.msa-msa.msa.msa | 19.1 | 0.453 |
| Tatoeba-test.msa-pap.msa.pap | 0.0 | 0.037 |
| Tatoeba-test.msa-por.msa.por | 2.4 | 0.155 |
| Tatoeba-test.msa-ron.msa.ron | 1.2 | 0.129 |
| Tatoeba-test.msa-spa.msa.spa | 1.0 | 0.139 |
| Tatoeba-test.multi.multi | 40.8 | 0.599 |
| Tatoeba-test.mwl-por.mwl.por | 35.4 | 0.561 |
| Tatoeba-test.oci-fra.oci.fra | 24.5 | 0.467 |
| Tatoeba-test.oci-ita.oci.ita | 23.3 | 0.493 |
| Tatoeba-test.oci-spa.oci.spa | 26.1 | 0.505 |
| Tatoeba-test.pap-fra.pap.fra | 31.0 | 0.629 |
| Tatoeba-test.pap-msa.pap.msa | 0.0 | 0.051 |
| Tatoeba-test.pcd-fra.pcd.fra | 13.8 | 0.381 |
| Tatoeba-test.pcd-spa.pcd.spa | 2.6 | 0.227 |
| Tatoeba-test.pms-cos.pms.cos | 3.4 | 0.217 |
| Tatoeba-test.pms-fra.pms.fra | 13.4 | 0.347 |
| Tatoeba-test.pms-ita.pms.ita | 13.0 | 0.373 |
| Tatoeba-test.pms-spa.pms.spa | 13.1 | 0.374 |
| Tatoeba-test.por-ast.por.ast | 100.0 | 1.000 |
| Tatoeba-test.por-cat.por.cat | 45.1 | 0.673 |
| Tatoeba-test.por-fra.por.fra | 52.5 | 0.698 |
| Tatoeba-test.por-gcf.por.gcf | 16.0 | 0.128 |
| Tatoeba-test.por-glg.por.glg | 57.5 | 0.750 |
| Tatoeba-test.por-ita.por.ita | 50.1 | 0.710 |
| Tatoeba-test.por-lad.por.lad | 15.7 | 0.341 |
| Tatoeba-test.por-lat.por.lat | 11.1 | 0.362 |
| Tatoeba-test.por-msa.por.msa | 2.4 | 0.136 |
| Tatoeba-test.por-mwl.por.mwl | 30.5 | 0.559 |
| Tatoeba-test.por-roh.por.roh | 0.0 | 0.132 |
| Tatoeba-test.por-ron.por.ron | 40.0 | 0.632 |
| Tatoeba-test.por-spa.por.spa | 58.6 | 0.756 |
| Tatoeba-test.roh-fra.roh.fra | 23.1 | 0.564 |
| Tatoeba-test.roh-por.roh.por | 21.4 | 0.347 |
| Tatoeba-test.roh-spa.roh.spa | 19.8 | 0.489 |
| Tatoeba-test.ron-cat.ron.cat | 59.5 | 0.854 |
| Tatoeba-test.ron-fra.ron.fra | 47.4 | 0.647 |
| Tatoeba-test.ron-ita.ron.ita | 45.7 | 0.683 |
| Tatoeba-test.ron-lad.ron.lad | 44.2 | 0.712 |
| Tatoeba-test.ron-lat.ron.lat | 14.8 | 0.449 |
| Tatoeba-test.ron-msa.ron.msa | 1.2 | 0.098 |
| Tatoeba-test.ron-por.ron.por | 42.7 | 0.650 |
| Tatoeba-test.ron-spa.ron.spa | 50.4 | 0.686 |
| Tatoeba-test.scn-fra.scn.fra | 2.4 | 0.180 |
| Tatoeba-test.scn-spa.scn.spa | 5.1 | 0.212 |
| Tatoeba-test.spa-arg.spa.arg | 10.8 | 0.267 |
| Tatoeba-test.spa-ast.spa.ast | 24.6 | 0.514 |
| Tatoeba-test.spa-cat.spa.cat | 61.6 | 0.783 |
| Tatoeba-test.spa-egl.spa.egl | 2.2 | 0.106 |
| Tatoeba-test.spa-fra.spa.fra | 51.1 | 0.683 |
| Tatoeba-test.spa-gcf.spa.gcf | 7.8 | 0.067 |
| Tatoeba-test.spa-glg.spa.glg | 62.8 | 0.776 |
| Tatoeba-test.spa-hat.spa.hat | 16.6 | 0.398 |
| Tatoeba-test.spa-ita.spa.ita | 51.8 | 0.718 |
| Tatoeba-test.spa-lad.spa.lad | 14.6 | 0.393 |
| Tatoeba-test.spa-lat.spa.lat | 21.5 | 0.486 |
| Tatoeba-test.spa-lld.spa.lld | 2.0 | 0.222 |
| Tatoeba-test.spa-msa.spa.msa | 0.8 | 0.113 |
| Tatoeba-test.spa-oci.spa.oci | 10.3 | 0.377 |
| Tatoeba-test.spa-pcd.spa.pcd | 0.9 | 0.115 |
| Tatoeba-test.spa-pms.spa.pms | 1.5 | 0.194 |
| Tatoeba-test.spa-por.spa.por | 49.4 | 0.698 |
| Tatoeba-test.spa-roh.spa.roh | 4.6 | 0.261 |
| Tatoeba-test.spa-ron.spa.ron | 39.1 | 0.618 |
| Tatoeba-test.spa-scn.spa.scn | 2.0 | 0.113 |
| Tatoeba-test.spa-wln.spa.wln | 8.7 | 0.295 |
| Tatoeba-test.srd-fra.srd.fra | 6.7 | 0.369 |
| Tatoeba-test.vec-fra.vec.fra | 59.9 | 0.608 |
| Tatoeba-test.vec-ita.vec.ita | 14.2 | 0.405 |
| Tatoeba-test.wln-fra.wln.fra | 8.9 | 0.344 |
| Tatoeba-test.wln-spa.wln.spa | 9.6 | 0.298 |
### System Info:
- hf_name: itc-itc
- source_languages: itc
- target_languages: itc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt
- src_alpha3: itc
- tgt_alpha3: itc
- short_pair: itc-itc
- chrF2_score: 0.599
- bleu: 40.8
- brevity_penalty: 0.968
- ref_len: 77448.0
- src_name: Italic languages
- tgt_name: Italic languages
- train_date: 2020-07-07
- src_alpha2: itc
- tgt_alpha2: itc
- prefer_old: False
- long_pair: itc-itc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
IlyaGusev/xlm_roberta_large_headline_cause_simple | ea1785ed65ee94eb6feae6695d00a00676d7ea55 | 2022-07-13T15:36:36.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru",
"en",
"dataset:IlyaGusev/headline_cause",
"arxiv:2108.12626",
"transformers",
"xlm-roberta-large",
"license:apache-2.0"
] | text-classification | false | IlyaGusev | null | IlyaGusev/xlm_roberta_large_headline_cause_simple | 29 | null | transformers | 7,215 | ---
language:
- ru
- en
tags:
- xlm-roberta-large
datasets:
- IlyaGusev/headline_cause
license: apache-2.0
widget:
- text: "Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку"
---
# XLM-RoBERTa HeadlineCause Simple
## Model description
This model was trained to predict the presence of causal relations between two headlines. This model is for the Simple task with 3 possible labels: A causes B, B causes A, no causal relation. English and Russian languages are supported.
You can use the hosted inference API to infer a label for a headline pair. To do this, you should separate the headlines with the ```</s>``` token.
For example:
```
Песков опроверг свой перевод на удаленку</s>Дмитрий Песков перешел на удаленку
```
## Intended uses & limitations
#### How to use
```python
from tqdm.notebook import tqdm
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
def get_batch(data, batch_size):
start_index = 0
while start_index < len(data):
end_index = start_index + batch_size
batch = data[start_index:end_index]
yield batch
start_index = end_index
def pipe_predict(data, pipe, batch_size=64):
raw_preds = []
for batch in tqdm(get_batch(data, batch_size)):
raw_preds += pipe(batch)
return raw_preds
MODEL_NAME = TOKENIZER_NAME = "IlyaGusev/xlm_roberta_large_headline_cause_simple"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, framework="pt", return_all_scores=True)
texts = [
(
"Judge issues order to allow indoor worship in NC churches",
"Some local churches resume indoor services after judge lifted NC governor’s restriction"
),
(
"Gov. Kevin Stitt defends $2 million purchase of malaria drug touted by Trump",
"Oklahoma spent $2 million on malaria drug touted by Trump"
),
(
"Песков опроверг свой перевод на удаленку",
"Дмитрий Песков перешел на удаленку"
)
]
pipe_predict(texts, pipe)
```
#### Limitations and bias
The models are intended to be used on news headlines. No other limitations are known.
## Training data
* HuggingFace dataset: [IlyaGusev/headline_cause](https://huggingface.co/datasets/IlyaGusev/headline_cause)
* GitHub: [IlyaGusev/HeadlineCause](https://github.com/IlyaGusev/HeadlineCause)
## Training procedure
* Notebook: [HeadlineCause](https://colab.research.google.com/drive/1NAnD0OJ0TnYCJRsHpYUyYkjr_yi8ObcA)
* Stand-alone script: [train.py](https://github.com/IlyaGusev/HeadlineCause/blob/main/headline_cause/train.py)
## Eval results
Evaluation results can be found in the [arxiv paper](https://arxiv.org/pdf/2108.12626.pdf).
### BibTeX entry and citation info
```bibtex
@misc{gusev2021headlinecause,
title={HeadlineCause: A Dataset of News Headlines for Detecting Causalities},
author={Ilya Gusev and Alexey Tikhonov},
year={2021},
eprint={2108.12626},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
M-FAC/bert-tiny-finetuned-mrpc | 5186a04d859ccd40aef8b32e8b1e065b0b4f187b | 2021-12-13T08:12:51.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
] | text-classification | false | M-FAC | null | M-FAC/bert-tiny-finetuned-mrpc | 29 | null | transformers | 7,216 | # BERT-tiny model finetuned with M-FAC
This model was finetuned on the MRPC dataset with the state-of-the-art second-order optimizer M-FAC.
Check the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MRPC validation set:
```bash
f1 = 83.12
accuracy = 73.52
```
Mean and standard deviation for 5 runs on MRPC validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 81.68 ± 0.33 | 69.90 ± 0.32 |
| M-FAC | 82.77 ± 0.22 | 72.94 ± 0.37 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
M-FAC/bert-tiny-finetuned-sst2 | 41ad6709ec46b414749b37daf49cf5ca1c7dba7c | 2021-12-13T08:13:48.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
] | text-classification | false | M-FAC | null | M-FAC/bert-tiny-finetuned-sst2 | 29 | null | transformers | 7,217 | # BERT-tiny model finetuned with M-FAC
This model was finetuned on the SST-2 dataset with the state-of-the-art second-order optimizer M-FAC.
Check the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SST-2 validation set:
```bash
accuracy = 83.02
```
Mean and standard deviation for 5 runs on SST-2 validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 80.11 ± 0.65 |
| M-FAC | 81.86 ± 0.76 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name sst2 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 3 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
Madhour/gpt2-eli5 | be12ea44e909ad3f4d1894b90b2ca7041d48ec28 | 2022-01-23T12:00:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:eli5",
"transformers",
"ELI5",
"license:gpl-3.0"
] | text-generation | false | Madhour | null | Madhour/gpt2-eli5 | 29 | null | transformers | 7,218 | ---
language: en
tags:
- ELI5
license: gpl-3.0
datasets:
- eli5
Task: Summarization
widget:
- text: "<|BOS|><|SEP|>Consulting,business,Fraud<|SEP|>"
inference:
parameters:
temperature: 0.9
return_full_text: False
repetition_penalty: 1
---
# Conditional ELI5 Generator
Given a few keywords, it generates an ELI5 question with a corresponding answer.
The model is mainly used for [SeemsPhishy](https://github.com/madhour/seemsphishy) to auto-generate newsletters for phishing/penetration testing.
# How to use
```Python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
from torch import tensor
tokenizer = AutoTokenizer.from_pretrained("Madhour/gpt2-eli5")
model = AutoModelForCausalLM.from_pretrained("Madhour/gpt2-eli5")
prompt = "<|BOS|>" + "I have a question." + "<|SEP|>" + "keyword1,keyword2,keyword3" + "<|SEP|>"
prompt = tensor(tokenizer.encode(prompt)).unsqueeze(0)
text = model.generate(prompt,
do_sample=True,
min_length=50,
max_length=768,
top_k=30,
top_p=0.7,
temperature=0.9,
repetition_penalty=2.0,
num_return_sequences=3)
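# Decoding sketch: convert the generated ids back to text. Whether the custom
# <|BOS|>/<|SEP|> markers are registered as special tokens is an assumption,
# so they may still appear in the decoded output.
for sequence in text:
    print(tokenizer.decode(sequence, skip_special_tokens=True))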
``` |
Malaina/mt5-large-spider | 11ff8f57835c5f238948ec4b811d809c454e4e35 | 2022-02-09T04:33:48.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Malaina | null | Malaina/mt5-large-spider | 29 | null | transformers | 7,219 | Entry not found |
Maltehb/aelaectra-danish-electra-small-uncased | 687bd788e396966d15da24cba4fc1b64fe9c4c07 | 2021-11-23T06:39:20.000Z | [
"pytorch",
"electra",
"pretraining",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"transformers",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"license:mit",
"co2_eq_emissions"
] | null | false | Maltehb | null | Maltehb/aelaectra-danish-electra-small-uncased | 29 | null | transformers | 7,220 | ---
language: "da"
co2_eq_emissions: 4009.5
tags:
- ælæctra
- pytorch
- danish
- ELECTRA-Small
- replaced token detection
license: "mit"
datasets:
- DAGW
metrics:
- f1
---
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance Danish NLP capabilities, and please do note that this GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example on how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased")
```
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased")
```
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ |
SEBIS/legal_t5_small_summ_de | 42d8cb548a1addc92fd43ad8fca86f23783802ea | 2021-06-23T11:21:22.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch",
"dataset:jrc-acquis",
"transformers",
"summarization Deustch model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_de | 29 | null | transformers | 7,221 |
---
language: Deustch
tags:
- summarization Deustch model
datasets:
- jrc-acquis
widget:
- text: "(90/365/EWG) DER RAT DER EUROPÄISCHEN GEMEINSCHAFTEN - gestützt auf den Vertrag zur Gründung der Europäischen Wirtschaftsgemeinschaft, insbesondere auf Artikel 235, auf Vorschlag der Kommission (1), nach Stellungnahme des Europäischen Parlaments (2), nach Stellungnahme des Wirtschafts- und Sozialausschusses (3), in Erwägung nachstehender Gründe: Gemäß Artikel 3 Buchstabe c) des Vertrages umfasst die Tätigkeit der Gemeinschaft, nach Maßgabe des Vertrages, die Beseitigung der Hindernisse für den freien Personenverkehr zwischen den Mitgliedstaaten. Artikel 8a des Vertrages sieht vor, daß der Binnenmarkt bis zum 31. Dezember 1992 zu verwirklichen ist. Der Binnenmarkt umfasst einen Raum ohne Binnengrenzen, in dem der freie Verkehr von Waren, Personen, Dienstleistungen und Kapital gemäß den Bestimmungen des Vertrages gewährleistet ist. Die Artikel 48 und 52 des Vertrages sehen die Freizuegigkeit der Arbeitnehmer und selbständig Erwerbstätigen vor, was ein Recht auf Aufenthalt in dem Mitgliedstaat beinhaltet, in dem sie ihr Berufsleben verbringen. Es empfiehlt sich, dieses Aufenthaltsrecht auch Personen zu gewähren, die aus dem Erwerbsleben ausgeschieden sind, auch wenn sie während ihres Berufslebens von dem Recht auf Freizuegigkeit keinen Gebrauch gemacht haben. Die Aufenthaltsberechtigten dürfen die öffentlichen Finanzen des Aufnahmemitgliedstaates nicht über Gebühr belasten. Nach Artikel 10 der Verordnung (EWG) Nr. 1408/71 (4) in der Fassung der Verordnung (EWG) Nr. 1390/81 (5) haben die Empfänger von Geldleistungen bei Invalidität und Alter und die Bezieher von Renten bei Arbeitsunfällen oder Berufskrankheiten auch dann weiterhin Anspruch auf diese Leistungen und Renten, wenn sie im Gebiet eines anderen Mitgliedstaates als des Staates wohnen, auf dessen Gebiet der zur Zahlung verpflichtete Träger seinen Sitz hat. Die Ausübung des Aufenthaltsrechts wird erst dann eine reale Möglichkeit, wenn es auch den Familienangehörigen zugestanden wird. Für die von dieser Richtlinie Begünstigten sollte eine Verwaltungsregelung entsprechend der insbesondere in der Richtlinie 68/360/EWG (6) und in der Richtlinie 64/221/EWG (7) vorgesehenen Regelung gelten. Der Vertrag enthält Befugnisse für den Erlaß der vorliegenden Richtlinie nur in Artikel 235 - HAT FOLGENDE RICHTLINIE ERLASSEN: Artikel 1 (1) Die Mitgliedstaaten gewähren den Angehörigen der Mitgliedstaaten, die in der Gemeinschaft eine Tätigkeit als Arbeitnehmer oder als Selbständige ausgeuebt haben, sowie deren Familienangehörigen nach der Definition von Absatz 2 unter der Bedingung das Aufenthaltsrecht, daß sie eine Invaliditäts-, Vorruhestands- oder Altersrente oder eine Rente wegen Arbeitsunfalls oder Berufskrankheit in einer solchen Höhe beziehen, daß sie während ihres Aufenthalts nicht die Sozialhilfe des Aufnahmemitgliedstaats in Anspruch nehmen müssen, und einen Krankenversicherungsschutz genießen, der im Aufnahmemitgliedstaat alle Risiken abdeckt. Die Existenzmittel des Antragstellers gelten als ausreichend, wenn sie einen Betrag übersteigen, unterhalb dessen der Aufnahmemitgliedstaat seinen Staatsangehörigen aufgrund der persönlichen Situation des Antragstellers und gegebenenfalls der Situation der nach Absatz 2 aufgenommenen Personen Sozialhilfe gewähren kann. Ist Unterabsatz 2 in einem Mitgliedstaat nicht anwendbar, so gelten die Existenzmittel des Antragstellers als ausreichend, wenn sie den Betrag der Grundrente der Sozialversicherung übersteigen, die der Aufnahmemitgliedstaat zahlt. 
(2) Bei dem Aufenthaltsberechtigten dürfen folgende Personen ungeachtet ihrer Staatsangehörigkeit in einem anderen Mitgliedstaat Wohnung nehmen: a) sein Ehegatte sowie die Verwandten in absteigender Linie, denen Unterhalt gewährt wird; b) seine Verwandten und die Verwandten seines Ehegatten in aufsteigender Linie, denen er Unterhalt gewährt. Artikel 2 (1) Zum Nachweis des Aufenthaltsrechts wird eine Bescheinigung, die »Aufenthaltserlaubnis für Staatsangehörige eines EWG-Mitgliedstaates%quot%, erteilt, deren Gültigkeit auf fünf Jahre mit Verlängerungsmöglichkeit begrenzt werden kann. Die Mitgliedstaaten können jedoch die Erneuerung der Aufenthaltserlaubnis nach den ersten zwei Aufenthaltsjahren verlangen, wenn sie dies für erforderlich halten. Einem Familienmitglied, das nicht die Staatsangehörigkeit eines Mitgliedstaats besitzt, wird ein Aufenthaltsdokument mit der gleichen Gültigkeitsdauer ausgestellt wie dem Staatsangehörigen, von dem es seine Rechte herleitet. Für die Erteilung der Aufenthaltserlaubnis oder des Aufenthaltsdokuments darf der Mitgliedstaat vom Antragsteller nur die Vorlage eines gültigen Personalausweises bzw. Reisepasses sowie den Nachweis verlangen, daß er die Voraussetzungen des Artikels 1 erfuellt. (2) Die Artikel 2 und 3, Artikel 6 Absatz 1 Buchstabe a) und Absatz 2 sowie Artikel 9 der Richtlinie 68/360/EWG finden auf die von dieser Richtlinie Begünstigten entsprechende Anwendung. Der Ehegatte eines Staatsangehörigen eines Mitgliedstaats, der im Hoheitsgebiet eines Mitgliedstaats aufenthaltsberechtigt ist, sowie die Kinder dieses Staatsangehörigen, denen er Unterhalt gewährt, haben, auch wenn sie die Staatsangehörigkeit eines Mitgliedstaats nicht besitzen, das Recht, im gesamten Hoheitsgebiet dieses Mitgliedstaats jedwede Tätigkeit im Lohn- oder Gehaltsverhältnis oder jedwede selbständige Erwerbstätigkeit auszuüben. Die Mitgliedstaaten dürfen nur aus Gründen der öffentlichen Ordnung, der öffentlichen Sicherheit oder der Volksgesundheit von den Bestimmungen dieser Richtlinie abweichen. In diesem Fall findet die Richtlinie 64/221/EWG Anwendung. (3) Die vorliegende Richtlinie berührt nicht die geltenden Rechtsvorschriften für den Erwerb von Zweitwohnungen. Artikel 3 Das Aufenthaltsrecht besteht, solange die Berechtigten die Bedingungen des Artikels 1 erfuellen. Artikel 4 Die Kommission arbeitet spätestens drei Jahre nach dem Beginn der Anwendung dieser Richtlinie und anschließend alle drei Jahre einen Bericht über ihre Anwendung aus und legt ihn dem Europäischen Parlament und dem Rat vor. Artikel 5 Die Mitgliedstaaten setzen die erforderlichen Rechts- und Verwaltungsvorschriften in Kraft, um dieser Richtlinie bis spätestens 30. Juni 1992 nachzukommen. Sie setzen die Kommission unverzueglich davon in Kenntnis. Artikel 6 Diese Richtlinie ist an die Mitgliedstaaten gerichtet. Geschehen zu Luxemburg am 28. Juni 1990. Im Namen des Rates Der Präsident M. GEOGHEGAN-QUINN (1) ABl. Nr. C 191 vom 28. 7. 1989, S. 3 und ABl. Nr. C 26 vom 3. 2. 1990, S. 19. (2) Stellungnahme vom 13. Juni 1990 (noch nicht im Amtsblatt veröffentlicht). (3) ABl. Nr. C 329 vom 30. 12. 1989, S. 25. (4) ABl. Nr. L 149 vom 5. 7. 1971, S. 2. (5) ABl. Nr. L 143 vom 29. 5. 1981, S. 1. (6) ABl. Nr. L 257 vom 19. 10. 1968, S. 13. (7) ABl. Nr. 56 vom 4. 4. 1964, S. 850/64. "
---
# legal_t5_small_summ_de model
Model for summarization of legal text written in German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on a parallel corpus from JRC-Acquis.
## Model description
legal_t5_small_summ_de is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model can be used for summarization of legal texts written in German.
### How to use
Here is how to use this model to summarize legal text written in German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_de"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_de", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "(90/365/EWG) DER RAT DER EUROPÄISCHEN GEMEINSCHAFTEN - gestützt auf den Vertrag zur Gründung der Europäischen Wirtschaftsgemeinschaft, insbesondere auf Artikel 235, auf Vorschlag der Kommission (1), nach Stellungnahme des Europäischen Parlaments (2), nach Stellungnahme des Wirtschafts- und Sozialausschusses (3), in Erwägung nachstehender Gründe: Gemäß Artikel 3 Buchstabe c) des Vertrages umfasst die Tätigkeit der Gemeinschaft, nach Maßgabe des Vertrages, die Beseitigung der Hindernisse für den freien Personenverkehr zwischen den Mitgliedstaaten. Artikel 8a des Vertrages sieht vor, daß der Binnenmarkt bis zum 31. Dezember 1992 zu verwirklichen ist. Der Binnenmarkt umfasst einen Raum ohne Binnengrenzen, in dem der freie Verkehr von Waren, Personen, Dienstleistungen und Kapital gemäß den Bestimmungen des Vertrages gewährleistet ist. Die Artikel 48 und 52 des Vertrages sehen die Freizuegigkeit der Arbeitnehmer und selbständig Erwerbstätigen vor, was ein Recht auf Aufenthalt in dem Mitgliedstaat beinhaltet, in dem sie ihr Berufsleben verbringen. Es empfiehlt sich, dieses Aufenthaltsrecht auch Personen zu gewähren, die aus dem Erwerbsleben ausgeschieden sind, auch wenn sie während ihres Berufslebens von dem Recht auf Freizuegigkeit keinen Gebrauch gemacht haben. Die Aufenthaltsberechtigten dürfen die öffentlichen Finanzen des Aufnahmemitgliedstaates nicht über Gebühr belasten. Nach Artikel 10 der Verordnung (EWG) Nr. 1408/71 (4) in der Fassung der Verordnung (EWG) Nr. 1390/81 (5) haben die Empfänger von Geldleistungen bei Invalidität und Alter und die Bezieher von Renten bei Arbeitsunfällen oder Berufskrankheiten auch dann weiterhin Anspruch auf diese Leistungen und Renten, wenn sie im Gebiet eines anderen Mitgliedstaates als des Staates wohnen, auf dessen Gebiet der zur Zahlung verpflichtete Träger seinen Sitz hat. Die Ausübung des Aufenthaltsrechts wird erst dann eine reale Möglichkeit, wenn es auch den Familienangehörigen zugestanden wird. Für die von dieser Richtlinie Begünstigten sollte eine Verwaltungsregelung entsprechend der insbesondere in der Richtlinie 68/360/EWG (6) und in der Richtlinie 64/221/EWG (7) vorgesehenen Regelung gelten. Der Vertrag enthält Befugnisse für den Erlaß der vorliegenden Richtlinie nur in Artikel 235 - HAT FOLGENDE RICHTLINIE ERLASSEN: Artikel 1 (1) Die Mitgliedstaaten gewähren den Angehörigen der Mitgliedstaaten, die in der Gemeinschaft eine Tätigkeit als Arbeitnehmer oder als Selbständige ausgeuebt haben, sowie deren Familienangehörigen nach der Definition von Absatz 2 unter der Bedingung das Aufenthaltsrecht, daß sie eine Invaliditäts-, Vorruhestands- oder Altersrente oder eine Rente wegen Arbeitsunfalls oder Berufskrankheit in einer solchen Höhe beziehen, daß sie während ihres Aufenthalts nicht die Sozialhilfe des Aufnahmemitgliedstaats in Anspruch nehmen müssen, und einen Krankenversicherungsschutz genießen, der im Aufnahmemitgliedstaat alle Risiken abdeckt. Die Existenzmittel des Antragstellers gelten als ausreichend, wenn sie einen Betrag übersteigen, unterhalb dessen der Aufnahmemitgliedstaat seinen Staatsangehörigen aufgrund der persönlichen Situation des Antragstellers und gegebenenfalls der Situation der nach Absatz 2 aufgenommenen Personen Sozialhilfe gewähren kann. Ist Unterabsatz 2 in einem Mitgliedstaat nicht anwendbar, so gelten die Existenzmittel des Antragstellers als ausreichend, wenn sie den Betrag der Grundrente der Sozialversicherung übersteigen, die der Aufnahmemitgliedstaat zahlt. 
(2) Bei dem Aufenthaltsberechtigten dürfen folgende Personen ungeachtet ihrer Staatsangehörigkeit in einem anderen Mitgliedstaat Wohnung nehmen: a) sein Ehegatte sowie die Verwandten in absteigender Linie, denen Unterhalt gewährt wird; b) seine Verwandten und die Verwandten seines Ehegatten in aufsteigender Linie, denen er Unterhalt gewährt. Artikel 2 (1) Zum Nachweis des Aufenthaltsrechts wird eine Bescheinigung, die »Aufenthaltserlaubnis für Staatsangehörige eines EWG-Mitgliedstaates%quot%, erteilt, deren Gültigkeit auf fünf Jahre mit Verlängerungsmöglichkeit begrenzt werden kann. Die Mitgliedstaaten können jedoch die Erneuerung der Aufenthaltserlaubnis nach den ersten zwei Aufenthaltsjahren verlangen, wenn sie dies für erforderlich halten. Einem Familienmitglied, das nicht die Staatsangehörigkeit eines Mitgliedstaats besitzt, wird ein Aufenthaltsdokument mit der gleichen Gültigkeitsdauer ausgestellt wie dem Staatsangehörigen, von dem es seine Rechte herleitet. Für die Erteilung der Aufenthaltserlaubnis oder des Aufenthaltsdokuments darf der Mitgliedstaat vom Antragsteller nur die Vorlage eines gültigen Personalausweises bzw. Reisepasses sowie den Nachweis verlangen, daß er die Voraussetzungen des Artikels 1 erfuellt. (2) Die Artikel 2 und 3, Artikel 6 Absatz 1 Buchstabe a) und Absatz 2 sowie Artikel 9 der Richtlinie 68/360/EWG finden auf die von dieser Richtlinie Begünstigten entsprechende Anwendung. Der Ehegatte eines Staatsangehörigen eines Mitgliedstaats, der im Hoheitsgebiet eines Mitgliedstaats aufenthaltsberechtigt ist, sowie die Kinder dieses Staatsangehörigen, denen er Unterhalt gewährt, haben, auch wenn sie die Staatsangehörigkeit eines Mitgliedstaats nicht besitzen, das Recht, im gesamten Hoheitsgebiet dieses Mitgliedstaats jedwede Tätigkeit im Lohn- oder Gehaltsverhältnis oder jedwede selbständige Erwerbstätigkeit auszuüben. Die Mitgliedstaaten dürfen nur aus Gründen der öffentlichen Ordnung, der öffentlichen Sicherheit oder der Volksgesundheit von den Bestimmungen dieser Richtlinie abweichen. In diesem Fall findet die Richtlinie 64/221/EWG Anwendung. (3) Die vorliegende Richtlinie berührt nicht die geltenden Rechtsvorschriften für den Erwerb von Zweitwohnungen. Artikel 3 Das Aufenthaltsrecht besteht, solange die Berechtigten die Bedingungen des Artikels 1 erfuellen. Artikel 4 Die Kommission arbeitet spätestens drei Jahre nach dem Beginn der Anwendung dieser Richtlinie und anschließend alle drei Jahre einen Bericht über ihre Anwendung aus und legt ihn dem Europäischen Parlament und dem Rat vor. Artikel 5 Die Mitgliedstaaten setzen die erforderlichen Rechts- und Verwaltungsvorschriften in Kraft, um dieser Richtlinie bis spätestens 30. Juni 1992 nachzukommen. Sie setzen die Kommission unverzueglich davon in Kenntnis. Artikel 6 Diese Richtlinie ist an die Mitgliedstaaten gerichtet. Geschehen zu Luxemburg am 28. Juni 1990. Im Namen des Rates Der Präsident M. GEOGHEGAN-QUINN (1) ABl. Nr. C 191 vom 28. 7. 1989, S. 3 und ABl. Nr. C 26 vom 3. 2. 1990, S. 19. (2) Stellungnahme vom 13. Juni 1990 (noch nicht im Amtsblatt veröffentlicht). (3) ABl. Nr. C 329 vom 30. 12. 1989, S. 25. (4) ABl. Nr. L 149 vom 5. 7. 1971, S. 2. (5) ABl. Nr. L 143 vom 29. 5. 1981, S. 1. (6) ABl. Nr. L 257 vom 19. 10. 1968, S. 13. (7) ABl. Nr. 56 vom 4. 4. 1964, S. 850/64. "
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_summ_de model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte pair encoding) used by this model.
## Evaluation results
When evaluated on the test dataset, the model achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_de | 78.03 | 68.84 | 76.95 |
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
ShengdingHu/qnli | 5d11db630227970acb31f2a246d698ce6eefe708 | 2022-02-02T13:22:44.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ShengdingHu | null | ShengdingHu/qnli | 29 | null | transformers | 7,222 | Entry not found |
addy88/wav2vec2-english-stt | 86095016131ad4fc6bfd7e72f4cbb8615319ee74 | 2021-12-19T15:08:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-english-stt | 29 | null | transformers | 7,223 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-english-stt")
model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-english-stt")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
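# Example invocation (the path below is a placeholder):
# parse_transcription("sample.wav")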
``` |
aliosm/ComVE-gpt2-large | babcc66c5ea72dcb089496c92d6e6b0cd0bce7e7 | 2021-05-21T13:12:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation",
"transformers",
"exbert",
"commonsense",
"semeval2020",
"comve",
"license:mit"
] | text-generation | false | aliosm | null | aliosm/ComVE-gpt2-large | 29 | null | transformers | 7,224 | ---
language: "en"
tags:
- gpt2
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---
# ComVE-gpt2-large
## Model description
A model finetuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080), using a causal language modeling (CLM) objective.
The model is able to generate a reason why a given natural language statement is against commonsense.
## Intended uses & limitations
You can use the raw model for text generation to generate reasons why natural language statements are against commonsense.
#### How to use
You can use this model directly to generate reasons why the given statement is against commonsense using [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.
*Note:* make sure that you are using version `2.4.1` of `transformers` package. Newer versions has some issue in text generation and the model repeats the last token generated again and again.
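If you prefer a quick interactive test over the script, the following is a rough sketch (not the authors' `generate.sh` pipeline) that builds a prompt in the same format as the widget example above; keep the version caveat above in mind, since generation quality may degrade on newer `transformers` releases:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("aliosm/ComVE-gpt2-large")
model = GPT2LMHeadModel.from_pretrained("aliosm/ComVE-gpt2-large")
model.eval()

# Statement against commonsense, followed by the separator used during finetuning
prompt = "Chicken can swim in water. <|continue|>"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```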
#### Limitations and bias
The model is usually biased toward negating the input sentence rather than producing a factual reason.
## Training data
The model is initialized from the [gpt2-large](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and finetuned using [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset which contains 10K against commonsense sentences, each of them is paired with three reference reasons.
## Training procedure
Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator, and the model is then finetuned using the CLM objective.
The model was trained on an Nvidia Tesla P100 GPU from the Google Colab platform with a 5e-5 learning rate, 5 epochs, a 128 maximum sequence length and a batch size of 64.
<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>
## Eval results
The model achieved BLEU scores of 16.5110/15.9299 on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing datasets, respectively.
### BibTeX entry and citation info
```bibtex
@article{fadel2020justers,
title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
year={2020}
}
```
<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2-large">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
anirudh21/albert-xxlarge-v2-finetuned-wnli | 102c29f68a2b6379f9544f07371d6ad49972424a | 2022-01-27T13:00:48.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/albert-xxlarge-v2-finetuned-wnli | 29 | null | transformers | 7,225 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-xxlarge-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5070422535211268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-finetuned-wnli
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6970
- Accuracy: 0.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
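As a rough usage sketch (not part of the original card; the sentence pair below is made up, and the label names follow the auto-generated `LABEL_0`/`LABEL_1` convention unless `model.config.id2label` says otherwise):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("anirudh21/albert-xxlarge-v2-finetuned-wnli")
model = AutoModelForSequenceClassification.from_pretrained("anirudh21/albert-xxlarge-v2-finetuned-wnli")

# WNLI is a sentence-pair (entailment) task, so encode premise and hypothesis together
premise = "The trophy didn't fit in the suitcase because it was too big."
hypothesis = "The trophy was too big."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs, model.config.id2label)
```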
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 0.8066 | 0.4366 |
| No log | 2.0 | 26 | 0.6970 | 0.5070 |
| No log | 3.0 | 39 | 0.7977 | 0.4507 |
| No log | 4.0 | 52 | 0.7906 | 0.4930 |
| No log | 5.0 | 65 | 0.8459 | 0.4366 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anon-submission-mk/electra-base-macedonian-cased-discriminator | d170c38e6658341ed131e73405a540a4e78d089a | 2020-06-17T21:37:57.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | anon-submission-mk | null | anon-submission-mk/electra-base-macedonian-cased-discriminator | 29 | null | transformers | 7,226 | Entry not found |
anton-l/wav2vec2-large-xlsr-53-hungarian | 05da1d50259d9b3ad85b363937415419d39b69c4 | 2021-07-05T19:47:18.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-hungarian | 29 | null | transformers | 7,227 | ---
language: hu
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hungarian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hu
type: common_voice
args: hu
metrics:
- name: Test WER
type: wer
value: 42.26
---
# Wav2Vec2-Large-XLSR-53-Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/hu.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/hu/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/hu/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 42.26 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
arshyajabbari/wav2vec2-large-persian-demo | 5527845c9e900118cecf1ccbab8537c4fc3d0b46 | 2022-02-09T10:39:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | arshyajabbari | null | arshyajabbari/wav2vec2-large-persian-demo | 29 | null | transformers | 7,228 | Entry not found |
ashish-shrivastava/dont-know-response | 700388e0f20ccdc535929b7fb5a622ee9a93f09d | 2021-06-23T11:27:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:2012.01873",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ashish-shrivastava | null | ashish-shrivastava/dont-know-response | 29 | 2 | transformers | 7,229 | ## Natural Don't Know Response Model
Fine-tuned [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) on a combination of dependency-rule-based data and the [Quora Question Pairs (QQP)](https://huggingface.co/nlp/viewer/?dataset=quora) dataset for the **Don't Know Response Generation** task.
Additional information about this model:
- Paper : [Saying No is An Art: Contextualized Fallback Responses for
Unanswerable Dialogue Queries](https://arxiv.org/pdf/2012.01873.pdf)
- Github Repo: https://github.com/kaustubhdhole/natural-dont-know
#### How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "ashish-shrivastava/dont-know-response"
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
input = "Where can I find good Italian food ?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded_output) # I'm not sure where you can get good quality Italian food.
```
#### Hyperparameters
```
n_epochs = 2
base_LM_model = "T5-base"
max_seq_len = 256
learning_rate = 3e-4
adam_epsilon = 1e-8
train_batch_size = 6
```
#### BibTeX entry and citation info
```bibtex
@misc{shrivastava2020saying,
title={Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries},
author={Ashish Shrivastava and Kaustubh Dhole and Abhinav Bhatt and Sharvani Raghunath},
year={2020},
eprint={2012.01873},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
bagdaebhishek/IndianPoliticalTweetsLM | e0f9855b3f7a2e73b45d58bdc8bd2d6e5ea55ef3 | 2021-09-22T07:49:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"transformers",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"lm-head",
"license:apache-2.0"
] | text-generation | false | bagdaebhishek | null | bagdaebhishek/IndianPoliticalTweetsLM | 29 | null | transformers | 7,230 | ---
language: en
thumbnail: https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg
tags:
- India
- politics
- tweets
- BJP
- Congress
- AAP
- pytorch
- gpt2
- lm-head
- text-generation
license: apache-2.0
datasets:
- Twitter
- IndianPolitics
---
# Model name
Indian Political Tweets LM
## Model description
Note: This model is based on GPT2. If you want a bigger model based on GPT2-medium and finetuned on the same data, please take a look at the [IndianPoliticalTweetsLMMedium](https://huggingface.co/bagdaebhishek/IndianPoliticalTweetsLMMedium) model.
This is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2 model from the Huggingface transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated the tweets from these handles by language and trained the LM on the English tweets from both clusters.
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
bioformers/bioformer-cased-v1.0-bc2gm | 3ddfa702a1b534b178c3acb765937582ff6e58a3 | 2021-10-19T07:37:45.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | bioformers | null | bioformers/bioformer-cased-v1.0-bc2gm | 29 | null | transformers | 7,231 | [bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fine-tuned on the [BC2GM](https://doi.org/10.1186/gb-2008-9-s2-s2) dataset for 10 epochs. This fine-tuned model can be used for NER for genes/proteins.
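A minimal usage sketch (not part of the original card; the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bioformers/bioformer-cased-v1.0-bc2gm",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Mutations in the BRCA1 gene increase the risk of breast cancer."))
```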
|
birgermoell/t5-base-swedish | c9fa23681fba1e8efeb1412dafa13e3e3976fabf | 2021-07-17T07:52:39.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"feature-extraction",
"sv",
"dataset:oscar",
"arxiv:1910.10683",
"transformers",
"summarization",
"translation",
"license:apache-2.0"
] | translation | false | birgermoell | null | birgermoell/t5-base-swedish | 29 | null | transformers | 7,232 | ---
language:
- sv
datasets:
- oscar
tags:
- summarization
- translation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [OSCAR](https://huggingface.co/datasets/oscar)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
biu-nlp/superpal | a0383eaa3520d4fae7780f25d63fb5c84eb0694d | 2022-06-18T22:15:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"arxiv:2009.00590",
"transformers"
] | text-classification | false | biu-nlp | null | biu-nlp/superpal | 29 | null | transformers | 7,233 | ---
widget:
- text: "Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country."
---
# SuperPAL model
Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline
Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. [PDF](https://arxiv.org/pdf/2009.00590)
**How to use?**
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal")
model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal")
```
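Continuing from the snippet above, a minimal inference sketch is shown below (not from the original repo; it reuses the widget example above, and the mapping of output classes to "aligned"/"not aligned" should be confirmed against the repo):
```python
import torch

# Summary proposition and candidate source sentence, paired as in the widget example
pair = ("Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> "
        "Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country.")
inputs = tokenizer(pair, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class probabilities for the summary/source sentence pair
```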
The original repo is [here](https://github.com/oriern/SuperPAL).
If you find our work useful, please cite the paper as:
```python
@inproceedings{ernst-etal-2021-summary,
title = "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline",
author = "Ernst, Ori and Shapira, Ori and Pasunuru, Ramakanth and Lepioshkin, Michael and Goldberger, Jacob and Bansal, Mohit and Dagan, Ido",
booktitle = "Proceedings of the 25th Conference on Computational Natural Language Learning",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.conll-1.25",
pages = "310--322"
}
``` |
btk-mufi/bert-pretrain | 8283f6ab56d53811864ab6a65e17acae00ea6115 | 2021-05-19T13:34:00.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | btk-mufi | null | btk-mufi/bert-pretrain | 29 | null | transformers | 7,234 | Entry not found |
castorini/monot5-large-msmarco-10k | cfbf422f744b443bc461fac220541c4d90be9cbe | 2021-11-24T19:15:14.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/monot5-large-msmarco-10k | 29 | null | transformers | 7,235 | This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
This model usually has a better zero-shot performance than `monot5-large-msmarco`, i.e., it performs better on datasets different from MS MARCO.
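As a rough illustration of the monoT5 pointwise scoring pattern (this is not the reference implementation; the prompt template and the true/false scoring follow the pygaggle convention, and the query/passage pair below is made up):
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-large")  # monoT5 uses the standard T5 vocabulary
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-large-msmarco-10k")
model.eval()

query = "how do wind turbines generate electricity"
passage = "Wind turbines convert the kinetic energy of moving air into electrical power."
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt", truncation=True)

# Relevance is read off the first decoded token: probability of "true" vs "false"
true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(score)  # higher means the passage is judged more relevant to the query
```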
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
chirag2706/gpt2_code_generation_model | fe673bce8bc8ea2ce4d9d3e745ec1eed6aba4ec6 | 2021-05-21T14:54:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | chirag2706 | null | chirag2706/gpt2_code_generation_model | 29 | null | transformers | 7,236 | Entry not found |
cook/cicero-similis | 971409a88c7293eb6fd8d497589a95f85dcdd78e | 2022-01-10T06:07:57.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"la",
"dataset:Tesserae",
"dataset:Phi5",
"dataset:Thomas Aquinas",
"dataset:Patrologia Latina",
"transformers",
"language model",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | cook | null | cook/cicero-similis | 29 | null | transformers | 7,237 | ---
language:
- la
tags:
- language model
license: apache-2.0
datasets:
- Tesserae
- Phi5
- Thomas Aquinas
- Patrologia Latina
---
# Cicero-Similis
## Model description
A Latin Language Model, trained on Latin texts, and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,
Published in Ciceroniana On Line, Vol. V, #2.
## Intended uses & limitations
#### How to use
Normalize text using JV Replacement and tokenize using CLTK to separate enclitics such as "-que", then:
```
from transformers import BertForMaskedLM, AutoTokenizer, FillMaskPipeline
tokenizer = AutoTokenizer.from_pretrained("cook/cicero-similis")
model = BertForMaskedLM.from_pretrained("cook/cicero-similis")
fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer, top_k=10_000)
# Cicero, De Re Publica, VI, 32, 2
# "animal" is found in A, Q, PhD manuscripts
# 'anima' H^1 Macr. et codd. Tusc.
results = fill_mask("inanimum est enim omne quod pulsu agitatur externo; quod autem est [MASK],")
```
#### Limitations and bias
Currently the model training data excludes modern and 19th century texts, but that weakness is the model's strength; it's not aimed to be a one-size-fits-all model.
## Training data
Trained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologia Latina.
## Training procedure
5 epochs, masked language modeling probability 0.15, effective batch size 32
## Eval results
A novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,
Published in Ciceroniana On Line, Vol. V, #2.
### BibTeX entry and citation info
TODO
_What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook,
Published in Ciceroniana On Line, Vol. V, #2. |
crazould/multimodal-emotion-recognition | ef9981c345dbcb41678dff53eab90729e93300aa | 2021-08-19T08:49:33.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | crazould | null | crazould/multimodal-emotion-recognition | 29 | null | transformers | 7,238 | Entry not found |
dbdmg/wav2vec2-xls-r-300m-italian-robust | 2ec7d2093dce5453d02fc5630468bc5f2f2c0b7e | 2022-03-23T18:26:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | dbdmg | null | dbdmg/wav2vec2-xls-r-300m-italian-robust | 29 | null | transformers | 7,239 | ---
license: apache-2.0
language: it
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300m - Italian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: it
metrics:
- name: Test WER
type: wer
value: 17.17
- name: Test CER
type: cer
value: 4.27
- name: Test WER (+LM)
type: wer
value: 12.07
- name: Test CER (+LM)
type: cer
value: 3.52
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: it
metrics:
- name: Test WER
type: wer
value: 24.29
- name: Test CER
type: cer
value: 8.1
- name: Test WER (+LM)
type: wer
value: 17.36
- name: Test CER (+LM)
type: cer
value: 7.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: it
metrics:
- name: Test WER
type: wer
value: 33.66
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-italian-robust
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Italian splits of the following datasets:
- Mozilla Foundation Common Voice V7 dataset
- [LibriSpeech multilingual](http://www.openslr.org/94)
- [TED multilingual](https://www.openslr.org/100/)
- [Voxforge](http://www.voxforge.org/it/Downloads)
- [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/)
- [EuroParl-ST](https://www.mllp.upv.es/europarl-st/)
- [EMOVO](http://voice.fub.it/activities/corpora/emovo/index.html)
- [MSPKA](http://www.mspkacorpus.it/)
## Model description
More information needed
## Intended uses & limitations
More information needed
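Although the original card leaves this section empty, a minimal inference sketch would look like the following (the audio path is a placeholder; input should be 16 kHz mono speech):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dbdmg/wav2vec2-xls-r-300m-italian-robust",
)
print(asr("sample.wav"))  # replace with the path to your own recording
```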
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.06 | 400 | 0.7508 | 0.7354 |
| 2.3127 | 0.11 | 800 | 0.5888 | 0.5882 |
| 0.7256 | 0.17 | 1200 | 0.5121 | 0.5247 |
| 0.6692 | 0.22 | 1600 | 0.4774 | 0.5028 |
| 0.6384 | 0.28 | 2000 | 0.4832 | 0.4885 |
| 0.6384 | 0.33 | 2400 | 0.4410 | 0.4581 |
| 0.6199 | 0.39 | 2800 | 0.4160 | 0.4331 |
| 0.5972 | 0.44 | 3200 | 0.4136 | 0.4275 |
| 0.6048 | 0.5 | 3600 | 0.4362 | 0.4538 |
| 0.5627 | 0.55 | 4000 | 0.4313 | 0.4469 |
| 0.5627 | 0.61 | 4400 | 0.4425 | 0.4579 |
| 0.5855 | 0.66 | 4800 | 0.3859 | 0.4133 |
| 0.5702 | 0.72 | 5200 | 0.3974 | 0.4097 |
| 0.55 | 0.77 | 5600 | 0.3931 | 0.4134 |
| 0.5624 | 0.83 | 6000 | 0.3900 | 0.4126 |
| 0.5624 | 0.88 | 6400 | 0.3622 | 0.3899 |
| 0.5615 | 0.94 | 6800 | 0.3755 | 0.4067 |
| 0.5472 | 0.99 | 7200 | 0.3980 | 0.4284 |
| 0.5663 | 1.05 | 7600 | 0.3553 | 0.3782 |
| 0.5189 | 1.1 | 8000 | 0.3538 | 0.3726 |
| 0.5189 | 1.16 | 8400 | 0.3425 | 0.3624 |
| 0.518 | 1.21 | 8800 | 0.3431 | 0.3651 |
| 0.5399 | 1.27 | 9200 | 0.3442 | 0.3573 |
| 0.5303 | 1.32 | 9600 | 0.3241 | 0.3404 |
| 0.5043 | 1.38 | 10000 | 0.3175 | 0.3378 |
| 0.5043 | 1.43 | 10400 | 0.3265 | 0.3501 |
| 0.4968 | 1.49 | 10800 | 0.3539 | 0.3703 |
| 0.5102 | 1.54 | 11200 | 0.3323 | 0.3506 |
| 0.5008 | 1.6 | 11600 | 0.3188 | 0.3433 |
| 0.4996 | 1.65 | 12000 | 0.3162 | 0.3388 |
| 0.4996 | 1.71 | 12400 | 0.3353 | 0.3552 |
| 0.5007 | 1.76 | 12800 | 0.3152 | 0.3317 |
| 0.4956 | 1.82 | 13200 | 0.3207 | 0.3430 |
| 0.5205 | 1.87 | 13600 | 0.3239 | 0.3430 |
| 0.4829 | 1.93 | 14000 | 0.3134 | 0.3266 |
| 0.4829 | 1.98 | 14400 | 0.3039 | 0.3291 |
| 0.5251 | 2.04 | 14800 | 0.2944 | 0.3169 |
| 0.4872 | 2.09 | 15200 | 0.3061 | 0.3228 |
| 0.4805 | 2.15 | 15600 | 0.3034 | 0.3152 |
| 0.4949 | 2.2 | 16000 | 0.2896 | 0.3066 |
| 0.4949 | 2.26 | 16400 | 0.3059 | 0.3344 |
| 0.468 | 2.31 | 16800 | 0.2932 | 0.3111 |
| 0.4637 | 2.37 | 17200 | 0.2890 | 0.3074 |
| 0.4638 | 2.42 | 17600 | 0.2893 | 0.3112 |
| 0.4728 | 2.48 | 18000 | 0.2832 | 0.3013 |
| 0.4728 | 2.54 | 18400 | 0.2921 | 0.3065 |
| 0.456 | 2.59 | 18800 | 0.2961 | 0.3104 |
| 0.4628 | 2.65 | 19200 | 0.2886 | 0.3109 |
| 0.4534 | 2.7 | 19600 | 0.2828 | 0.3020 |
| 0.4578 | 2.76 | 20000 | 0.2805 | 0.3026 |
| 0.4578 | 2.81 | 20400 | 0.2796 | 0.2987 |
| 0.4702 | 2.87 | 20800 | 0.2748 | 0.2906 |
| 0.4487 | 2.92 | 21200 | 0.2819 | 0.3008 |
| 0.4411 | 2.98 | 21600 | 0.2722 | 0.2868 |
| 0.4631 | 3.03 | 22000 | 0.2814 | 0.2974 |
| 0.4631 | 3.09 | 22400 | 0.2762 | 0.2894 |
| 0.4591 | 3.14 | 22800 | 0.2802 | 0.2980 |
| 0.4349 | 3.2 | 23200 | 0.2748 | 0.2951 |
| 0.4339 | 3.25 | 23600 | 0.2792 | 0.2927 |
| 0.4254 | 3.31 | 24000 | 0.2712 | 0.2911 |
| 0.4254 | 3.36 | 24400 | 0.2719 | 0.2892 |
| 0.4317 | 3.42 | 24800 | 0.2686 | 0.2861 |
| 0.4282 | 3.47 | 25200 | 0.2632 | 0.2861 |
| 0.4262 | 3.53 | 25600 | 0.2633 | 0.2817 |
| 0.4162 | 3.58 | 26000 | 0.2561 | 0.2765 |
| 0.4162 | 3.64 | 26400 | 0.2613 | 0.2847 |
| 0.414 | 3.69 | 26800 | 0.2679 | 0.2824 |
| 0.4132 | 3.75 | 27200 | 0.2569 | 0.2813 |
| 0.405 | 3.8 | 27600 | 0.2589 | 0.2785 |
| 0.4128 | 3.86 | 28000 | 0.2611 | 0.2714 |
| 0.4128 | 3.91 | 28400 | 0.2548 | 0.2731 |
| 0.4174 | 3.97 | 28800 | 0.2574 | 0.2716 |
| 0.421 | 4.02 | 29200 | 0.2529 | 0.2700 |
| 0.4109 | 4.08 | 29600 | 0.2547 | 0.2682 |
| 0.4027 | 4.13 | 30000 | 0.2578 | 0.2758 |
| 0.4027 | 4.19 | 30400 | 0.2511 | 0.2715 |
| 0.4075 | 4.24 | 30800 | 0.2507 | 0.2601 |
| 0.3947 | 4.3 | 31200 | 0.2552 | 0.2711 |
| 0.4042 | 4.35 | 31600 | 0.2530 | 0.2695 |
| 0.3907 | 4.41 | 32000 | 0.2543 | 0.2738 |
| 0.3907 | 4.46 | 32400 | 0.2491 | 0.2629 |
| 0.3895 | 4.52 | 32800 | 0.2471 | 0.2611 |
| 0.3901 | 4.57 | 33200 | 0.2404 | 0.2559 |
| 0.3818 | 4.63 | 33600 | 0.2378 | 0.2583 |
| 0.3831 | 4.68 | 34000 | 0.2341 | 0.2499 |
| 0.3831 | 4.74 | 34400 | 0.2379 | 0.2560 |
| 0.3808 | 4.79 | 34800 | 0.2418 | 0.2553 |
| 0.4015 | 4.85 | 35200 | 0.2378 | 0.2565 |
| 0.407 | 4.9 | 35600 | 0.2375 | 0.2535 |
| 0.38 | 4.96 | 36000 | 0.2329 | 0.2451 |
| 0.38 | 5.02 | 36400 | 0.2541 | 0.2737 |
| 0.3753 | 5.07 | 36800 | 0.2475 | 0.2580 |
| 0.3701 | 5.13 | 37200 | 0.2356 | 0.2484 |
| 0.3627 | 5.18 | 37600 | 0.2422 | 0.2552 |
| 0.3652 | 5.24 | 38000 | 0.2353 | 0.2518 |
| 0.3652 | 5.29 | 38400 | 0.2328 | 0.2452 |
| 0.3667 | 5.35 | 38800 | 0.2358 | 0.2478 |
| 0.3711 | 5.4 | 39200 | 0.2340 | 0.2463 |
| 0.361 | 5.46 | 39600 | 0.2375 | 0.2452 |
| 0.3655 | 5.51 | 40000 | 0.2292 | 0.2387 |
| 0.3655 | 5.57 | 40400 | 0.2330 | 0.2432 |
| 0.3637 | 5.62 | 40800 | 0.2242 | 0.2396 |
| 0.3516 | 5.68 | 41200 | 0.2284 | 0.2394 |
| 0.3498 | 5.73 | 41600 | 0.2254 | 0.2343 |
| 0.3626 | 5.79 | 42000 | 0.2191 | 0.2318 |
| 0.3626 | 5.84 | 42400 | 0.2261 | 0.2399 |
| 0.3719 | 5.9 | 42800 | 0.2261 | 0.2411 |
| 0.3563 | 5.95 | 43200 | 0.2259 | 0.2416 |
| 0.3574 | 6.01 | 43600 | 0.2148 | 0.2249 |
| 0.3339 | 6.06 | 44000 | 0.2173 | 0.2237 |
| 0.3339 | 6.12 | 44400 | 0.2133 | 0.2238 |
| 0.3303 | 6.17 | 44800 | 0.2193 | 0.2297 |
| 0.331 | 6.23 | 45200 | 0.2122 | 0.2205 |
| 0.3372 | 6.28 | 45600 | 0.2083 | 0.2215 |
| 0.3427 | 6.34 | 46000 | 0.2079 | 0.2163 |
| 0.3427 | 6.39 | 46400 | 0.2072 | 0.2154 |
| 0.3215 | 6.45 | 46800 | 0.2067 | 0.2170 |
| 0.3246 | 6.5 | 47200 | 0.2089 | 0.2183 |
| 0.3217 | 6.56 | 47600 | 0.2030 | 0.2130 |
| 0.3309 | 6.61 | 48000 | 0.2020 | 0.2123 |
| 0.3309 | 6.67 | 48400 | 0.2054 | 0.2133 |
| 0.3343 | 6.72 | 48800 | 0.2013 | 0.2128 |
| 0.3213 | 6.78 | 49200 | 0.1971 | 0.2064 |
| 0.3145 | 6.83 | 49600 | 0.2029 | 0.2107 |
| 0.3274 | 6.89 | 50000 | 0.2038 | 0.2136 |
| 0.3274 | 6.94 | 50400 | 0.1991 | 0.2064 |
| 0.3202 | 7.0 | 50800 | 0.1970 | 0.2083 |
| 0.314 | 7.05 | 51200 | 0.1970 | 0.2035 |
| 0.3031 | 7.11 | 51600 | 0.1943 | 0.2053 |
| 0.3004 | 7.16 | 52000 | 0.1942 | 0.1985 |
| 0.3004 | 7.22 | 52400 | 0.1941 | 0.2003 |
| 0.3029 | 7.27 | 52800 | 0.1936 | 0.2008 |
| 0.2915 | 7.33 | 53200 | 0.1935 | 0.1995 |
| 0.3005 | 7.38 | 53600 | 0.1943 | 0.2032 |
| 0.2984 | 7.44 | 54000 | 0.1913 | 0.1978 |
| 0.2984 | 7.5 | 54400 | 0.1907 | 0.1965 |
| 0.2978 | 7.55 | 54800 | 0.1881 | 0.1958 |
| 0.2944 | 7.61 | 55200 | 0.1887 | 0.1966 |
| 0.3004 | 7.66 | 55600 | 0.1870 | 0.1930 |
| 0.3099 | 7.72 | 56000 | 0.1906 | 0.1976 |
| 0.3099 | 7.77 | 56400 | 0.1856 | 0.1939 |
| 0.2917 | 7.83 | 56800 | 0.1883 | 0.1961 |
| 0.2924 | 7.88 | 57200 | 0.1864 | 0.1930 |
| 0.3061 | 7.94 | 57600 | 0.1831 | 0.1872 |
| 0.2834 | 7.99 | 58000 | 0.1835 | 0.1896 |
| 0.2834 | 8.05 | 58400 | 0.1828 | 0.1875 |
| 0.2807 | 8.1 | 58800 | 0.1820 | 0.1874 |
| 0.2765 | 8.16 | 59200 | 0.1807 | 0.1869 |
| 0.2737 | 8.21 | 59600 | 0.1810 | 0.1848 |
| 0.2722 | 8.27 | 60000 | 0.1795 | 0.1829 |
| 0.2722 | 8.32 | 60400 | 0.1785 | 0.1826 |
| 0.272 | 8.38 | 60800 | 0.1802 | 0.1836 |
| 0.268 | 8.43 | 61200 | 0.1771 | 0.1813 |
| 0.2695 | 8.49 | 61600 | 0.1773 | 0.1821 |
| 0.2686 | 8.54 | 62000 | 0.1756 | 0.1814 |
| 0.2686 | 8.6 | 62400 | 0.1740 | 0.1770 |
| 0.2687 | 8.65 | 62800 | 0.1748 | 0.1769 |
| 0.2686 | 8.71 | 63200 | 0.1734 | 0.1766 |
| 0.2683 | 8.76 | 63600 | 0.1722 | 0.1759 |
| 0.2686 | 8.82 | 64000 | 0.1719 | 0.1760 |
| 0.2686 | 8.87 | 64400 | 0.1720 | 0.1743 |
| 0.2626 | 8.93 | 64800 | 0.1696 | 0.1742 |
| 0.2587 | 8.98 | 65200 | 0.1690 | 0.1718 |
| 0.2554 | 9.04 | 65600 | 0.1704 | 0.1722 |
| 0.2537 | 9.09 | 66000 | 0.1702 | 0.1721 |
| 0.2537 | 9.15 | 66400 | 0.1696 | 0.1717 |
| 0.2511 | 9.2 | 66800 | 0.1685 | 0.1701 |
| 0.2473 | 9.26 | 67200 | 0.1696 | 0.1704 |
| 0.2458 | 9.31 | 67600 | 0.1686 | 0.1698 |
| 0.2476 | 9.37 | 68000 | 0.1675 | 0.1687 |
| 0.2476 | 9.42 | 68400 | 0.1659 | 0.1673 |
| 0.2463 | 9.48 | 68800 | 0.1664 | 0.1674 |
| 0.2481 | 9.53 | 69200 | 0.1661 | 0.1670 |
| 0.2411 | 9.59 | 69600 | 0.1658 | 0.1663 |
| 0.2445 | 9.64 | 70000 | 0.1652 | 0.1660 |
| 0.2445 | 9.7 | 70400 | 0.1646 | 0.1654 |
| 0.2407 | 9.75 | 70800 | 0.1646 | 0.1641 |
| 0.2483 | 9.81 | 71200 | 0.1641 | 0.1641 |
| 0.245 | 9.86 | 71600 | 0.1635 | 0.1643 |
| 0.2402 | 9.92 | 72000 | 0.1638 | 0.1634 |
| 0.2402 | 9.98 | 72400 | 0.1633 | 0.1636 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
diego-fustes/wav2vec2-large-xlsr-gl | e61a69170595a866e391df843fff6df7a71d46d8 | 2021-07-06T01:30:50.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:OpenSLR 77",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | diego-fustes | null | diego-fustes/wav2vec2-large-xlsr-gl | 29 | null | transformers | 7,240 | # Wav2Vec2-Large-XLSR-53
---
language: gl
datasets:
- OpenSLR 77
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician Wav2Vec2-Large-XLSR-53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: openslr
args: gl
metrics:
- name: Test WER
type: wer
value: 16.79
---
# Wav2Vec2-Large-XLSR-53-galician
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Galician using the [OpenSLR SLR77](https://openslr.org/77/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "gl", split="test[:2%]") # This is not available yet, load OpenSLR or your dataset instead
processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Galician test data of Common Voice (when it is released).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "gl", split="test") # This is not available yet, load OpenSLR or your dataset instead
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model.to("cuda")
chars_to_ignore_regex = '[^a-záéíóúñ ]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 16.79 % on OpenSLR split
## Training
The OpenSLR [SLR77](https://openslr.org/77/) dataset was used for training and validation. The dataset was split as 70% for training, 15% for validation and 15% for testing
The script used for training can be found [here](https://github.com/diego-fustes/xlsr-fine-tuning-gl)
|
dragonSwing/wav2vec2-base-vietnamese | c66735f7a38f4ab727cd4d31d42c16011d2bd388 | 2021-08-26T05:08:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:vlsp",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | dragonSwing | null | dragonSwing/wav2vec2-base-vietnamese | 29 | null | transformers | 7,241 | ---
language: vi
datasets:
- vlsp
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: Wav2vec2 Base Vietnamese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 31.353591
---
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [dragonSwing/wav2vec2-base-pretrain-vietnamese](https://huggingface.co/dragonSwing/wav2vec2-base-pretrain-vietnamese) on the Vietnamese speech recognition task, using 100 hours of labelled data from the [VLSP dataset](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test")
processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model.to("cuda")
chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=1)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 31.353591%
|
emre/wav2vec2-large-xls-r-300m-tr | 382a24da1774f6a18f68de3c813d6334457a1ee8 | 2022-03-23T18:25:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec2-large-xls-r-300m-tr | 29 | null | transformers | 7,242 | ---
license: apache-2.0
language: tr
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-tr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice_8_0
args: tr
metrics:
- name: Test WER
type: wer
value: 28.69
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Wer: 0.2869
## Model description
More information needed
## Intended uses & limitations
More information needed
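A minimal inference sketch is given below. It is not part of the original card: the audio file name is a placeholder, and the clip is assumed to be mono speech resampled to 16 kHz as in the Common Voice setup.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec2-large-xls-r-300m-tr")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec2-large-xls-r-300m-tr")

# "sample_tr.wav" is a placeholder path; the model expects 16 kHz mono input
speech, sr = torchaudio.load("sample_tr.wav")
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```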
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.8222 | 0.64 | 500 | 3.5026 | 1.0 |
| 3.2136 | 1.28 | 1000 | 3.0593 | 1.0000 |
| 2.8882 | 1.91 | 1500 | 2.4670 | 0.9939 |
| 2.3743 | 2.55 | 2000 | 1.1844 | 0.8657 |
| 1.9456 | 3.19 | 2500 | 0.8228 | 0.7397 |
| 1.7781 | 3.83 | 3000 | 0.6826 | 0.6753 |
| 1.6848 | 4.46 | 3500 | 0.5885 | 0.6140 |
| 1.6228 | 5.1 | 4000 | 0.5274 | 0.5789 |
| 1.5768 | 5.74 | 4500 | 0.4900 | 0.5519 |
| 1.5431 | 6.38 | 5000 | 0.4508 | 0.5238 |
| 1.5019 | 7.02 | 5500 | 0.4248 | 0.5021 |
| 1.4684 | 7.65 | 6000 | 0.4009 | 0.4827 |
| 1.4635 | 8.29 | 6500 | 0.3830 | 0.4700 |
| 1.4291 | 8.93 | 7000 | 0.3707 | 0.4595 |
| 1.4271 | 9.57 | 7500 | 0.3570 | 0.4514 |
| 1.3938 | 10.2 | 8000 | 0.3479 | 0.4378 |
| 1.3914 | 10.84 | 8500 | 0.3396 | 0.4368 |
| 1.3767 | 11.48 | 9000 | 0.3253 | 0.4262 |
| 1.3641 | 12.12 | 9500 | 0.3251 | 0.4178 |
| 1.355 | 12.76 | 10000 | 0.3138 | 0.4136 |
| 1.336 | 13.39 | 10500 | 0.3121 | 0.4069 |
| 1.3292 | 14.03 | 11000 | 0.3041 | 0.4014 |
| 1.3249 | 14.67 | 11500 | 0.3014 | 0.3931 |
| 1.3156 | 15.31 | 12000 | 0.3014 | 0.3929 |
| 1.313 | 15.94 | 12500 | 0.2969 | 0.3968 |
| 1.3068 | 16.58 | 13000 | 0.2965 | 0.3966 |
| 1.2785 | 17.22 | 13500 | 0.2943 | 0.3850 |
| 1.2867 | 17.86 | 14000 | 0.2912 | 0.3782 |
| 1.2714 | 18.49 | 14500 | 0.2819 | 0.3747 |
| 1.2844 | 19.13 | 15000 | 0.2840 | 0.3740 |
| 1.2684 | 19.77 | 15500 | 0.2913 | 0.3828 |
| 1.26 | 20.41 | 16000 | 0.2739 | 0.3674 |
| 1.2543 | 21.05 | 16500 | 0.2740 | 0.3691 |
| 1.2532 | 21.68 | 17000 | 0.2709 | 0.3756 |
| 1.2409 | 22.32 | 17500 | 0.2669 | 0.3593 |
| 1.2404 | 22.96 | 18000 | 0.2673 | 0.3576 |
| 1.2347 | 23.6 | 18500 | 0.2678 | 0.3643 |
| 1.2351 | 24.23 | 19000 | 0.2715 | 0.3650 |
| 1.2409 | 24.87 | 19500 | 0.2637 | 0.3571 |
| 1.2152 | 25.51 | 20000 | 0.2785 | 0.3609 |
| 1.2046 | 26.15 | 20500 | 0.2610 | 0.3508 |
| 1.2082 | 26.79 | 21000 | 0.2619 | 0.3461 |
| 1.2109 | 27.42 | 21500 | 0.2597 | 0.3502 |
| 1.2014 | 28.06 | 22000 | 0.2608 | 0.3468 |
| 1.1948 | 28.7 | 22500 | 0.2573 | 0.3457 |
| 1.205 | 29.34 | 23000 | 0.2619 | 0.3464 |
| 1.2019 | 29.97 | 23500 | 0.2559 | 0.3474 |
| 1.1917 | 30.61 | 24000 | 0.2601 | 0.3462 |
| 1.1939 | 31.25 | 24500 | 0.2575 | 0.3387 |
| 1.1882 | 31.89 | 25000 | 0.2535 | 0.3368 |
| 1.191 | 32.53 | 25500 | 0.2489 | 0.3365 |
| 1.1767 | 33.16 | 26000 | 0.2501 | 0.3347 |
| 1.167 | 33.8 | 26500 | 0.2504 | 0.3347 |
| 1.1678 | 34.44 | 27000 | 0.2480 | 0.3378 |
| 1.1803 | 35.08 | 27500 | 0.2487 | 0.3345 |
| 1.167 | 35.71 | 28000 | 0.2442 | 0.3319 |
| 1.1661 | 36.35 | 28500 | 0.2495 | 0.3334 |
| 1.164 | 36.99 | 29000 | 0.2472 | 0.3292 |
| 1.1578 | 37.63 | 29500 | 0.2442 | 0.3242 |
| 1.1584 | 38.27 | 30000 | 0.2431 | 0.3314 |
| 1.1526 | 38.9 | 30500 | 0.2441 | 0.3347 |
| 1.1542 | 39.54 | 31000 | 0.2437 | 0.3330 |
| 1.1508 | 40.18 | 31500 | 0.2433 | 0.3294 |
| 1.1406 | 40.82 | 32000 | 0.2434 | 0.3271 |
| 1.1514 | 41.45 | 32500 | 0.2426 | 0.3255 |
| 1.1418 | 42.09 | 33000 | 0.2432 | 0.3233 |
| 1.1365 | 42.73 | 33500 | 0.2436 | 0.3240 |
| 1.1348 | 43.37 | 34000 | 0.2483 | 0.3257 |
| 1.1301 | 44.01 | 34500 | 0.2420 | 0.3271 |
| 1.1268 | 44.64 | 35000 | 0.2472 | 0.3225 |
| 1.1224 | 45.28 | 35500 | 0.2382 | 0.3205 |
| 1.1224 | 45.92 | 36000 | 0.2388 | 0.3184 |
| 1.1198 | 46.56 | 36500 | 0.2382 | 0.3202 |
| 1.1274 | 47.19 | 37000 | 0.2404 | 0.3172 |
| 1.1147 | 47.83 | 37500 | 0.2394 | 0.3164 |
| 1.121 | 48.47 | 38000 | 0.2406 | 0.3202 |
| 1.1109 | 49.11 | 38500 | 0.2384 | 0.3154 |
| 1.1164 | 49.74 | 39000 | 0.2375 | 0.3169 |
| 1.1105 | 50.38 | 39500 | 0.2387 | 0.3173 |
| 1.1054 | 51.02 | 40000 | 0.2362 | 0.3120 |
| 1.0893 | 51.66 | 40500 | 0.2399 | 0.3130 |
| 1.0913 | 52.3 | 41000 | 0.2357 | 0.3088 |
| 1.1017 | 52.93 | 41500 | 0.2345 | 0.3084 |
| 1.0937 | 53.57 | 42000 | 0.2330 | 0.3140 |
| 1.0945 | 54.21 | 42500 | 0.2399 | 0.3107 |
| 1.0933 | 54.85 | 43000 | 0.2383 | 0.3134 |
| 1.0912 | 55.48 | 43500 | 0.2372 | 0.3077 |
| 1.0898 | 56.12 | 44000 | 0.2339 | 0.3083 |
| 1.0903 | 56.76 | 44500 | 0.2367 | 0.3065 |
| 1.0947 | 57.4 | 45000 | 0.2352 | 0.3104 |
| 1.0751 | 58.04 | 45500 | 0.2334 | 0.3084 |
| 1.09 | 58.67 | 46000 | 0.2328 | 0.3100 |
| 1.0876 | 59.31 | 46500 | 0.2276 | 0.3050 |
| 1.076 | 59.95 | 47000 | 0.2309 | 0.3047 |
| 1.086 | 60.59 | 47500 | 0.2293 | 0.3047 |
| 1.082 | 61.22 | 48000 | 0.2328 | 0.3027 |
| 1.0714 | 61.86 | 48500 | 0.2290 | 0.3020 |
| 1.0746 | 62.5 | 49000 | 0.2313 | 0.3059 |
| 1.076 | 63.14 | 49500 | 0.2342 | 0.3050 |
| 1.0648 | 63.78 | 50000 | 0.2286 | 0.3025 |
| 1.0586 | 64.41 | 50500 | 0.2338 | 0.3044 |
| 1.0753 | 65.05 | 51000 | 0.2308 | 0.3045 |
| 1.0664 | 65.69 | 51500 | 0.2273 | 0.3009 |
| 1.0739 | 66.33 | 52000 | 0.2298 | 0.3027 |
| 1.0695 | 66.96 | 52500 | 0.2247 | 0.2996 |
| 1.06 | 67.6 | 53000 | 0.2276 | 0.3015 |
| 1.0742 | 68.24 | 53500 | 0.2280 | 0.2974 |
| 1.0618 | 68.88 | 54000 | 0.2291 | 0.2989 |
| 1.062 | 69.52 | 54500 | 0.2302 | 0.2971 |
| 1.0572 | 70.15 | 55000 | 0.2280 | 0.2990 |
| 1.055 | 70.79 | 55500 | 0.2278 | 0.2983 |
| 1.0553 | 71.43 | 56000 | 0.2282 | 0.2991 |
| 1.0509 | 72.07 | 56500 | 0.2261 | 0.2959 |
| 1.0469 | 72.7 | 57000 | 0.2216 | 0.2919 |
| 1.0476 | 73.34 | 57500 | 0.2267 | 0.2989 |
| 1.0494 | 73.98 | 58000 | 0.2260 | 0.2960 |
| 1.0517 | 74.62 | 58500 | 0.2297 | 0.2989 |
| 1.0458 | 75.26 | 59000 | 0.2246 | 0.2923 |
| 1.0382 | 75.89 | 59500 | 0.2255 | 0.2922 |
| 1.0462 | 76.53 | 60000 | 0.2258 | 0.2954 |
| 1.0375 | 77.17 | 60500 | 0.2251 | 0.2929 |
| 1.0332 | 77.81 | 61000 | 0.2277 | 0.2940 |
| 1.0423 | 78.44 | 61500 | 0.2243 | 0.2896 |
| 1.0379 | 79.08 | 62000 | 0.2274 | 0.2928 |
| 1.0398 | 79.72 | 62500 | 0.2237 | 0.2928 |
| 1.0395 | 80.36 | 63000 | 0.2265 | 0.2956 |
| 1.0397 | 80.99 | 63500 | 0.2240 | 0.2920 |
| 1.0262 | 81.63 | 64000 | 0.2244 | 0.2934 |
| 1.0335 | 82.27 | 64500 | 0.2265 | 0.2936 |
| 1.0385 | 82.91 | 65000 | 0.2238 | 0.2928 |
| 1.0289 | 83.55 | 65500 | 0.2219 | 0.2912 |
| 1.0372 | 84.18 | 66000 | 0.2236 | 0.2898 |
| 1.0279 | 84.82 | 66500 | 0.2219 | 0.2902 |
| 1.0325 | 85.46 | 67000 | 0.2240 | 0.2908 |
| 1.0202 | 86.1 | 67500 | 0.2206 | 0.2886 |
| 1.0166 | 86.73 | 68000 | 0.2219 | 0.2886 |
| 1.0259 | 87.37 | 68500 | 0.2235 | 0.2897 |
| 1.0337 | 88.01 | 69000 | 0.2210 | 0.2873 |
| 1.0264 | 88.65 | 69500 | 0.2216 | 0.2882 |
| 1.0231 | 89.29 | 70000 | 0.2223 | 0.2899 |
| 1.0281 | 89.92 | 70500 | 0.2214 | 0.2872 |
| 1.0135 | 90.56 | 71000 | 0.2218 | 0.2868 |
| 1.0291 | 91.2 | 71500 | 0.2209 | 0.2863 |
| 1.0321 | 91.84 | 72000 | 0.2199 | 0.2876 |
| 1.028 | 92.47 | 72500 | 0.2214 | 0.2858 |
| 1.0213 | 93.11 | 73000 | 0.2219 | 0.2875 |
| 1.0261 | 93.75 | 73500 | 0.2232 | 0.2869 |
| 1.0197 | 94.39 | 74000 | 0.2227 | 0.2866 |
| 1.0298 | 95.03 | 74500 | 0.2228 | 0.2868 |
| 1.0192 | 95.66 | 75000 | 0.2230 | 0.2865 |
| 1.0156 | 96.3 | 75500 | 0.2220 | 0.2869 |
| 1.0075 | 96.94 | 76000 | 0.2223 | 0.2866 |
| 1.0201 | 97.58 | 76500 | 0.2219 | 0.2866 |
| 1.0159 | 98.21 | 77000 | 0.2219 | 0.2876 |
| 1.0087 | 98.85 | 77500 | 0.2219 | 0.2873 |
| 1.0159 | 99.49 | 78000 | 0.2223 | 0.2867 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
enelpi/med-electra-small-discriminator | c04a0627bf9463ae42f380385b61a275fbe1c68f | 2021-06-14T22:49:00.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | enelpi | null | enelpi/med-electra-small-discriminator | 29 | null | transformers | 7,243 | Entry not found |
google/tapas-small-finetuned-wikisql-supervised | 64471f58bf6e6817f715e8b8fa08d90193548d1b | 2021-11-29T13:07:06.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikisql",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1709.00103",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-small-finetuned-wikisql-supervised | 29 | 3 | transformers | 7,244 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- wikisql
---
# TAPAS small model fine-tuned on WikiSQL (in a supervised fashion)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_small` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly train these randomly initialized classification heads with the base model on SQA and WikiSQL.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
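As a rough illustration only, usage might look like the sketch below. The table contents and question are made up, and the decoding follows the generic TAPAS API rather than anything specific to this checkpoint.
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-small-finetuned-wikisql-supervised"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# Illustrative table; cell values must be strings for TapasTokenizer
table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
queries = ["How old is Brad Pitt?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Map cell-selection and aggregation logits back to table coordinates
predicted_coords, predicted_agg = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(predicted_coords, predicted_agg)
```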
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors did first convert the WikiSQL dataset into the format of SQA using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup
ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1709-00103,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017},
url = {http://arxiv.org/abs/1709.00103},
archivePrefix = {arXiv},
eprint = {1709.00103},
timestamp = {Mon, 13 Aug 2018 16:48:41 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
hfl/chinese-electra-small-ex-discriminator | 999c15e16cfa6d3deac78d3a57f34d242908ecf4 | 2021-03-03T01:39:26.000Z | [
"pytorch",
"tf",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0"
] | null | false | hfl | null | hfl/chinese-electra-small-ex-discriminator | 29 | 1 | transformers | 7,245 | ---
language:
- zh
license: "apache-2.0"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official ELECTRA code.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
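As a minimal sketch of the note above, the discriminator can be loaded with `ElectraForPreTraining`; the example sentence is made up and the printed values are just per-token "replaced" probabilities.
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("hfl/chinese-electra-small-ex-discriminator")
model = ElectraForPreTraining.from_pretrained("hfl/chinese-electra-small-ex-discriminator")

inputs = tokenizer("我喜欢自然语言处理。", return_tensors="pt")  # illustrative sentence
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token
print(torch.sigmoid(logits))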
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
huggingtweets/elmo_oxygen | 3cbe8fc1b59d9304e3748966891befe60bcd3736 | 2021-05-22T02:54:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/elmo_oxygen | 29 | null | transformers | 7,246 | ---
language: en
thumbnail: https://www.huggingtweets.com/elmo_oxygen/1617790228158/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1358585705165803520/1gRdkOAR_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Elmo 🤖 AI Bot </div>
<div style="font-size: 15px">@elmo_oxygen bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@elmo_oxygen's tweets](https://twitter.com/elmo_oxygen).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 69 |
| Short tweets | 390 |
| Tweets kept | 2781 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ltnqbnk5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elmo_oxygen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gigw5nn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gigw5nn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elmo_oxygen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/insert_name27 | b7803456f2e949c18976ea3d1224bffa8c77e3b3 | 2021-05-22T08:23:40.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/insert_name27 | 29 | null | transformers | 7,247 | ---
language: en
thumbnail: https://www.huggingtweets.com/insert_name27/1617820538616/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1342654008520011783/ELNBkoe__400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Insert 🚩🦮 🤖 AI Bot </div>
<div style="font-size: 15px">@insert_name27 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@insert_name27's tweets](https://twitter.com/insert_name27).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 111 |
| Short tweets | 491 |
| Tweets kept | 2644 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m2d1hmb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @insert_name27's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ajldnpxe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ajldnpxe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/insert_name27')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/xxinnernettexx | 9a8030edee6221ef2a00c4b4138e3bbc77856ec0 | 2022-06-18T22:57:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/xxinnernettexx | 29 | null | transformers | 7,248 | ---
language: en
thumbnail: http://www.huggingtweets.com/xxinnernettexx/1655593074247/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1536840646912315392/XtQhtfTT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">poolboy 𝖀𝖓𝖆𝕱𝖚𝖙𝖚𝖗𝖊 Iɳɳҽɾɳҽƚƚҽ</div>
<div style="text-align: center; font-size: 14px;">@xxinnernettexx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from poolboy 𝖀𝖓𝖆𝕱𝖚𝖙𝖚𝖗𝖊 Iɳɳҽɾɳҽƚƚҽ.
| Data | poolboy 𝖀𝖓𝖆𝕱𝖚𝖙𝖚𝖗𝖊 Iɳɳҽɾɳҽƚƚҽ |
| --- | --- |
| Tweets downloaded | 1556 |
| Retweets | 390 |
| Short tweets | 221 |
| Tweets kept | 945 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kfp4hiy2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @xxinnernettexx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p9fmnjln) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p9fmnjln/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/xxinnernettexx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jkulhanek/augpt-mw-21 | 6940ce051e321ddb4fad4da71c2dc0227cf7cc23 | 2021-05-23T05:58:15.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | jkulhanek | null | jkulhanek/augpt-mw-21 | 29 | null | transformers | 7,249 | Entry not found |
liaad/ud_srl-pt_bertimbau-large | a3a905cc9e2abcb54b11e28e1305e5a0c93875c5 | 2021-09-22T08:56:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"multilingual",
"pt",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"dataset:Universal Dependencies",
"arxiv:2101.01213",
"transformers",
"bert-large-portuguese-cased",
"semantic role labeling",
"finetuned",
"dependency parsing",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/ud_srl-pt_bertimbau-large | 29 | null | transformers | 7,250 | ---
language:
- multilingual
- pt
tags:
- bert-large-portuguese-cased
- semantic role labeling
- finetuned
- dependency parsing
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
- Universal Dependencies
metrics:
- F1 Measure
---
# BERTimbau large fine-tune in Portuguese Universal Dependencies and semantic role labeling
## Model description
This model is the [`neuralmind/bert-large-portuguese-cased`](https://huggingface.co/neuralmind/bert-large-portuguese-cased) fine-tuned first on the Universal Dependencies Portuguese dataset and then fine-tuned on the PropBank.Br data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/ud_srl-pt_bertimbau-large")
model = AutoModel.from_pretrained("liaad/ud_srl-pt_bertimbau-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- The model was trained for only 10 epochs on the Universal Dependencies dataset.
## Training procedure
The model was trained on the Universal Dependencies Portuguese dataset; then on the CoNLL formatted OntoNotes v5.0; then on Portuguese semantic role labeling data (PropBank.Br) using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lighteternal/stsb-xlm-r-greek-transfer | 28e381a9365abcf8ad2c0808532a8b8cf0d48260 | 2021-10-11T21:16:05.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"el",
"arxiv:2004.09813",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | lighteternal | null | lighteternal/stsb-xlm-r-greek-transfer | 29 | null | sentence-transformers | 7,251 | ---
language:
- en
- el
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "Το κινητό έπεσε και έσπασε."
sentences: [
"H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."
]
pipeline_tag: sentence-similarity
license: apache-2.0
---
# Semantic Textual Similarity for the Greek language using Transformers and Transfer Learning
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
We follow a Teacher-Student transfer learning approach described [here](https://www.sbert.net/examples/training/multilingual/README.html) to train an XLM-Roberta-base model on STS using parallel EN-EL sentence pairs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('lighteternal/stsb-xlm-r-greek-transfer')
sentences1 = ['Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.']
sentences2 = ["H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."]
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
#Compute cosine-similarities (clone repo for util functions)
from sentence_transformers import util
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
#Output the pairs with their score
for i in range(len(sentences1)):
print("{} {} Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))
#Outputs:
#Το κινητό έπεσε και έσπασε. H πτώση κατέστρεψε τη συσκευή. Score: 0.6741
#Το κινητό έπεσε και έσπασε. Το αυτοκίνητο έσπασε στα δυο. Score: 0.5067
#Το κινητό έπεσε και έσπασε. Ο υπουργός έπεσε και έσπασε το πόδι του. Score: 0.4548
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('lighteternal/stsb-xlm-r-greek-transfer')
model = AutoModel.from_pretrained('lighteternal/stsb-xlm-r-greek-transfer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
#### Similarity Evaluation on STS.en-el.txt (translated manually for evaluation purposes)
We measure the semantic textual similarity (STS) between sentence pairs in different languages:
| cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
0.834474802920369 | 0.845687403828107 | 0.815895882192263 | 0.81084300966291 | 0.816333562677654 | 0.813879742416394 | 0.7945167996031 | 0.802604238383742 |
#### Translation
We measure the translation accuracy. Given a list of source sentences (for example, 1000 English sentences) and a list of matching target (translated) sentences (for example, 1000 Greek sentences), we check for each sentence pair whether their embeddings are closest under cosine similarity; i.e., for each src_sentences[i] we check whether trg_sentences[i] has the highest similarity out of all target sentences. If so, we count a hit, otherwise an error. This evaluator reports accuracy (higher = better); a minimal sketch of the procedure is given after the table below.
| src2trg | trg2src |
| ----------- | ----------- |
| 0.981 | 0.9775 |
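A minimal sketch of this retrieval-style accuracy computation follows. The sentence lists are placeholders (the real evaluation used 1000 parallel pairs), and the helper names are the standard sentence-transformers utilities rather than the exact evaluation script used for the numbers above.
```python
import numpy as np
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("lighteternal/stsb-xlm-r-greek-transfer")
src_sentences = ["Life is like a box of chocolates."]          # English side (placeholder)
trg_sentences = ["Η ζωή είναι σαν ένα κουτί σοκολατάκια."]      # Greek side (placeholder)

src_emb = model.encode(src_sentences, convert_to_tensor=True)
trg_emb = model.encode(trg_sentences, convert_to_tensor=True)

scores = util.pytorch_cos_sim(src_emb, trg_emb)                 # shape (len(src), len(trg))
# A hit means the aligned target sentence is the nearest neighbour of its source
hits = (scores.argmax(dim=1).cpu().numpy() == np.arange(len(src_sentences))).mean()
print("src2trg accuracy:", hits)
```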
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 135121 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 400, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
## Citing & Authors
Citation info for Greek model: TBD
Based on the transfer learning approach of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813)
|
mbartolo/roberta-large-synqa-ext | 2119a9ff627644a132a5f2f6172d2d74cb2cff41 | 2022-07-25T23:35:51.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:adversarial_qa",
"dataset:mbartolo/synQA",
"dataset:squad",
"arxiv:2002.00293",
"arxiv:2104.08678",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | mbartolo | null | mbartolo/roberta-large-synqa-ext | 29 | null | transformers | 7,252 | ---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/roberta-large-synqa-ext
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 53.2
verified: true
- name: F1
type: f1
value: 64.6266
verified: true
---
# Model Overview
This is a RoBERTa-Large QA Model trained from https://huggingface.co/roberta-large in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator on Wikipedia passages from SQuAD as well as Wikipedia passages external to SQuAD, and then it is trained on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage of fine-tuning.
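A usage sketch with the standard question-answering pipeline is shown below; the context and question are illustrative only and are not taken from the training data.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mbartolo/roberta-large-synqa-ext",
    tokenizer="mbartolo/roberta-large-synqa-ext",
)
result = qa(
    question="What was used to generate the synthetic questions?",
    context="Synthetic adversarial data were generated with a BART-Large question generator "
            "over Wikipedia passages.",
)
print(result["answer"], result["score"])
```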
# Data
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
# Training Process
Approx. 1 training epoch on the synthetic data and 2 training epochs on the manually-curated data.
# Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details. |
monologg/koelectra-base-v2-generator | 2e321e404f956bad94c680e21b050b7f613ca137 | 2021-10-20T16:54:01.000Z | [
"pytorch",
"electra",
"fill-mask",
"ko",
"transformers",
"korean",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | monologg | null | monologg/koelectra-base-v2-generator | 29 | null | transformers | 7,253 | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA v2 (Base Generator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v2-generator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v2-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-generator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-generator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 5084, 16248, 3770, 19059, 29965, 2259, 10431, 5, 3]
```
## Example using ElectraForMaskedLM
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="monologg/koelectra-base-v2-generator",
tokenizer="monologg/koelectra-base-v2-generator"
)
print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token)))
```
|
nguyenthanhasia/VNBertLaw | c24aca1c885d8b50e94108c332bbc46b45f27cbf | 2021-05-20T01:49:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | null | false | nguyenthanhasia | null | nguyenthanhasia/VNBertLaw | 29 | null | transformers | 7,254 | This is Vietnamese Bert Law
|
ravirajoshi/wav2vec2-large-xls-r-300m-marathi | 9fe7c2efa5d649a4f112baae21d8c2cd7d643ac1 | 2022-03-23T18:25:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ravirajoshi | null | ravirajoshi/wav2vec2-large-xls-r-300m-marathi | 29 | null | transformers | 7,255 | ---
language:
- mr
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
model-index:
- name: wav2vec2-large-xls-r-300m-marathi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5656
- Wer: 0.2156
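A hypothetical inference sketch, not part of the original card: the audio path is a placeholder and 16 kHz mono input is assumed.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("ravirajoshi/wav2vec2-large-xls-r-300m-marathi")
model = Wav2Vec2ForCTC.from_pretrained("ravirajoshi/wav2vec2-large-xls-r-300m-marathi")

# "sample_mr.wav" is a placeholder path
speech, sr = torchaudio.load("sample_mr.wav")
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    pred_ids = torch.argmax(model(inputs.input_values).logits, dim=-1)
print(processor.batch_decode(pred_ids))
```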
|
raynardj/keywords-cangtou-chinese-poetry | 69e4343a3f1a06273f52d6724330e7276be6bc5f | 2022-02-21T08:32:16.000Z | [
"pytorch",
"zh",
"generation",
"poetry"
] | null | false | raynardj | null | raynardj/keywords-cangtou-chinese-poetry | 29 | 3 | null | 7,256 | ---
language:
- zh
tags:
- generation
- poetry
widget:
- text: "疆场-思乡-归家-耕织《丘处机》"
---
# Unable to escape cliché after all: here comes the cangtou (acrostic) poetry model
> This is a model that generates classical Chinese poetry with given leading characters and a chosen mood.
## Objectives of this model
* Compose cangtou (acrostic) poems 🎸
* Weave the mood of the supplied keywords into the poem as much as possible 🪁 🌼 ❄️ 🌝
## How it works
This model leans on the core insight of the [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), whose title boils down to **"language models are unsupervised multitask learners"**: all kinds of learning tasks can be arranged as text sequences that encode both the input and the output. If a model can continue **"the derivative of any constant is 0, the cos of 0 is 1,"** with **"adding four 1s gives 4, and the factorial of 4 is 24"**, then it has effectively learned the 24 game. In training, the model only ever does next-token prediction over text, yet it picks up all of these skills along the way.
That is exactly how this poetry model was built: each training example is a sequence of roughly 0 to 10 keywords + the acrostic title + the number of leading characters + the poem itself, with each leading character replaced by the classification token ```[CLS]```.
```
'忍看-窈窕-孤寝-勾带-嫩-黄昏《粉度》『二』[CLS]堞云齐,[CLS]清笳、愁入暮烟林杪。素艳透春,玉骨凄凉,勾带月痕生早。江天苍莽黄昏後,依然是、粉寒香瘦。动追感、西园嫩约,夜深人悄。记得东风窈窕。曾夜踏横斜,醉携娇小。惆怅旧欢,回首俱非,忍看绿笺红豆。香销纸帐人孤寝,相思恨、花还知否。梦回处,霜飞翠楼已晓。'
```
## The inference path is a little fussy, so just copy the code below as-is
### Otherwise the hidden leading characters will not show up
```python
from transformers import (AutoTokenizer, AutoModelForCausalLM)
tokenizer = AutoTokenizer.from_pretrained('raynardj/keywords-cangtou-chinese-poetry')
model = AutoModelForCausalLM.from_pretrained('raynardj/keywords-cangtou-chinese-poetry')
def inference(lead, keywords = []):
"""
    lead: the acrostic lead, e.g. a person's name, usually 2, 3 or 4 characters
    keywords: mood keywords, roughly 0 to 12 keywords work best
"""
leading = f"《{lead}》"
text = "-".join(keywords)+leading
input_ids = tokenizer(text, return_tensors='pt', ).input_ids[:,:-1]
lead_tok = tokenizer(lead, return_tensors='pt', ).input_ids[0,1:-1]
with torch.no_grad():
pred = model.generate(
input_ids,
max_length=256,
num_beams=5,
do_sample=True,
repetition_penalty=2.1,
top_p=.6,
bos_token_id=tokenizer.sep_token_id,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.sep_token_id,
)[0,1:]
    # Replace each [CLS] token (id 101) with the corresponding leading character
mask = (pred==101)
while mask.sum()<len(lead_tok):
lead_tok = lead_tok[:mask.sum()]
while mask.sum()>len(lead_tok):
reversed_lead_tok = lead_tok.flip(0)
lead_tok = torch.cat([
lead_tok, reversed_lead_tok[:mask.sum()-len(lead_tok)]])
pred[mask] = lead_tok
    # Decode token ids back into text
generate = tokenizer.decode(pred, skip_special_tokens=True)
    # Clean up the generated text
generate = generate.replace("》","》\n").replace("。","。\n").replace(" ","")
return generate
```
Thanks to liangtongt for pointing out a bug that could occur when running the inference code.
## Cherry picking
Once you have downloaded the model, feel free to play with it yourself.
In the meantime, here are a few cherry-picked samples 🍒
```python
>>> inference("上海",["高楼","虹光","灯红酒绿","华厦"])
高楼-虹光-灯红酒绿-华厦《上海》
『二』
上台星月明如昼。
海阁珠帘卷画堂。
>>> inference("刘先生",["妆容","思","落花","空镜"])
妆容-思-落花-空镜《刘先生》
『三』
刘郎何事不相逢,先把金尊酒未空。
生意自知人薄命,多情只有月明中。
```
## Other resources for Classical Chinese poetry and prose
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-language search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model, accepts unpunctuated input 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Sentence segmentation / punctuation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry generation 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry) |
raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn | cb8827bc0699381451f35b3d92578509a7585ef7 | 2021-11-30T01:06:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"zh",
"transformers",
"search"
] | feature-extraction | false | raynardj | null | raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn | 29 | 1 | transformers | 7,257 | ---
language:
- zh
tags:
- search
---
# Cross Language Search
## Search classical CN with modern ZH
* In some cases, Classical Chinese feels like another language; I even trained 2 translation models ([1](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient) and [2](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)) to prove this point.
* That's why, when people want to sound learned with their words, we choose to quote our ancestors. It's exactly like Westerners quoting Latin or Shakespeare; the difference is that we have a much bigger pool to choose from.
* This model helps you **find** text within **ancient Chinese** literature, but you can **search with modern Chinese**
# Cross-Language Search
## Search the ancients with today's Chinese
* I don't remember who said it or in which dynasty; I only vaguely remember the gist, yet I can still fuzzily locate the original passage.
* I don't remember the original wording, only its meaning in modern Chinese, and I'd like to find it so I can quote it.
* I'm writing an article, I have a point to make, and I want to try my luck and see whether the ancients ever said something similar.
* I simply want to read Classical Chinese more efficiently.
The recommended usage pipeline is below. Of course, there are plenty of frameworks and engines built around cosine-distance search, so pick whichever suits your case.
Install the packages
```shell
pip install -Uqq unpackai
pip install -Uqq SentenceTransformer
```
A function that searches with a query sentence
```python
from unpackai.interp import CosineSearch
from sentence_transformers import SentenceTransformer
import pandas as pd
import numpy as np
TAG = "raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn"
encoder = SentenceTransformer(TAG)
# all_lines is a list of all your sentences
# all_lines can be a whole book split into sentences, or many books at once
all_lines = ["句子1","句子2",...]
vec = encoder.encode(all_lines, batch_size=32, show_progress_bar=True)
# cosine-distance search helper
cosine = CosineSearch(vec)
def search(text):
enc = encoder.encode(text) # encode the search key
order = cosine(enc) # distance array
sentence_df = pd.DataFrame({"sentence":np.array(all_lines)[order[:5]]})
return sentence_df
```
After splitting the Records of the Grand Historian (Shiji) into sentences, search results look like this:
```python
>>> search("他是一个很慷慨的人")
```
```
sentence
0 季布者,楚人也。为气任侠,有名於楚。
1 董仲舒为人廉直。
2 大将军为人仁善退让,以和柔自媚於上,然天下未有称也。
3 勃为人木彊敦厚,高帝以为可属大事。
4 石奢者,楚昭王相也。坚直廉正,无所阿避。
```
```python
>>> search("进入军营,必须缓缓牵着马骑")
```
```
sentence
0 壁门士吏谓从属车骑曰:将军约,军中不得驱驰。
1 起之为将,与士卒最下者同衣食。卧不设席,行不骑乘,亲裹赢粮,与士卒分劳苦。
2 既出,沛公留车骑,独骑一马,与樊哙等四人步从,从间道山下归走霸上军,而使张良谢项羽。
3 顷之,上行出中渭桥,有一人从穚下走出,乘舆马惊。
4 元狩四年春,上令大将军青、骠骑将军去病将各五万骑,步兵转者踵军数十万,而敢力战深入之士皆属骠骑。
```
## Other resources
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-language search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model, accepts unpunctuated input 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Sentence segmentation / punctuation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry generation 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry) |
sangrimlee/mt5-small-multitask | 0d53905ad122a560eba31fdd02d54b5787b01779 | 2021-03-30T00:50:38.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sangrimlee | null | sangrimlee/mt5-small-multitask | 29 | 1 | transformers | 7,258 | Entry not found |
tr3cks/3LabelsSentimentAnalysisSpanish | 46595d7d1536cb13ca16d0afcea9ae018528c95e | 2021-05-20T08:02:41.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tr3cks | null | tr3cks/3LabelsSentimentAnalysisSpanish | 29 | null | transformers | 7,259 | Entry not found |
wangfan/jdt-fin-roberta-wwm-large | 4177b56fa7ed52b1cd990d1de117364834213fd8 | 2021-11-08T07:03:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:finance",
"transformers",
"roberta-wwm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | wangfan | null | wangfan/jdt-fin-roberta-wwm-large | 29 | null | transformers | 7,260 | ---
language: zh
tags:
- roberta-wwm
license: apache-2.0
datasets:
- finance
---
Pre-trained language models are used more and more frequently across our businesses. To get better results on tasks in financial scenarios, we are releasing the jdt-fin-roberta-wwm model.
## Models
* `base` model: 12-layer, 768-hidden, 12-heads, 110M parameters
| Model alias | Corpus | Download |
| - | - | - |
| fin-roberta-wwm | financial corpus | - |
## Quick start
### With Huggingface-Transformers
With [Huggingface-Transformers](https://github.com/huggingface/transformers), the models above can be loaded easily.
```
tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME")
```
**Note: all models in this repository must be loaded with BertTokenizer and BertModel. Do not use RobertaTokenizer/RobertaModel!**
The `MODEL_NAME` values are listed below:
| Model name | MODEL_NAME |
| - | - |
| fin-roberta-wwm | wangfan/jdt-fin-roberta-wwm |
|
wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner | c0b95e058842b3d0b8d01401a489fd866a8d8d04 | 2021-05-20T09:13:44.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner | 29 | 1 | transformers | 7,261 | Entry not found |
armageddon/albert-xxlarge-v2-squad2-covid-qa-deepset | 30a4c1a72050b836f37438235b631d30d36d57fd | 2022-03-02T11:01:43.000Z | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | armageddon | null | armageddon/albert-xxlarge-v2-squad2-covid-qa-deepset | 29 | null | transformers | 7,262 | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: albert-xxlarge-v2-squad2-covid-qa-deepset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-squad2-covid-qa-deepset
This model is a fine-tuned version of [mfeb/albert-xxlarge-v2-squad2](https://huggingface.co/mfeb/albert-xxlarge-v2-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
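A usage sketch with the standard question-answering pipeline is shown below; the question and context are illustrative only and are not from the COVID-QA dataset.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="armageddon/albert-xxlarge-v2-squad2-covid-qa-deepset",
)
print(qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by SARS-CoV-2, a coronavirus first identified in 2019.",
))
```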
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
facebook/m2m100-12B-avg-10-ckpt | 497102e8ab6b8a32356aa2c524902a9295d0314d | 2022-05-26T22:25:25.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"ast",
"az",
"ba",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"ilo",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"lb",
"lg",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"oc",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"th",
"tl",
"tn",
"tr",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2010.11125",
"transformers",
"m2m100-12B",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | facebook | null | facebook/m2m100-12B-avg-10-ckpt | 29 | null | transformers | 7,263 | ---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
tags:
- m2m100-12B
---
# M2M100 12B (average of last 10 checkpoints)
M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model that can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-avg-10-ckpt")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-avg-10-ckpt")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
title={Beyond English-Centric Multilingual Machine Translation},
author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
year={2020},
eprint={2010.11125},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
docto/Docto-Bot | 30b7a6d837618f834f576a9ad7bcaae536d68ee4 | 2022-03-25T04:33:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | docto | null | docto/Docto-Bot | 29 | 1 | transformers | 7,264 | ---
license: afl-3.0
---
# Docto Bot
## Usage (HuggingFace Transformers)
```
pip install -U transformers
```
```python
import random
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("docto/Docto-Bot")
model = AutoModelForCausalLM.from_pretrained("docto/Docto-Bot")
special_token = '<|endoftext|>'
prompt_text = 'Question: I am having fever\nAnswer:'
#prompt_text = f'Question: {userinput}\nAnswer:'
encoded_prompt = tokenizer.encode(prompt_text,
add_special_tokens = False,
return_tensors = 'pt')
output_sequences = model.generate(
input_ids = encoded_prompt,
max_length = 700,
temperature = 0.9,
top_k = 20,
top_p = 0.9,
repetition_penalty = 1,
do_sample = True,
num_return_sequences = 4
)
result = tokenizer.decode(random.choice(output_sequences))
result = result[result.index("Answer: "):result.index(special_token)]
print(result[8:])
```
## Training Data
The Docto-Bot was trained on [Medical Question/Answer dataset](https://github.com/LasseRegin/medical-question-answer-data) |
MyOrg123/tinparadox-job_search | 4ce9a0b91965ac44e75032ca81f6c3f6fb4863bb | 2022-06-13T02:09:29.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | MyOrg123 | null | MyOrg123/tinparadox-job_search | 29 | null | transformers | 7,265 | Entry not found |
ikram54/autotrain-harassement-675420038 | fa34bc1f855b2be8b0cba5f1dac392f50d11b14a | 2022-03-27T18:08:30.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:ikram54/autotrain-data-harassement",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | ikram54 | null | ikram54/autotrain-harassement-675420038 | 29 | null | transformers | 7,266 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ikram54/autotrain-data-harassement
co2_eq_emissions: 2.6332836871905054
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 675420038
- CO2 Emissions (in grams): 2.6332836871905054
## Validation Metrics
- Loss: 0.8747465014457703
- Accuracy: 0.7085201793721974
- Macro F1: 0.579743989078862
- Micro F1: 0.7085201793721974
- Weighted F1: 0.6913786522271296
- Macro Precision: 0.5669375905888698
- Micro Precision: 0.7085201793721974
- Weighted Precision: 0.6760144007300164
- Macro Recall: 0.5940655209452201
- Micro Recall: 0.7085201793721974
- Weighted Recall: 0.7085201793721974
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ikram54/autotrain-harassement-675420038
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
gooohjy/suicidal-bert | 070651c9e7d85ebc49cb747d88c677702278478b | 2022-03-30T12:17:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | gooohjy | null | gooohjy/suicidal-bert | 29 | null | transformers | 7,267 | # Suicidal-BERT
This text classification model predicts whether a sequence of words is suicidal (1) or non-suicidal (0).
## Data
The model was trained on the [Suicide and Depression Dataset](https://www.kaggle.com/nikhileswarkomati/suicide-watch) obtained from Kaggle. The dataset was scraped from Reddit and consists of 232,074 rows equally distributed between 2 classes - suicide and non-suicide.
## Parameters
The model was fine-tuned for 1 epoch with a batch size of 6 and a learning rate of 0.00001. Due to limited computing resources and time, we were unable to scale up the number of epochs or the batch size.
## Performance
The model has achieved the following results after fine-tuning on the aforementioned dataset:
- Accuracy: 0.9757
- Recall: 0.9669
- Precision: 0.9701
- F1 Score: 0.9685
## How to Use
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("gooohjy/suicidal-bert")
model = AutoModel.from_pretrained("gooohjy/suicidal-bert")
```
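For classification output rather than raw embeddings, the checkpoint (which is tagged `text-classification`) can also be loaded through the pipeline API. The snippet below is a minimal sketch, not from the original authors; the label names may surface as `LABEL_0`/`LABEL_1` rather than 0/1.
```python
from transformers import pipeline
# Minimal sketch: assumes the repo ships a sequence-classification head (it is tagged text-classification).
classifier = pipeline("text-classification", model="gooohjy/suicidal-bert")
print(classifier("I had a really good day today."))
# e.g. [{'label': 'LABEL_0', 'score': ...}]  -> label 0 = non-suicidal per the description above
```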
## Resources
For more resources, including the source code, please refer to the GitHub repository [gohjiayi/suicidal-text-detection](https://github.com/gohjiayi/suicidal-text-detection/). |
shpotes/codegen-350M-mono | 15acb80fd284e5977aefd7aa5df00fe10e21a493 | 2022-06-22T06:02:10.000Z | [
"pytorch",
"codegen",
"text-generation",
"transformers",
"license:bsd-3-clause"
] | text-generation | false | shpotes | null | shpotes/codegen-350M-mono | 29 | 3 | transformers | 7,268 | ---
license: bsd-3-clause
---
# Overview
The CodeGen model was proposed by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong from Salesforce Research.
The abstract from the paper is the following:
Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We plan to make the training library JaxFormer including checkpoints available as open source.
# How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("shpotes/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("shpotes/codegen-350M-mono", trust_remote_code=True)
# Example prompt and sampling settings (illustrative values -- adjust as needed)
context = "def hello_world():"
num_return_sequences = 1
temp = 0.2
top_p = 0.95
max_length_sample = 128
pad_token_id = tokenizer.pad_token_id or tokenizer.eos_token_id
input_ids = tokenizer(
    context,
    truncation=True,
    padding=True,
    return_tensors='pt',
).input_ids
input_ids_len = input_ids.shape[1]
with torch.no_grad():
    tokens = model.generate(
        input_ids,
        do_sample=True,
        num_return_sequences=num_return_sequences,
        temperature=temp,
        max_length=input_ids_len + max_length_sample,
        top_p=top_p,
        pad_token_id=pad_token_id,
        use_cache=True,
    )
    text = tokenizer.batch_decode(tokens[:, input_ids_len:, ...])
print(text[0])
``` |
edwardjross/xlm-roberta-base-finetuned-recipe-all | 2e510e1cd082577bf2aaba6112dbd1a2657879e0 | 2022-04-09T13:19:55.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:2004.12184",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-recipe-all | 29 | null | transformers | 7,269 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-recipe-all
results: []
widget:
- text: "1 sheet of frozen puff pastry (thawed)"
- text: "1/2 teaspoon fresh thyme, minced"
- text: "2-3 medium tomatoes"
- text: "1 petit oignon rouge"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-recipe-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the recipe ingredient [NER dataset](https://github.com/cosylabiiit/recipe-knowledge-mining) from the paper [A Named Entity Based Approach to Model Recipes](https://arxiv.org/abs/2004.12184) (using both the `gk` and `ar` datasets).
It achieves the following results on the evaluation set:
- Loss: 0.1169
- F1: 0.9672
On the test set it obtains an F1 of 0.9615, slightly above the CRF used in the paper.
## Model description
Predicts the tag of each token in an ingredient string; a usage sketch follows the table below.
| Tag | Significance | Example |
| --- | --- | --- |
| NAME | Name of Ingredient | salt, pepper |
| STATE | Processing State of Ingredient. | ground, thawed |
| UNIT | Measuring unit(s). | gram, cup |
| QUANTITY | Quantity associated with the unit(s). | 1, 1 1/2 , 2-4 |
| SIZE | Portion sizes mentioned. | small, large |
| TEMP | Temperature applied prior to cooking. | hot, frozen |
| DF (DRY/FRESH) | Fresh otherwise as mentioned. | dry, fresh |
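A minimal inference sketch (not part of the original card): the token-classification pipeline with an aggregation strategy merges subword pieces so each word receives a single tag, which addresses the subtoken caveat listed under the limitations below.
```python
from transformers import pipeline
# Minimal sketch: aggregation_strategy="first" propagates the first subtoken's tag to the whole word.
ner = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-recipe-all",
    aggregation_strategy="first",
)
print(ner("1/2 teaspoon fresh thyme, minced"))
# e.g. entity_group values such as QUANTITY, UNIT, DF, NAME, STATE
```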
## Intended uses & limitations
* Only trained on ingredient strings.
* Tags subtokens; tag should be propagated to whole word
* Works best with pre-tokenisation splitting of symbols (such as parentheses) and numbers (e.g. 50g -> 50 g)
* Typically only detects the first ingredient if there are multiple.
* Only trained on two American English data sources
* Tags TEMP and DF have very few training data.
## Training and evaluation data
Both the `ar` (AllRecipes.com) and `gk` (FOOD.com) datasets obtained from the TSVs from the authors' [repository](https://github.com/cosylabiiit/recipe-knowledge-mining).
## Training procedure
It follows the overall procedure from Chapter 4 of [Natural Language Processing with Transformers](https://www.oreilly.com/library/view/natural-language-processing/9781098103231/) by Tunstall, von Werra, and Wolf.
See the [training notebook](https://github.com/EdwardJRoss/nlp_transformers_exercises/blob/master/notebooks/ch4-ner-recipe-stanford-crf.ipynb) for details.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2529 | 1.0 | 331 | 0.1303 | 0.9592 |
| 0.1164 | 2.0 | 662 | 0.1224 | 0.9640 |
| 0.0904 | 3.0 | 993 | 0.1156 | 0.9671 |
| 0.0585 | 4.0 | 1324 | 0.1169 | 0.9672 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sgugger/sharded-gpt-j-6B | dd565b6c037aec5477f98b47531e70c87f1dc021 | 2022-05-11T20:28:51.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | sgugger | null | sgugger/sharded-gpt-j-6B | 29 | null | transformers | 7,270 | Entry not found |
Farshid/distilbert-base-uncased-finetuned-financial_phrasebank | b01edc64edffda7d20c284955844ed42818a520a | 2022-06-26T18:57:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:financial_phrasebank",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Farshid | null | Farshid/distilbert-base-uncased-finetuned-financial_phrasebank | 29 | null | transformers | 7,271 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-financial_phrasebank
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_75agree
metrics:
- name: Accuracy
type: accuracy
value: 0.944015444015444
- name: F1
type: f1
value: 0.9437595528186435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-financial_phrasebank
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5533
- Accuracy: 0.9440
- F1: 0.9438
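A minimal usage sketch (not part of the original card): the checkpoint loads with the standard text-classification pipeline; note that the exported label names may be generic (`LABEL_0`/`LABEL_1`/`LABEL_2`) rather than negative/neutral/positive.
```python
from transformers import pipeline
# Minimal sketch: label names depend on the exported config and may be LABEL_0/1/2.
classifier = pipeline(
    "text-classification",
    model="Farshid/distilbert-base-uncased-finetuned-financial_phrasebank",
)
print(classifier("The company's quarterly revenue rose 12% year over year."))
```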
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0001 | 1.0 | 19 | 0.5826 | 0.9324 | 0.9334 |
| 0.011 | 2.0 | 38 | 0.5072 | 0.9382 | 0.9380 |
| 0.0007 | 3.0 | 57 | 0.5496 | 0.9382 | 0.9383 |
| 0.0004 | 4.0 | 76 | 0.5190 | 0.9421 | 0.9420 |
| 0.0 | 5.0 | 95 | 0.5611 | 0.9382 | 0.9388 |
| 0.0027 | 6.0 | 114 | 0.5734 | 0.9421 | 0.9414 |
| 0.0001 | 7.0 | 133 | 0.5333 | 0.9421 | 0.9424 |
| 0.0051 | 8.0 | 152 | 0.5648 | 0.9382 | 0.9390 |
| 0.0002 | 9.0 | 171 | 0.4934 | 0.9382 | 0.9385 |
| 0.005 | 10.0 | 190 | 0.5202 | 0.9344 | 0.9342 |
| 0.0146 | 11.0 | 209 | 0.4558 | 0.9479 | 0.9480 |
| 0.0002 | 12.0 | 228 | 0.4870 | 0.9421 | 0.9424 |
| 0.0049 | 13.0 | 247 | 0.4936 | 0.9440 | 0.9445 |
| 0.0007 | 14.0 | 266 | 0.5596 | 0.9363 | 0.9371 |
| 0.0009 | 15.0 | 285 | 0.4776 | 0.9479 | 0.9474 |
| 0.0 | 16.0 | 304 | 0.4737 | 0.9440 | 0.9438 |
| 0.0 | 17.0 | 323 | 0.4762 | 0.9479 | 0.9478 |
| 0.0 | 18.0 | 342 | 0.4826 | 0.9479 | 0.9478 |
| 0.0002 | 19.0 | 361 | 0.5324 | 0.9402 | 0.9395 |
| 0.0 | 20.0 | 380 | 0.5188 | 0.9498 | 0.9498 |
| 0.0 | 21.0 | 399 | 0.5327 | 0.9459 | 0.9461 |
| 0.0 | 22.0 | 418 | 0.5355 | 0.9459 | 0.9461 |
| 0.0 | 23.0 | 437 | 0.5369 | 0.9459 | 0.9461 |
| 0.0 | 24.0 | 456 | 0.5464 | 0.9440 | 0.9442 |
| 0.0 | 25.0 | 475 | 0.5468 | 0.9440 | 0.9442 |
| 0.0 | 26.0 | 494 | 0.5466 | 0.9440 | 0.9442 |
| 0.0 | 27.0 | 513 | 0.5471 | 0.9440 | 0.9442 |
| 0.0 | 28.0 | 532 | 0.5472 | 0.9440 | 0.9442 |
| 0.0 | 29.0 | 551 | 0.5481 | 0.9440 | 0.9442 |
| 0.0 | 30.0 | 570 | 0.5434 | 0.9459 | 0.9461 |
| 0.0 | 31.0 | 589 | 0.5433 | 0.9479 | 0.9479 |
| 0.0 | 32.0 | 608 | 0.5442 | 0.9479 | 0.9479 |
| 0.0 | 33.0 | 627 | 0.5456 | 0.9479 | 0.9479 |
| 0.0 | 34.0 | 646 | 0.5467 | 0.9479 | 0.9479 |
| 0.0 | 35.0 | 665 | 0.5482 | 0.9459 | 0.9461 |
| 0.0 | 36.0 | 684 | 0.5493 | 0.9459 | 0.9461 |
| 0.0 | 37.0 | 703 | 0.5497 | 0.9479 | 0.9479 |
| 0.0 | 38.0 | 722 | 0.5500 | 0.9479 | 0.9479 |
| 0.0 | 39.0 | 741 | 0.5517 | 0.9459 | 0.9461 |
| 0.0 | 40.0 | 760 | 0.5526 | 0.9459 | 0.9461 |
| 0.0 | 41.0 | 779 | 0.5517 | 0.9479 | 0.9479 |
| 0.0 | 42.0 | 798 | 0.5533 | 0.9479 | 0.9479 |
| 0.0 | 43.0 | 817 | 0.5555 | 0.9459 | 0.9461 |
| 0.0 | 44.0 | 836 | 0.5565 | 0.9459 | 0.9461 |
| 0.0 | 45.0 | 855 | 0.5571 | 0.9459 | 0.9461 |
| 0.0 | 46.0 | 874 | 0.5575 | 0.9459 | 0.9461 |
| 0.0 | 47.0 | 893 | 0.5593 | 0.9459 | 0.9461 |
| 0.0 | 48.0 | 912 | 0.5604 | 0.9459 | 0.9461 |
| 0.0 | 49.0 | 931 | 0.5611 | 0.9459 | 0.9461 |
| 0.0 | 50.0 | 950 | 0.5615 | 0.9459 | 0.9461 |
| 0.0 | 51.0 | 969 | 0.5621 | 0.9459 | 0.9461 |
| 0.0 | 52.0 | 988 | 0.5622 | 0.9459 | 0.9461 |
| 0.0 | 53.0 | 1007 | 0.5628 | 0.9459 | 0.9461 |
| 0.0 | 54.0 | 1026 | 0.5629 | 0.9479 | 0.9479 |
| 0.0 | 55.0 | 1045 | 0.5639 | 0.9479 | 0.9479 |
| 0.0 | 56.0 | 1064 | 0.5652 | 0.9459 | 0.9461 |
| 0.0 | 57.0 | 1083 | 0.5658 | 0.9459 | 0.9461 |
| 0.0 | 58.0 | 1102 | 0.5664 | 0.9459 | 0.9461 |
| 0.0 | 59.0 | 1121 | 0.5472 | 0.9498 | 0.9498 |
| 0.0 | 60.0 | 1140 | 0.5428 | 0.9517 | 0.9517 |
| 0.0 | 61.0 | 1159 | 0.5433 | 0.9517 | 0.9517 |
| 0.0 | 62.0 | 1178 | 0.5452 | 0.9517 | 0.9517 |
| 0.0 | 63.0 | 1197 | 0.5473 | 0.9517 | 0.9517 |
| 0.0 | 64.0 | 1216 | 0.5481 | 0.9517 | 0.9517 |
| 0.0 | 65.0 | 1235 | 0.5488 | 0.9517 | 0.9517 |
| 0.0 | 66.0 | 1254 | 0.5494 | 0.9517 | 0.9517 |
| 0.0 | 67.0 | 1273 | 0.5499 | 0.9517 | 0.9517 |
| 0.0 | 68.0 | 1292 | 0.5504 | 0.9517 | 0.9517 |
| 0.0 | 69.0 | 1311 | 0.5509 | 0.9517 | 0.9517 |
| 0.0 | 70.0 | 1330 | 0.5514 | 0.9517 | 0.9517 |
| 0.0 | 71.0 | 1349 | 0.5519 | 0.9517 | 0.9517 |
| 0.0 | 72.0 | 1368 | 0.5535 | 0.9517 | 0.9517 |
| 0.0 | 73.0 | 1387 | 0.5546 | 0.9517 | 0.9517 |
| 0.0 | 74.0 | 1406 | 0.5551 | 0.9517 | 0.9517 |
| 0.0 | 75.0 | 1425 | 0.5555 | 0.9517 | 0.9517 |
| 0.0 | 76.0 | 1444 | 0.5551 | 0.9517 | 0.9517 |
| 0.0 | 77.0 | 1463 | 0.5549 | 0.9517 | 0.9517 |
| 0.0 | 78.0 | 1482 | 0.5551 | 0.9517 | 0.9517 |
| 0.0 | 79.0 | 1501 | 0.5617 | 0.9479 | 0.9479 |
| 0.0 | 80.0 | 1520 | 0.5647 | 0.9459 | 0.9459 |
| 0.0026 | 81.0 | 1539 | 0.5970 | 0.9402 | 0.9404 |
| 0.0005 | 82.0 | 1558 | 0.5256 | 0.9459 | 0.9455 |
| 0.0005 | 83.0 | 1577 | 0.5474 | 0.9479 | 0.9478 |
| 0.0006 | 84.0 | 1596 | 0.6191 | 0.9363 | 0.9369 |
| 0.0 | 85.0 | 1615 | 0.6396 | 0.9324 | 0.9332 |
| 0.0 | 86.0 | 1634 | 0.6396 | 0.9324 | 0.9332 |
| 0.0 | 87.0 | 1653 | 0.5488 | 0.9479 | 0.9479 |
| 0.0008 | 88.0 | 1672 | 0.5376 | 0.9479 | 0.9476 |
| 0.0 | 89.0 | 1691 | 0.5383 | 0.9479 | 0.9476 |
| 0.0 | 90.0 | 1710 | 0.5384 | 0.9479 | 0.9476 |
| 0.0 | 91.0 | 1729 | 0.5384 | 0.9479 | 0.9476 |
| 0.0 | 92.0 | 1748 | 0.5385 | 0.9479 | 0.9476 |
| 0.0 | 93.0 | 1767 | 0.5385 | 0.9479 | 0.9476 |
| 0.0009 | 94.0 | 1786 | 0.5523 | 0.9440 | 0.9438 |
| 0.0 | 95.0 | 1805 | 0.5566 | 0.9440 | 0.9438 |
| 0.0 | 96.0 | 1824 | 0.5570 | 0.9440 | 0.9438 |
| 0.0 | 97.0 | 1843 | 0.5570 | 0.9440 | 0.9438 |
| 0.0 | 98.0 | 1862 | 0.5554 | 0.9440 | 0.9438 |
| 0.0 | 99.0 | 1881 | 0.5533 | 0.9440 | 0.9438 |
| 0.0 | 100.0 | 1900 | 0.5533 | 0.9440 | 0.9438 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gzomer/claim-spotter | 82f22aed62638f5ce174249dcc626245cbb2bb77 | 2022-04-12T14:09:09.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gzomer | null | gzomer/claim-spotter | 29 | null | transformers | 7,272 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: claim-spotter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# claim-spotter
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3266
- F1: 0.8709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3697 | 1.0 | 830 | 0.2728 | 0.8589 |
| 0.1475 | 2.0 | 1660 | 0.3266 | 0.8709 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Voicelab/sbert-large-cased-pl | e2179e369a3f72d9b32bcd5004fd9e0597693f94 | 2022-04-13T13:26:50.000Z | [
"pytorch",
"bert",
"feature-extraction",
"pl",
"dataset:Wikipedia",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"license:cc-by-4.0"
] | sentence-similarity | false | Voicelab | null | Voicelab/sbert-large-cased-pl | 29 | 3 | sentence-transformers | 7,273 | ---
license: cc-by-4.0
language:
- pl
datasets:
- Wikipedia
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
widget:
- source_sentence: "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
sentences:
- "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
- "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "
- "Samica o długości ciała 10–11 mm, szczoteczki na tylnych nogach służące do zbierania pyłku oraz włoski na końcu odwłoka jaskrawo pomarańczowoczerwone. "
example_title: "Uczenie maszynowe"
---
# SHerbert large - Polish SentenceBERT
SentenceBERT is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. Training was based on the original paper [Siamese BERT models for the task of semantic textual similarity (STS)](https://arxiv.org/abs/1908.10084) with a slight modification of how the training data was used. The goal of the model is to generate different embeddings based on the semantic and topic similarity of the given text.
> Semantic textual similarity analyzes how similar two pieces of text are.
Read more about how the model was prepared in our [blog post](https://voicelab.ai/blog/).
The base trained model is a Polish HerBERT. HerBERT is a BERT-based Language Model. For more details, please refer to: "HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish".
# Corpus
The model was trained solely on [Wikipedia](https://dumps.wikimedia.org/).
# Tokenizer
As in the original HerBERT implementation, the training dataset was tokenized into subwords using a character-level byte-pair encoding (CharBPETokenizer) with a vocabulary size of 50k tokens. The tokenizer itself was trained with the `tokenizers` library.
We kindly encourage you to use the Fast version of the tokenizer, namely HerbertTokenizerFast.
# Usage
```python
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import pairwise
sbert = AutoModel.from_pretrained("Voicelab/sbert-large-cased-pl")
tokenizer = AutoTokenizer.from_pretrained("Voicelab/sbert-large-cased-pl")
s0 = "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
s1 = "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
s2 = "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "
tokens = tokenizer([s0, s1, s2],
padding=True,
truncation=True,
return_tensors='pt')
x = sbert(tokens["input_ids"],
          tokens["attention_mask"]).pooler_output.detach().numpy()
# similarity between sentences s0 and s1
print(pairwise.cosine_similarity(x[0:1], x[1:2]))  # Result: 0.8011128
# similarity between sentences s0 and s2
print(pairwise.cosine_similarity(x[0:1], x[2:3]))  # Result: 0.58822715
```
# Results
| Model | Accuracy | Source |
|--------------------------|------------|----------------------------------------------------------|
| SBERT-WikiSec-base (EN) | 80.42% | https://arxiv.org/abs/1908.10084 |
| SBERT-WikiSec-large (EN) | 80.78% | https://arxiv.org/abs/1908.10084 |
| sbert-base-cased-pl | 82.31% | https://huggingface.co/Voicelab/sbert-base-cased-pl |
| **sbert-large-cased-pl** | **84.42%** | **https://huggingface.co/Voicelab/sbert-large-cased-pl** |
# License
CC BY 4.0
# Citation
If you use this model, please cite the following paper:
# Authors
The model was trained by NLP Research Team at Voicelab.ai.
You can contact us [here](https://voicelab.ai/contact/). |
Jeevesh8/feather_berts_44 | fe33588d8b3c6b63041a35464261b6334ac5a1d6 | 2022-04-20T13:31:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_44 | 29 | null | transformers | 7,274 | Entry not found |
adityay1221/Xegho.30.4 | a17a5a08e3b8ad1bf772f7aad1223e96767df461 | 2022-04-23T12:07:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | adityay1221 | null | adityay1221/Xegho.30.4 | 29 | null | transformers | 7,275 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Xegho.30.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Xegho.30.4
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1814
- Bleu: 87.4768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 121
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.19 | 100 | 1.2331 | 23.9598 |
| No log | 2.38 | 200 | 0.7943 | 39.0191 |
| No log | 3.57 | 300 | 0.5889 | 42.0816 |
| No log | 4.76 | 400 | 0.4595 | 47.6986 |
| 1.0058 | 5.95 | 500 | 0.3801 | 49.9630 |
| 1.0058 | 7.14 | 600 | 0.3209 | 50.4290 |
| 1.0058 | 8.33 | 700 | 0.2848 | 51.1531 |
| 1.0058 | 9.52 | 800 | 0.2544 | 54.0631 |
| 1.0058 | 10.71 | 900 | 0.2338 | 56.3553 |
| 0.3559 | 11.9 | 1000 | 0.2224 | 59.7317 |
| 0.3559 | 13.1 | 1100 | 0.2110 | 62.2114 |
| 0.3559 | 14.29 | 1200 | 0.2060 | 63.4936 |
| 0.3559 | 15.48 | 1300 | 0.1994 | 63.7621 |
| 0.3559 | 16.67 | 1400 | 0.1959 | 63.3415 |
| 0.2423 | 17.86 | 1500 | 0.1932 | 63.7683 |
| 0.2423 | 19.05 | 1600 | 0.1898 | 64.2757 |
| 0.2423 | 20.24 | 1700 | 0.1901 | 64.2757 |
| 0.2423 | 21.43 | 1800 | 0.1875 | 64.1890 |
| 0.2423 | 22.62 | 1900 | 0.1852 | 63.8513 |
| 0.2051 | 23.81 | 2000 | 0.1837 | 64.4531 |
| 0.2051 | 25.0 | 2100 | 0.1829 | 64.4531 |
| 0.2051 | 26.19 | 2200 | 0.1818 | 64.6303 |
| 0.2051 | 27.38 | 2300 | 0.1817 | 64.6303 |
| 0.2051 | 28.57 | 2400 | 0.1816 | 65.0213 |
| 0.186 | 29.76 | 2500 | 0.1814 | 65.0213 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.0
- Tokenizers 0.12.1
|
brennan-richards/gpt2-finetuned-academic-topics | dea749089cacc33939f9c07acdcd0540ff1449a2 | 2022-05-09T23:09:57.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"transformers",
"generated_from_keras_callback",
"license:mit",
"model-index"
] | text-generation | false | brennan-richards | null | brennan-richards/gpt2-finetuned-academic-topics | 29 | null | transformers | 7,276 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: gpt2-finetuned-academic-topics
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-academic-topics
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of sequences of science, technology, engineering, and mathematics academic topics/tags that users have listed on their CiteULike or Google Scholar profiles.
Please contact [email protected] for questions or inquiries.
It achieves the following results on the evaluation set:
- Train Loss: 3.3216
- Validation Loss: 3.2215
- Epoch: 4
## Model description
Given a sequence of topics, e.g. "machine learning, deep learning, chemistry, evolution", the model will continue the sequence, effectively recommending/generating new topics that might be of interest, as in the sketch below.
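A minimal generation sketch (the prompt and sampling settings below are illustrative, not from the original card):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="brennan-richards/gpt2-finetuned-academic-topics")
# Seed the model with a comma-separated list of topics and let it extend the list.
prompt = "machine learning, deep learning, chemistry, evolution,"
print(generator(prompt, max_length=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```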
## Intended uses & limitations
The model is not guaranteed to generate a real topic or even a real word/words as output.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.7873 | 4.2950 | 0 |
| 4.1032 | 3.8203 | 1 |
| 3.7363 | 3.5614 | 2 |
| 3.4999 | 3.3740 | 3 |
| 3.3216 | 3.2215 | 4 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
laurens88/finetuning-crypto-tweet-sentiment-test | 365858bf06b448cfd5bfe0e1cd4b0dbebe5b586c | 2022-05-20T11:14:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | laurens88 | null | laurens88/finetuning-crypto-tweet-sentiment-test | 29 | null | transformers | 7,277 | ---
tags:
- generated_from_trainer
model-index:
- name: finetuning-crypto-tweet-sentiment-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-crypto-tweet-sentiment-test
This model is a fine-tuned version of [finiteautomata/bertweet-base-sentiment-analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_99 | c59922a76a6a1b1037560ba830b488e24810eea2 | 2022-05-11T03:23:53.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_99 | 29 | null | transformers | 7,278 | Entry not found |
anas-awadalla/bert-mini-finetuned-squad | 79be3229ea6b1b42526664e1bca7254d3e974855 | 2022-05-21T08:23:12.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-mini-finetuned-squad | 29 | null | transformers | 7,279 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-mini-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mini-finetuned-squad
This model is a fine-tuned version of [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KoichiYasuoka/deberta-large-japanese-aozora | 140b9ff05491fa0ccc602ce5c90b4cfd52443566 | 2022-07-23T14:43:55.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-aozora | 29 | 3 | transformers | 7,280 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-large-japanese-aozora
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts. You can fine-tune `deberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-aozora-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora")
```
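For a quick check of the masked-language-modelling head, the fill-mask pipeline can be used directly (a minimal sketch using the widget sentence above; the top predictions will vary):
```py
from transformers import pipeline
fill_mask=pipeline("fill-mask",model="KoichiYasuoka/deberta-large-japanese-aozora")
print(fill_mask("日本に着いたら[MASK]を訪ねなさい。")[:3])  # top-3 candidate fillings
```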
## Reference
Koichi Yasuoka: [Dependency Parsing of NINJAL Long-Unit Words with an Aozora Bunko DeBERTa Model](http://hdl.handle.net/2433/275409), Tōyōgaku e no Konpyūta Riyō (Computers in Oriental Studies), 35th Research Seminar (July 2022), pp. 29–43.
|
anchit48/fine-tuned-sentiment-analysis-customer-feedback | 40c7d4585e27a1d8a270b6be7be2ff453e1d5895 | 2022-06-03T07:51:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | anchit48 | null | anchit48/fine-tuned-sentiment-analysis-customer-feedback | 29 | null | transformers | 7,281 | Entry not found |
ChainYo/segformer-sidewalk | f178724b7ff7654b99f3a5a2ee4fc3c98981e403 | 2022-06-13T19:08:23.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"arxiv:2105.15203",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
] | image-segmentation | false | ChainYo | null | ChainYo/segformer-sidewalk | 29 | null | transformers | 7,282 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
---
# SegFormer (b0-sized) model fine-tuned on sidewalk-semantic dataset
SegFormer model fine-tuned on segments/sidewalk-semantic at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to run semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor(reduce_labels=True)
model = SegformerForSemanticSegmentation.from_pretrained("ChainYo/segformer-sidewalk")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
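Continuing from the snippet above, the logits come out at 1/4 of the input resolution; a common post-processing step (a sketch, not from the original card) is to upsample them to the image size and take the per-pixel argmax:
```python
import torch
# Upsample logits to the original (height, width) and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_seg = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class indices
```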
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). |
eslamxm/mt5-base-finetuned-Spanish | e4146c3b83111a17a44ed6964e7ef048d41776a1 | 2022-06-15T05:13:08.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"es",
"spanish",
"abstractive summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mt5-base-finetuned-Spanish | 29 | null | transformers | 7,283 | ---
license: apache-2.0
tags:
- summarization
- mt5
- es
- spanish
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mt5-base-finetuned-Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-Spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1727
- Rouge-1: 28.11
- Rouge-2: 12.09
- Rouge-l: 24.62
- Gen Len: 18.73
- Bertscore: 72.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
waboucay/camembert-large-finetuned-repnum_wl-rua_wl_3_classes | b1c348d2270b74ce091266a3556be7e69e113b72 | 2022-06-20T07:41:39.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-repnum_wl-rua_wl_3_classes | 29 | null | transformers | 7,284 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 77.3 | 77.3 |
| test | 78.0 | 77.9 | |
BellaAndBria/distilbert-base-uncased-finetuned-emotion | b19bf3ffa0e4a3270168163bec9227273521b1f1 | 2022-06-21T06:02:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | BellaAndBria | null | BellaAndBria/distilbert-base-uncased-finetuned-emotion | 29 | null | transformers | 7,285 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9425
- name: F1
type: f1
value: 0.942387859809443
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1611
- Accuracy: 0.9425
- F1: 0.9424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1358 | 1.0 | 250 | 0.1765 | 0.9345 | 0.9340 |
| 0.0885 | 2.0 | 500 | 0.1588 | 0.937 | 0.9371 |
| 0.0727 | 3.0 | 750 | 0.1611 | 0.9425 | 0.9424 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
oussama/layoutlmv3-finetuned-invoice | 4d631e7e074f1e4530fde21bbd6d2f89012b60bc | 2022-06-24T02:57:27.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"dataset:sroie",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | oussama | null | oussama/layoutlmv3-finetuned-invoice | 29 | null | transformers | 7,286 | ---
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
args: sroie
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.0967 | 0.958 | 0.9716 | 0.9648 | 0.9956 |
| No log | 4.0 | 200 | 0.0222 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 6.0 | 300 | 0.0171 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 8.0 | 400 | 0.0136 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1307 | 10.0 | 500 | 0.0117 | 0.964 | 0.9777 | 0.9708 | 0.9962 |
| 0.1307 | 12.0 | 600 | 0.0099 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1307 | 14.0 | 700 | 0.0094 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1307 | 16.0 | 800 | 0.0071 | 0.9918 | 0.9838 | 0.9878 | 0.9983 |
| 0.1307 | 18.0 | 900 | 0.0026 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0089 | 20.0 | 1000 | 0.0018 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0089 | 22.0 | 1100 | 0.0016 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0089 | 24.0 | 1200 | 0.0015 | 1.0 | 0.9980 | 0.9990 | 0.9998 |
| 0.0089 | 26.0 | 1300 | 0.0015 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0089 | 28.0 | 1400 | 0.0014 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0025 | 30.0 | 1500 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 32.0 | 1600 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 34.0 | 1700 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 36.0 | 1800 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 38.0 | 1900 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0019 | 40.0 | 2000 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
suvrobaner/distilbert-base-uncased-finetuned-emotion-en-tweets | 5e11246f8925fdc98b25a3bf82fbda6e3e81107f | 2022-07-13T15:40:37.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:emotion",
"transformers",
"license:apache-2.0"
] | text-classification | false | suvrobaner | null | suvrobaner/distilbert-base-uncased-finetuned-emotion-en-tweets | 29 | null | transformers | 7,287 | ---
language: en
tags:
- text-classification
- pytorch
license: apache-2.0
datasets:
- emotion
---
```python
from transformers import pipeline
model_id = "suvrobaner/distilbert-base-uncased-finetuned-emotion-en-tweets"
classifier = pipeline("text-classification", model = model_id)
custom_tweet = "I saw a movie today and it was really good."
preds = classifier(custom_tweet, return_all_scores=True)
labels = ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
import pandas as pd
preds_df = pd.DataFrame(preds[0])
import matplotlib.pyplot as plt
plt.bar(labels, 100 * preds_df["score"], color='C0')
plt.title(f'"{custom_tweet}"')
plt.ylabel("Class probability (%)")
plt.show()
```
|
nvidia/stt_zh_citrinet_1024_gamma_0_25 | 86cb0f87f63bebc66227cf4cf0764e046692d9df | 2022-06-28T05:08:12.000Z | [
"nemo",
"zh",
"dataset:aishell_2",
"arxiv:2104.01721",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Citrinet",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"Riva",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | nvidia | null | nvidia/stt_zh_citrinet_1024_gamma_0_25 | 29 | 1 | nemo | 7,288 | ---
language:
- zh
library_name: nemo
datasets:
- aishell_2
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Citrinet
- pytorch
- NeMo
- hf-asr-leaderboard
- Riva
license: cc-by-4.0
model-index:
- name: stt_zh_citrinet_1024_gamma_0_25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Dev iOS
type: aishell_2
config: ios
split: dev
args:
language: zh
metrics:
- name: Dev CER
type: cer
value: 4.8
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Test iOS
type: aishell_2
config: ios
split: test
args:
language: zh
metrics:
- name: Test CER
type: cer
value: 5.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Dev Android
type: aishell_2
config: android
split: dev
args:
language: zh
metrics:
- name: Dev CER
type: cer
value: 5.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Test Android
type: aishell_2
config: android
split: test
args:
language: zh
metrics:
- name: Test CER
type: cer
value: 5.5
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Dev Mic
type: aishell_2
config: mic
split: dev
args:
language: zh
metrics:
- name: Dev CER
type: cer
value: 5.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Test Mic
type: aishell_2
config: mic
split: test
args:
language: zh
metrics:
- name: Test CER
type: cer
value: 5.5
---
# NVIDIA Streaming Citrinet 1024 (zh)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
This model utilizes a character encoding scheme, and transcribes text in the standard character set that is provided in the AIShell-2 Mandarin corpus.
It is a non-autoregressive "large" variant of Citrinet, with around 140 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune, or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("nvidia/stt_zh_citrinet_1024_gamma_0_25")
```
### Transcribing using Python
First, let's get a sample of spoken Mandarin Chinese.
Then simply do:
```
asr_model.transcribe(['<Path of audio file(s)>'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_zh_citrinet_1024_gamma_0_25" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Citrinet is a non-autoregressive model [1] for automatic speech recognition that uses CTC loss/decoding instead of a transducer. You may find more details on this model here: [Citrinet Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet).
## Training
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on the following dataset of Mandarin speech:
- AIShell 2
Note: older versions of the model may have been trained on a smaller set of datasets.
## Performance
The list of the available models in this collection is shown in the following table. Performance is reported in terms of Character Error Rate (CER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | Dev iOS | Test iOS | Dev Android | Test Android | Dev Mic | Test Mic | Train Dataset |
|---------|-----------|-----------------|---------|----------|-------------|--------------|---------|----------|---------------|
| 1.0.0 | Character | 5000+ | 4.8 | 5.1 | 5.2 | 5.5 | 5.2 | 5.5 | AIShell 2 |
While deploying with [NVIDIA Riva](https://developer.nvidia.com/riva), you can combine this model with external language models to further improve recognition accuracy.
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition](https://arxiv.org/abs/2104.01721)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
southmost/ru-gpt-dy | 7700bdac7361e9b5ef4f735706be7c358c9bd47c | 2022-07-02T22:15:20.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"text",
"nlp",
"generation",
"beginner",
"license:gpl"
] | text-generation | false | southmost | null | southmost/ru-gpt-dy | 29 | null | transformers | 7,289 | ---
language:
- en
thumbnail: "url to a thumbnail used in social sharing"
tags:
- text
- nlp
- generation
- beginner
license: "gpl"
---
# ru-gpt-dy
**This is the first model I fine-tuned.**
It is GPT-NEO fine-tuned on around 36,000 of my tweets. It’s a generation model. Input -> output. It’s just okay, but it’s mine. :-)
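A quick way to try it (a minimal sketch using the standard text-generation pipeline; the prompt is just an example):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="southmost/ru-gpt-dy")
print(generator("just walked along the levee and", max_length=50, do_sample=True)[0]["generated_text"])
```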
*Compute for fine-tune by RunPod.io*
***Made with love in Brownsville, Texas*** |
Neha2608/pegasus-samsum | 883b01f337e1cb7219857f3aacd7cff72c7a537e | 2022-07-03T11:47:37.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Neha2608 | null | Neha2608/pegasus-samsum | 29 | null | transformers | 7,290 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
f00d/Multilingual-MiniLM-L12-H384-MLM-finetuned-wikipedia_bn | e25afa200bb1430752ebd9b011cdbd872a956f81 | 2022-07-08T11:34:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | f00d | null | f00d/Multilingual-MiniLM-L12-H384-MLM-finetuned-wikipedia_bn | 29 | null | transformers | 7,291 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Multilingual-MiniLM-L12-H384-MLM-finetuned-wikipedia_bn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multilingual-MiniLM-L12-H384-MLM-finetuned-wikipedia_bn
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kuttersn/gpt2-finetuned-redditComments | 8bff8fe467fb6a261b365ed99fe7cd5138196005 | 2022-07-14T01:38:25.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | kuttersn | null | kuttersn/gpt2-finetuned-redditComments | 29 | null | transformers | 7,292 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-redditComments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-redditComments
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8418
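A hedged usage sketch; the prompt and sampling settings are illustrative assumptions, not values from this card.

```python
from transformers import pipeline

# Sketch only: generate Reddit-style comment continuations.
generator = pipeline("text-generation", model="kuttersn/gpt2-finetuned-redditComments")

prompt = "Honestly, the best part of that game was"
outputs = generator(prompt, max_length=50, do_sample=True, top_p=0.95, num_return_sequences=2)
for out in outputs:
    print(out["generated_text"])
```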
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9535 | 1.0 | 4320 | 3.8888 |
| 3.8832 | 2.0 | 8640 | 3.8523 |
| 3.8708 | 3.0 | 12960 | 3.8418 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nvidia/stt_es_conformer_transducer_large | e62ef7c0d82c1e50513e0db40e751fc328c08f7e | 2022-07-13T17:39:38.000Z | [
"nemo",
"es",
"dataset:Fisher",
"dataset:VoxPopuli",
"dataset:facebook/multilingual_librispeech",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2005.08100",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | nvidia | null | nvidia/stt_es_conformer_transducer_large | 29 | null | nemo | 7,293 | ---
language:
- es
library_name: nemo
datasets:
- Fisher
- VoxPopuli
- facebook/multilingual_librispeech
- mozilla-foundation/common_voice_7_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
model-index:
- name: stt_es_conformer_transducer_large
results:
- task:
type: Automatic Speech Recognition
name: speech-recognition
dataset:
name: common-voice-7-0-6
type: mozilla-foundation/common_voice_7_0
config: es
split: dev
args:
language: es
metrics:
- name: Dev WER
type: wer
value: 4.6
- task:
type: Automatic Speech Recognition
name: speech-recognition
dataset:
name: common-voice-7-0-6
type: mozilla-foundation/common_voice_7_0
config: es
split: test
args:
language: es
metrics:
- name: Test WER
type: wer
value: 5.2
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: spanish
split: dev
args:
language: es
metrics:
- name: Dev WER
type: wer
value: 2.7
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: spanish
split: test
args:
language: es
metrics:
- name: Test WER
type: wer
value: 3.2
---
# NVIDIA Conformer-Transducer Large (es)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes speech into the lowercase Spanish alphabet, including spaces, and was trained on a composite dataset comprising 1340 hours of Spanish speech. It is a "large" variant of Conformer-Transducer, with around 120 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_es_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_es_conformer_transducer_large" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 Hz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
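If your audio is not already 16 kHz mono, a hedged preprocessing sketch with `torchaudio` (an assumption; this card does not prescribe a preprocessing tool) might look like:

```python
import torchaudio

# Sketch only: convert an arbitrary wav file to 16 kHz mono before transcription.
waveform, sample_rate = torchaudio.load("input.wav")
if waveform.shape[0] > 1:                      # downmix multi-channel audio to mono
    waveform = waveform.mean(dim=0, keepdim=True)
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
torchaudio.save("input_16k_mono.wav", waveform, 16000)

# asr_model.transcribe(['input_16k_mono.wav'])  # using the model loaded earlier
```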
## Model Architecture
The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for automatic speech recognition that uses Transducer loss/decoding instead of CTC loss. You can find more details about this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising 1340 hours of Spanish speech:
- Mozilla Common Voice 7.0 (Spanish) - 289 hours after data cleaning
- Multilingual LibriSpeech (Spanish) - 801 hours after data cleaning
- Voxpopuli transcribed subset (Spanish) - 110 hours after data cleaning
- Fisher dataset (Spanish) - 140 hours after data cleaning
## Performance
The available models in this collection are listed in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | MCV 7.0 Dev | MCV 7.0 Test | MLS Dev | MLS Test | Voxpopuli Dev | Voxpopuli Test | Fisher Dev | Fisher Test| Train Dataset |
|---------|-----------------------|-----------------|-------------|--------------|---------|----------|---------------|----------------|------------|-------------|-----------------|
| 1.8.0 | SentencePiece Unigram | 1024 | 4.6 | 5.2 | 2.7 | 3.2 | 4.7 | 6.0 | 14.7 | 14.8 | NeMo ASRSET 2.0 |
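For reference, a hedged sketch of how WER could be computed for your own evaluation set using the `jiwer` package; `jiwer` is an assumption, as the card does not name a scoring tool, and the transcripts below are placeholders.

```python
import jiwer

# Sketch only: plug in your reference transcripts and the model's hypotheses.
references = ["hola cómo estás", "buenos días a todos"]
hypotheses = ["hola como estas", "buenos días a todos"]  # e.g. from asr_model.transcribe(...)

print("WER (%):", 100 * jiwer.wer(references, hypotheses))
```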
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
domenicrosati/t5-paraphrase-paws-msrp-opinosis-finetuned-parasci | afe89949a269ace083eab9c999425806a1331c7b | 2022-07-15T16:24:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"paraphrasing",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | domenicrosati | null | domenicrosati/t5-paraphrase-paws-msrp-opinosis-finetuned-parasci | 29 | null | transformers | 7,294 | ---
license: apache-2.0
tags:
- paraphrasing
- generated_from_trainer
model-index:
- name: t5-paraphrase-paws-msrp-opinosis-finetuned-parasci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-paraphrase-paws-msrp-opinosis-finetuned-parasci
This model is a fine-tuned version of [ceshine/t5-paraphrase-paws-msrp-opinosis](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis) on an unknown dataset.
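A hedged usage sketch; the `paraphrase:` prefix follows the convention of the base `ceshine/t5-paraphrase-*` models and, like the example sentence and beam settings, is an assumption here.

```python
from transformers import pipeline

# Sketch only: generate paraphrases of a scientific-style sentence.
paraphraser = pipeline(
    "text2text-generation",
    model="domenicrosati/t5-paraphrase-paws-msrp-opinosis-finetuned-parasci",
)

sentence = "paraphrase: The results demonstrate that the proposed method outperforms the baseline."
outputs = paraphraser(sentence, num_return_sequences=3, num_beams=5, max_length=64)
for out in outputs:
    print(out["generated_text"])
```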
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-mnli | 77bd58a2f15e48da12d530c5e046c034fce0dc15 | 2022-07-15T02:42:53.000Z | [
"pytorch",
"pixel",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-mnli | 29 | null | transformers | 7,295 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-mnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE MNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 15000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
domenicrosati/pegasus-paraphrase-finetuned-parasci | 53567a767682328951d08f37ed6026a3a0e0f851 | 2022-07-17T12:07:15.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"paraphrasing",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | domenicrosati | null | domenicrosati/pegasus-paraphrase-finetuned-parasci | 29 | null | transformers | 7,296 | ---
tags:
- paraphrasing
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-paraphrase-finetuned-parasci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-paraphrase-finetuned-parasci
This model is a fine-tuned version of [domenicrosati/pegasus-paraphrase-finetuned-parasci](https://huggingface.co/domenicrosati/pegasus-paraphrase-finetuned-parasci) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.8956 | 1.0 | 28227 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
google/ncsnpp-ffhq-256 | 3a2a14b6226883d6ce5458738898d989dcc343eb | 2022-07-21T15:00:00.000Z | [
"diffusers",
"arxiv:2011.13456",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ncsnpp-ffhq-256 | 29 | null | diffusers | 7,297 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Score-Based Generative Modeling through Stochastic Differential Equations (SDE)
**Paper**: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456)
**Authors**: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
**Abstract**:
*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
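For orientation, the forward and reverse-time SDEs the abstract refers to can be written (following the paper's notation) as

$$\mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w, \qquad \mathrm{d}x = \big[f(x, t) - g(t)^2\,\nabla_x \log p_t(x)\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{w},$$

where $\nabla_x \log p_t(x)$ is the score of the perturbed data distribution, approximated by the trained network.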
## Inference
*SDE* models can use **continuous** noise schedulers such as:
- [scheduling_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py)
for inference.
See the following code:
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
model_id = "google/ncsnpp-ffhq-256"
# load model and scheduler
sde_ve = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = sde_ve()["sample"]
# save image
image[0].save("sde_ve_generated_image.png")
```
Please take a look at [pipeline_score_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py)
for more details on how to write your own denoising loop.
For more information generally on how to use `diffusers` for inference, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Samples
Four sample images generated by the model are available on the original model card. |
adit94/question_roi | 28b2950bcd3368e317a5ad51dccf3d6f6178e896 | 2022-07-24T00:43:37.000Z | [
"pytorch",
"detr",
"object-detection",
"transformers"
] | object-detection | false | adit94 | null | adit94/question_roi | 29 | null | transformers | 7,298 | Entry not found |
adamnik/bert-causality-baseline | fa82699976d8b47f30641e32c781d993cad1b80f | 2022-07-24T15:21:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | adamnik | null | adamnik/bert-causality-baseline | 29 | null | transformers | 7,299 | ---
license: mit
---
|