Dataset schema (column: type, observed range):
- modelId: string (length 4–112)
- sha: string (length 40)
- lastModified: string (length 24)
- tags: sequence
- pipeline_tag: string (29 classes)
- private: bool (1 class)
- author: string (length 2–38)
- config: null
- id: string (length 4–112)
- downloads: float64 (0–36.8M)
- likes: float64 (0–712)
- library_name: string (17 classes)
- __index_level_0__: int64 (0–38.5k)
- readme: string (length 0–186k)
junzai/bert_finetuning_test
3398e5a8acc8d9def42ee63b09f220af6f0ffd99
2021-05-19T20:56:52.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
junzai
null
junzai/bert_finetuning_test
3
null
transformers
21,500
Entry not found
junzai/demo
450bcb1ad7a3279e01548fdf57424f18979263af
2022-02-23T08:22:06.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
junzai
null
junzai/demo
3
null
transformers
21,501
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: bert_finetuning_test results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8284313725490197 - name: F1 type: f1 value: 0.8817567567567567 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_finetuning_test This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4023 - Accuracy: 0.8284 - F1: 0.8818 - Combined Score: 0.8551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.11.0
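The card above does not include a usage snippet; the following is a minimal sketch (not from the original card) for scoring a sentence pair with this MRPC paraphrase fine-tune. The example sentences are illustrative and the label names come from the model configuration.

```python
# Minimal sketch for the MRPC (paraphrase) fine-tune described above.
# The example sentence pair is illustrative; label names come from the model config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "junzai/demo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "The storm caused major damage to the coastal town.",
    "Severe weather led to significant destruction along the coast.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```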
kamivao/autonlp-cola_gram-208681
b6d0d29ea4fa754904c5e9cd39bf5a0d1833a9cd
2021-05-21T12:43:57.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:kamivao/autonlp-data-cola_gram", "transformers", "autonlp" ]
text-classification
false
kamivao
null
kamivao/autonlp-cola_gram-208681
3
null
transformers
21,502
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - kamivao/autonlp-data-cola_gram --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 208681 ## Validation Metrics - Loss: 0.37569838762283325 - Accuracy: 0.8365019011406845 - Precision: 0.8398058252427184 - Recall: 0.9453551912568307 - AUC: 0.9048838797814208 - F1: 0.8894601542416453 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-cola_gram-208681 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
kangnichaluo/mnli-2
7beccd52f28c5df320b67a6db716c25b5f7242a2
2021-05-25T11:40:02.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
kangnichaluo
null
kangnichaluo/mnli-2
3
null
transformers
21,503
Learning rate: 3e-5; training epochs: 3; batch size: 64; seed: 0; model: bert-base-uncased. Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
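A minimal usage sketch (not part of the original note) for the two-way NLI setup described above; the mapping of output indices to entailment/not-entailment is an assumption and should be checked against `model.config.id2label`.

```python
# Minimal sketch for two-way NLI with this checkpoint; the label order
# (entailment vs. not-entailment) is an assumption -- check model.config.id2label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "kangnichaluo/mnli-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the two classes
```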
kevinzyz/chinese-bert-wwm-ext-finetuned-cola
b296af42cd0973cdc07ef36293094595cecab646
2021-11-19T03:13:39.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
kevinzyz
null
kevinzyz/chinese-bert-wwm-ext-finetuned-cola
3
null
transformers
21,504
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: chinese-bert-wwm-ext-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chinese-bert-wwm-ext-finetuned-cola This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5747 - Matthews Correlation: 0.4085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.5824 | 1.0 | 66375 | 0.5746 | 0.4083 | | 0.5824 | 2.0 | 66376 | 0.5747 | 0.4085 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.7.1 - Datasets 1.15.1 - Tokenizers 0.10.3
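The card above gives no usage example; a minimal sketch follows, assuming the repository ships a tokenizer alongside the fine-tuned weights. Reading the labels as CoLA-style acceptability judgments is an assumption based on the task name.

```python
# Minimal sketch: score a Chinese sentence with the fine-tuned classifier.
# Assumptions: the repo includes a tokenizer; labels follow the CoLA-style
# acceptable/unacceptable convention (check model.config.id2label).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kevinzyz/chinese-bert-wwm-ext-finetuned-cola",
)
print(classifier("这个句子的语法是正确的。"))  # "This sentence is grammatically correct."
```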
khizon/bert-unreliable-news-eng
bbd24e12412feb01576b7ce823e1db57c109509d
2022-01-15T07:04:33.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
khizon
null
khizon/bert-unreliable-news-eng
3
null
transformers
21,505
# Unreliable News Classifier (English) Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap of news sources between the three sets. This model used the pre-trained weights of `bert-base-cased` as a starting point and was able to achieve 84% accuracy on the test set. For more details: [Github](https://github.com/khizon/CS284_final_project)
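A minimal usage sketch (not part of the original card); the example headline is illustrative and the label names depend on the model configuration.

```python
# Minimal sketch: classify a news snippet with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="khizon/bert-unreliable-news-eng")
print(classifier("Scientists discover a new species of deep-sea fish near the Mariana Trench."))
```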
kloon99/KML_Eula_generate_v1
dff9bbe2ff6adca5e708f5e3ccde5a12004bc081
2021-11-03T10:07:54.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
kloon99
null
kloon99/KML_Eula_generate_v1
3
null
transformers
21,506
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: trained_model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_model2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.9.1 - Datasets 1.14.0 - Tokenizers 0.10.3
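The card above lists only the training setup; a minimal generation sketch follows, assuming standard text-generation usage for this distilgpt2 fine-tune. The EULA-style prompt and decoding settings are purely illustrative.

```python
# Minimal sketch: generate a continuation with the fine-tuned distilgpt2 checkpoint.
# The prompt is illustrative; decoding settings are arbitrary defaults.
from transformers import pipeline

generator = pipeline("text-generation", model="kloon99/KML_Eula_generate_v1")
print(generator("This End User License Agreement", max_length=60)[0]["generated_text"])
```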
kobkrit/wangchanberta-ner
0436450b4c0116fcb1b4f0e556a02492c10d393b
2022-02-14T12:20:43.000Z
[ "pytorch", "camembert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
kobkrit
null
kobkrit/wangchanberta-ner
3
null
transformers
21,507
Entry not found
korca/bae-roberta-base-mrpc-5
bd6994003a358d37e735f0e64c9ccd7f996b581f
2022-02-04T16:12:41.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
korca
null
korca/bae-roberta-base-mrpc-5
3
null
transformers
21,508
Entry not found
korca/bae-roberta-base-sst2
85669196e132306131e0545ac99c725b76e6945e
2022-02-02T07:34:38.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
korca
null
korca/bae-roberta-base-sst2
3
null
transformers
21,509
Entry not found
korca/textfooler-roberta-base-mrpc
407702b02f81e9c49beaf6dec999c5f94206e9a8
2022-01-31T15:24:32.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
korca
null
korca/textfooler-roberta-base-mrpc
3
null
transformers
21,510
Entry not found
korca/textfooler-roberta-base-rte-5
f73f1b007fd884296951e024565b67e0c8e6cd37
2022-02-04T18:45:10.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
korca
null
korca/textfooler-roberta-base-rte-5
3
null
transformers
21,511
Entry not found
korca/textfooler-roberta-base-rte
070a6a9311b34fcdfabd4ac1c1a5344e42013df9
2022-01-31T15:34:43.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
korca
null
korca/textfooler-roberta-base-rte
3
null
transformers
21,512
Entry not found
kornwtp/unsup-consert-base
c5edbae562b9f925067ebab7b1959219ab29738b
2021-12-25T04:59:06.000Z
[ "pytorch" ]
null
false
kornwtp
null
kornwtp/unsup-consert-base
3
null
null
21,513
Entry not found
krevas/finance-koelectra-base-generator
08c9a3a2dadff69e61a77280d493cefc72e8f173
2020-12-11T21:48:30.000Z
[ "pytorch", "electra", "fill-mask", "ko", "transformers", "autotrain_compatible" ]
fill-mask
false
krevas
null
krevas/finance-koelectra-base-generator
3
null
transformers
21,514
--- language: ko --- # 📈 Financial Korean ELECTRA model Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-generator`) > ELECTRA is a new method for self-supervised language representation learning. It can be used to > pre-train transformer networks using relatively little compute. ELECTRA models are trained to > distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to > the discriminator of a GAN. More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub. ## Stats The current version of the model is trained on financial news data from Naver news. The final training corpus has a size of 25GB and 2.3B tokens. This model was trained as a cased model on a TITAN RTX for 500k steps. ## Usage ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="krevas/finance-koelectra-base-generator", tokenizer="krevas/finance-koelectra-base-generator" ) print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다.")) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
krimo11/bert-finetuned-ner
46f9c0ec58996c973b46332337148a15cdbc0317
2021-12-29T15:07:45.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
krimo11
null
krimo11/bert-finetuned-ner
3
null
transformers
21,515
Entry not found
krlng/sts-GBERT-cross-encoder
620ac4c992119527d36541769491b87e7c1457af
2021-09-07T15:06:19.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
krlng
null
krlng/sts-GBERT-cross-encoder
3
null
transformers
21,516
Entry not found
ks15/distilbert-base-uncased-finetuned-cola
0b41a13be1bb5197bb957eda1a8aaf630f0d691e
2022-02-09T21:06:35.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
ks15
null
ks15/distilbert-base-uncased-finetuned-cola
3
null
transformers
21,517
Entry not found
ksmcg/name
60e50463d1e980b05d5d7a13609c07f64ca2cd8e
2021-08-23T13:26:51.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
false
ksmcg
null
ksmcg/name
3
null
transformers
21,518
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue model_index: - name: name results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # name This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
describeai/gemini-small
ba5967d6b32d1c579f5396833726a137093e6012
2022-05-10T06:00:56.000Z
[ "pytorch", "t5", "text2text-generation", "en", "transformers", "Explain code", "Code Summarization", "Summarization", "license:mit", "autotrain_compatible" ]
text2text-generation
false
describeai
null
describeai/gemini-small
3
null
transformers
21,519
--- language: en tags: - Explain code - Code Summarization - Summarization license: mit --- # Gemini For an in-depth understanding of our model and methods, please see our blog [here](https://www.describe-ai.com/gemini) ## Model description Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing/explaining short to medium code snippets in: - Python - Javascript (mostly vanilla JS, however, it can handle frameworks like React as well) - Java - Ruby - Go And outputs a description in English. ## Intended uses & limitations Gemini without any additional fine-tuning is capable of explaining code in a sentence or two and typically performs best in Python and Javascript. We recommend using Gemini for simple code explanation, documentation, or producing more synthetic data to improve its explanations. ### How to use You can use this model directly with a pipeline for Text2Text generation, as shown below: ```python from transformers import pipeline, set_seed summarizer = pipeline('text2text-generation', model='describeai/gemini-small') code = "print('hello world!')" response = summarizer(code, max_length=100, num_beams=3) print("Summarized code: " + response[0]['generated_text']) ``` Which should yield something along the lines of: ``` Summarized code: The following code is greeting the world. ``` ### Model sizes - Gemini: 770 Million Parameters - Gemini-Small (this repo): 220 Million Parameters ### Limitations Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect that with more training data this could be circumvented, producing better results. ### About Us At Describe.ai, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While it is a long path, we plan to contribute our findings and our API to the Open Source community.
lemon234071/ct5-small
a5e5453d1ee2c46c866e35838b4d392574ac0e3b
2021-06-23T15:13:23.000Z
[ "pytorch", "jax", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
lemon234071
null
lemon234071/ct5-small
3
null
transformers
21,520
Entry not found
lewtun/bert-base-uncased-finetuned-imdb
de61cb95f41d147a8e6c94dd4d4685b3dc5ac08a
2021-09-28T20:45:38.000Z
[ "pytorch", "tensorboard", "bert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
lewtun
null
lewtun/bert-base-uncased-finetuned-imdb
3
null
transformers
21,521
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: bert-base-uncased-finetuned-imdb results: - task: name: Masked Language Modeling type: fill-mask dataset: name: imdb type: imdb args: plain_text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-imdb This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.0284 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2244 | 1.0 | 958 | 2.0726 | | 2.1537 | 2.0 | 1916 | 2.0381 | | 2.1183 | 3.0 | 2874 | 2.0284 | ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.1+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
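A minimal fill-mask sketch (not part of the original card); the example sentence is illustrative.

```python
# Minimal sketch: query the masked-language-modeling head of the IMDB fine-tune.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="lewtun/bert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], prediction["score"])
```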
lewtun/bert-base-uncased-finetuned-squad-v1
76975553d05ac4a1cc576dc188a0b65649887f5e
2021-05-19T21:25:25.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
lewtun
null
lewtun/bert-base-uncased-finetuned-squad-v1
3
null
transformers
21,522
Entry not found
lewtun/distilbert-base-uncased-finetuned-imdb
c89681206885be4af92e10be6d51d08db2e33465
2021-11-12T15:08:55.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
lewtun
null
lewtun/distilbert-base-uncased-finetuned-imdb
3
null
transformers
21,523
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7106 | 1.0 | 157 | 2.4854 | | 2.5716 | 2.0 | 314 | 2.4161 | | 2.5408 | 3.0 | 471 | 2.4454 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
lgris/distilxlsr_bp_8-12
51040e5bd768fd6fed9e47d9bf471441cf54ed9e
2021-12-30T00:37:53.000Z
[ "pytorch", "wav2vec2", "feature-extraction", "pt", "arxiv:2110.01900", "transformers", "speech", "license:apache-2.0" ]
feature-extraction
false
lgris
null
lgris/distilxlsr_bp_8-12
3
null
transformers
21,524
--- language: pt tags: - speech license: apache-2.0 --- # DistilXLSR-53 for BP [DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900) Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee **Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small to perform such task (the performance might not be so good as the [original work](https://arxiv.org/abs/2110.01900)). **Abstract** Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech. # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
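A minimal feature-extraction sketch (not part of the original card), assuming the repository provides a Wav2Vec2-style feature extractor config; the one-second silent array stands in for real 16 kHz audio.

```python
# Minimal sketch: extract frame-level representations from 16 kHz audio.
# Assumption: the repo ships a Wav2Vec2 feature extractor; the input array is a placeholder.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_name = "lgris/distilxlsr_bp_8-12"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 second of silence at 16 kHz
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```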
lgris/wav2vec2-xls-r-300m-gn-cv8
62f2fdfe6886b7dd00b8c523b5e7dc1321832087
2022-03-24T11:54:03.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "gn", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
lgris
null
lgris/wav2vec2-xls-r-300m-gn-cv8
3
null
transformers
21,525
--- language: - gn license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - gn - robust-speech-event - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-xls-r-300m-gn-cv8 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: pt metrics: - name: Test WER type: wer value: 69.05 - name: Test CER type: cer value: 14.7 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 args: gn metrics: - name: Test WER type: wer value: 69.05 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-gn-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9392 - Wer: 0.7033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 20.0601 | 5.54 | 100 | 5.1622 | 1.0 | | 3.7052 | 11.11 | 200 | 3.2869 | 1.0 | | 3.3275 | 16.65 | 300 | 3.2162 | 1.0 | | 3.2984 | 22.22 | 400 | 3.1638 | 1.0 | | 3.1111 | 27.76 | 500 | 2.5541 | 1.0 | | 2.238 | 33.32 | 600 | 1.2198 | 0.9616 | | 1.5284 | 38.86 | 700 | 0.9571 | 0.8593 | | 1.2735 | 44.43 | 800 | 0.8719 | 0.8363 | | 1.1269 | 49.97 | 900 | 0.8334 | 0.7954 | | 1.0427 | 55.54 | 1000 | 0.7700 | 0.7749 | | 1.0152 | 61.11 | 1100 | 0.7747 | 0.7877 | | 0.943 | 66.65 | 1200 | 0.7151 | 0.7442 | | 0.9132 | 72.22 | 1300 | 0.7224 | 0.7289 | | 0.8397 | 77.76 | 1400 | 0.7354 | 0.7059 | | 0.8577 | 83.32 | 1500 | 0.7285 | 0.7263 | | 0.7931 | 88.86 | 1600 | 0.7863 | 0.7084 | | 0.7995 | 94.43 | 1700 | 0.7562 | 0.6880 | | 0.799 | 99.97 | 1800 | 0.7905 | 0.7059 | | 0.7373 | 105.54 | 1900 | 0.7791 | 0.7161 | | 0.749 | 111.11 | 2000 | 0.8125 | 0.7161 | | 0.6925 | 116.65 | 2100 | 0.7722 | 0.6905 | | 0.7034 | 122.22 | 2200 | 0.8989 | 0.7136 | | 0.6745 | 127.76 | 2300 | 0.8270 | 0.6982 | | 0.6837 | 133.32 | 2400 | 0.8569 | 0.7161 | | 0.6689 | 138.86 | 2500 | 0.8339 | 0.6982 | | 0.6471 | 144.43 | 2600 | 0.8441 | 0.7110 | | 0.615 | 149.97 | 2700 | 0.9038 | 0.7212 | | 0.6477 | 155.54 | 2800 | 0.9089 | 0.7059 | | 0.6047 | 161.11 | 2900 | 0.9149 | 0.7059 | | 0.5613 | 166.65 | 3000 | 0.8582 | 0.7263 | | 0.6017 | 172.22 | 3100 | 0.8787 | 0.7084 | | 0.5546 | 177.76 | 3200 | 0.8753 | 0.6957 | | 0.5747 | 183.32 | 3300 | 0.9167 | 0.7212 | | 0.5535 | 188.86 | 3400 | 0.8448 | 0.6905 | | 0.5331 | 194.43 | 3500 | 0.8644 | 0.7161 | | 0.5428 | 199.97 | 3600 | 0.8730 | 0.7033 | | 0.5219 | 205.54 | 3700 | 0.9047 | 0.6982 | | 0.5158 | 211.11 | 3800 | 0.8706 | 0.7033 
| | 0.5107 | 216.65 | 3900 | 0.9139 | 0.7084 | | 0.4903 | 222.22 | 4000 | 0.9456 | 0.7315 | | 0.4772 | 227.76 | 4100 | 0.9475 | 0.7161 | | 0.4713 | 233.32 | 4200 | 0.9237 | 0.7059 | | 0.4743 | 238.86 | 4300 | 0.9305 | 0.6957 | | 0.4705 | 244.43 | 4400 | 0.9561 | 0.7110 | | 0.4908 | 249.97 | 4500 | 0.9389 | 0.7084 | | 0.4717 | 255.54 | 4600 | 0.9234 | 0.6982 | | 0.4462 | 261.11 | 4700 | 0.9323 | 0.6957 | | 0.4556 | 266.65 | 4800 | 0.9432 | 0.7033 | | 0.4691 | 272.22 | 4900 | 0.9389 | 0.7059 | | 0.4601 | 277.76 | 5000 | 0.9392 | 0.7033 | ### Framework versions - Transformers 4.16.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.11.0
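A minimal transcription sketch (not part of the original card); `audio.wav` is a placeholder path to a 16 kHz recording, and the pipeline needs ffmpeg available to decode audio files.

```python
# Minimal sketch: transcribe a Guarani recording with the fine-tuned checkpoint.
# "audio.wav" is a placeholder path; the audio should be sampled at 16 kHz.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lgris/wav2vec2-xls-r-300m-gn-cv8")
print(asr("audio.wav")["text"])
```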
liamliang/demographics_gender
11ac4a68bf7f8bb4dd17f567f410fb75c98598cb
2021-05-19T21:56:19.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
liamliang
null
liamliang/demographics_gender
3
null
transformers
21,526
Entry not found
liamliang/demographicx_race_census
7611040ca9b073c5408ff605b788deef363a0e96
2021-07-13T14:56:56.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
liamliang
null
liamliang/demographicx_race_census
3
null
transformers
21,527
Entry not found
lidiia/autonlp-trans_class_arg-32957902
390f0f13079d2d9fff19f6689282370eef9f4083
2021-11-15T16:48:42.000Z
[ "pytorch", "bert", "text-classification", "unk", "dataset:lidiia/autonlp-data-trans_class_arg", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
lidiia
null
lidiia/autonlp-trans_class_arg-32957902
3
null
transformers
21,528
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - lidiia/autonlp-data-trans_class_arg co2_eq_emissions: 0.9756221672668951 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 32957902 - CO2 Emissions (in grams): 0.9756221672668951 ## Validation Metrics - Loss: 0.2765039801597595 - Accuracy: 0.8939828080229226 - Precision: 0.7757009345794392 - Recall: 0.8645833333333334 - AUC: 0.9552659749670619 - F1: 0.8177339901477833 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lidiia/autonlp-trans_class_arg-32957902 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
lincoln/2021twitchfr-conv-bert-small-mlm
3d92e58262da13672cf1de1d82e916f4f6a4d9fc
2022-01-07T15:23:20.000Z
[ "pytorch", "tensorboard", "convbert", "fill-mask", "fr", "transformers", "twitch", "license:mit", "autotrain_compatible" ]
fill-mask
false
lincoln
null
lincoln/2021twitchfr-conv-bert-small-mlm
3
null
transformers
21,529
--- language: - fr license: mit pipeline_tag: "fill-mask" widget: - text: <mask> tt le monde ! - text: cc<mask> va? - text: <mask> la Fronce ! tags: - fill-mask - convbert - twitch --- ## Modèle de Masking sur les données Twitch FR L'expérimentation menée au sein de Lincoln avait pour principal objectif de mettre en œuvre des techniques NLP from scratch sur un corpus de messages issus d’un chat Twitch. Ces derniers sont exprimés en français, mais sur une plateforme internet avec le vocabulaire internet que cela implique (fautes, vocabulaire communautaires, abréviations, anglicisme, emotes, ...). Nos contraintes sont celles d’une entreprise n’ayant pas une volumétrie excessive de données et une puissance infinie de calcul. Il a été nécessaire de construire un nouveau tokenizer afin de mieux correspondre à notre corpus plutôt qu’un tokenizer français existant. Note corpus étant faible en volumétrie par rapport aux données habituelles pour entrainer un modèle BERT, nous avons opté pour l’entrainement d’un modèle dit « small ». Et il a été montré dans la littérature qu’un corpus de quelques giga octets peut donner de bons résultats, c’est pourquoi nous avons continué avec notre corpus. La limite de la puissance de calcul a été contourné à l’aide d’une nouvelle architecture d’apprentissage basée sur un double modèle générateur / discriminateur. Ceci nous a permis d’entrainer un modèle de langue ConvBERT sur nos données, ainsi qu’un modèle de masking en quelques heures sur une carte GPU V100. _Nous garantissons pas la stabilité du modèle sur le long terme. Modèle réalisé dans le cadre d'un POC._ ## Données | Streamer | Nbr de messages | Categories notables en 2021 | | --------------------------------------------- | --------------- | ---------------------------------- | | Ponce | 2 604 935 | Chatting/Mario Kart/FIFA | | Domingo | 1 209 703 | Chatting/talk-shows/FM2O21 | | Mistermv | 1 205 882 | Isaac/Special events/TFT | | Zerator | 900 894 | New World/WOW/Valorant | | Blitzstream | 821 585 | Chess | | Squeezie | 602 148 | Chatting / Minecraft | | Antoinedaniellive | 548 497 | Geoguessr | | Jeanmassietaccropolis/jeanmassiet | 301 387 | Talk-shows/chatting/special events | | Samueletienne | 215 956 | chatting | Sur la période du 12/03/2021 au 22/07/2021. La totalité des messages comptent 9 410 987 messages sur ces neufs streamers. Ces messages sont issus du canal IRC, donc n’ont pas subi de modération Les données d'entrainement du modèle de masking contient 899 652 instances de train et 99 962 instances de test. Les données ont été formaté en concaténant les messages sur une fenêtre de 10s. Cette fenêtre correspond à une fenêtre courte qui regroupe des messages très « proches » temporellement. * 512 tokens max * Probabilité du « mask » : 15% ## Application Voir github public [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds) pour les détails d'implémentation et les résultats. ## Remarques * Expérimentation ponctuelle * Les métriques d'entrainement sont disponibles dans l'onglet _Training metrics_ * Pour une meilleure stabilité, les données doivent être plus hétérogènes et volumineuse. Le modèle doit être entrainé + de 24h. * Le token `<mask>` fonctionne probablement mieux sans laisser d'espace à gauche. Cela est dû au fait que `lstrip=False` pour ce token spécial. 
## Usage ```python from transformers import AutoTokenizer, ConvBertForMaskedLM from transformers import pipeline model_name = 'lincoln/2021twitchfr-conv-bert-small-mlm' tokenizer_name = 'lincoln/2021twitchfr-conv-bert-small' loaded_tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) loaded_model = ConvBertForMaskedLM.from_pretrained(model_name) nlp = pipeline('fill-mask', model=loaded_model, tokenizer=loaded_tokenizer) nlp('<mask> les gens !') ``` ## Modèles: * [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small) * [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm) * [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse)
lkh4317/KoGPT2_novel
855a2fca50b05a5af3e8dca5eb426b90635126f7
2022-01-19T17:50:57.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
lkh4317
null
lkh4317/KoGPT2_novel
3
null
transformers
21,530
Entry not found
lkwate/legal-bigbird-eurlex
d2db659b555544ac196036a83716f4a46a939626
2021-08-21T22:39:08.000Z
[ "pytorch", "big_bird", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
lkwate
null
lkwate/legal-bigbird-eurlex
3
1
transformers
21,531
Entry not found
loodos/electra-small-turkish-uncased-discriminator
b9b01030cd15c23e477a08cff2c8eabd3d18d17f
2020-12-11T21:49:36.000Z
[ "pytorch", "tf", "electra", "pretraining", "tr", "transformers" ]
null
false
loodos
null
loodos/electra-small-turkish-uncased-discriminator
3
null
transformers
21,532
--- language: tr --- # Turkish Language Models with Huggingface's Transformers As R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models). # Turkish ELECTRA-Small-discriminator (uncased) This is ELECTRA-Small model's discriminator which has 12 encoder layers with 256 hidden layer size trained on uncased Turkish dataset. ## Usage Using AutoModelWithLMHead and AutoTokenizer from Transformers, you can import the model as described below. ```python from transformers import AutoModel, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("loodos/electra-small-turkish-uncased-discriminator", do_lower_case=False) model = AutoModelWithLMHead.from_pretrained("loodos/electra-small-turkish-uncased-discriminator") normalizer = TextNormalization() normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True) tokenizer.tokenize(normalized_text) ``` ### Notes on Tokenizers Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning letters "ı, i, I, İ" and non-ASCII Turkish specific letters. There are two reasons. 1- Vocabulary and sentence piece model is created with NFC/NFKC normalization but tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained.(like "şanlıurfa", "öğün", "çocuk" etc.) NFD/NFKD normalization is not proper for Turkish. 2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions - "I" and "İ" to 'i' - 'i' and 'ı' to 'I' respectively. However, in Turkish, 'I' and 'İ' are two different letters. We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models). ## Details and Contact You contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models). ## Acknowledgments Many thanks to TFRC Team for providing us cloud TPUs on Tensorflow Research Cloud to train our models.
lucianpopa/autonlp-SST1-529214890
b6af2700a56c5868217fda104c902d659e0abb7a
2022-01-25T17:30:09.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:lucianpopa/autonlp-data-SST1", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
lucianpopa
null
lucianpopa/autonlp-SST1-529214890
3
null
transformers
21,533
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-SST1 co2_eq_emissions: 49.618294309910624 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 529214890 - CO2 Emissions (in grams): 49.618294309910624 ## Validation Metrics - Loss: 0.7135734558105469 - Accuracy: 0.7042338838232481 - Macro F1: 0.6164041045783032 - Micro F1: 0.7042338838232481 - Weighted F1: 0.7028309161791009 - Macro Precision: 0.6497438111060598 - Micro Precision: 0.7042338838232481 - Weighted Precision: 0.7076651075198755 - Macro Recall: 0.6023419083862918 - Micro Recall: 0.7042338838232481 - Weighted Recall: 0.7042338838232481 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-SST1-529214890 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-SST1-529214890", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-SST1-529214890", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
lucianpopa/autonlp-SST2-551215591
5c2848b9020db0a043af94e1e558a4eb98c3569a
2022-02-03T20:00:48.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:lucianpopa/autonlp-data-SST2", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
lucianpopa
null
lucianpopa/autonlp-SST2-551215591
3
null
transformers
21,534
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-SST2 co2_eq_emissions: 8.883161797287569 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 551215591 - CO2 Emissions (in grams): 8.883161797287569 ## Validation Metrics - Loss: 0.08821876347064972 - Accuracy: 0.969531605275125 - Precision: 0.9734313841774404 - Recall: 0.9710127780407004 - AUC: 0.9949152422763072 - F1: 0.9722205769116863 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-SST2-551215591 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-SST2-551215591", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-SST2-551215591", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
lucianpopa/autonlp-TREC-classification-522314623
4ff0676c5f8594007aae9dddfd725913aae65ce0
2022-01-24T02:31:54.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:lucianpopa/autonlp-data-TREC-classification", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
lucianpopa
null
lucianpopa/autonlp-TREC-classification-522314623
3
null
transformers
21,535
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - lucianpopa/autonlp-data-TREC-classification co2_eq_emissions: 15.186006626915715 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 522314623 - CO2 Emissions (in grams): 15.186006626915715 ## Validation Metrics - Loss: 0.24612033367156982 - Accuracy: 0.9643183897529735 - Macro F1: 0.9493690949638435 - Micro F1: 0.9643183897529735 - Weighted F1: 0.9642384162837268 - Macro Precision: 0.9372705571897225 - Micro Precision: 0.9643183897529735 - Weighted Precision: 0.9652870438320825 - Macro Recall: 0.9649638583139503 - Micro Recall: 0.9643183897529735 - Weighted Recall: 0.9643183897529735 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lucianpopa/autonlp-TREC-classification-522314623 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("lucianpopa/autonlp-TREC-classification-522314623", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied
f2c194b196eb424e0565b40e92c445bf10b7bb68
2021-07-06T10:13:26.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "rw", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
lucio
null
lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied
3
null
transformers
21,536
--- language: rw datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large Kinyarwanda with apostrophes results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice rw type: common_voice args: rw metrics: - name: Test WER type: wer value: 39.92 --- # Wav2Vec2-Large-XLSR-53-rw Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kinyarwanda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, using about 25% of the training data (limited to utterances without downvotes and shorter with 9.5 seconds), and validated on 2048 utterances from the validation set. In contrast to the [lucio/wav2vec2-large-xlsr-kinyarwanda](https://huggingface.co/lucio/wav2vec2-large-xlsr-kinyarwanda) model, which does not predict any punctuation, this model attempts to predict the apostrophes that mark contractions of pronouns with vowel-initial words, but may overgeneralize. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # WARNING! This will download and extract to use about 80GB on disk. test_dataset = load_dataset("common_voice", "rw", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda") model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` Result: ``` Prediction: ['yaherukago gukora igitaramo yiki mujyiwa na mor mu bubiligi', "ibi rero ntibizashoboka kandi n'umudabizi"] Reference: ['Yaherukaga gukora igitaramo nk’iki mu Mujyi wa Namur mu Bubiligi.', 'Ibi rero, ntibizashoboka, kandi nawe arabizi.'] ``` ## Evaluation The model can be evaluated as follows on the Kinyarwanda test data of Common Voice. Note that to even load the test data, the whole 40GB Kinyarwanda dataset will be downloaded and extracted into another 40GB directory, so you will need that space available on disk (e.g. not possible in the free tier of Google Colab). This script uses the `chunked_wer` function from [pcuenq](https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es). 
```python import jiwer import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import unidecode test_dataset = load_dataset("common_voice", "rw", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied") model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-kinyarwanda-apostrophied") model.to("cuda") chars_to_ignore_regex = r'[!"#$%&()*+,./:;<=>?@\[\]\\_{}|~£¤¨©ª«¬®¯°·¸»¼½¾ðʺ˜˝ˮ‐–—―‚“”„‟•…″‽₋€™−√�]' def remove_special_characters(batch): batch["text"] = re.sub(r'[ʻʽʼ‘’´`]', r"'", batch["sentence"]) # normalize apostrophes batch["text"] = re.sub(chars_to_ignore_regex, "", batch["text"]).lower().strip() # remove all other punctuation batch["text"] = re.sub(r"([b-df-hj-np-tv-z])' ([aeiou])", r"\1'\2", batch["text"]) # remove spaces where apostrophe marks a deleted vowel batch["text"] = re.sub(r"(-| '|' | +)", " ", batch["text"]) # treat dash and other apostrophes as word boundary batch["text"] = unidecode.unidecode(batch["text"]) # strip accents from loanwords return batch ## Audio pre-processing resampler = torchaudio.transforms.Resample(48_000, 16_000) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() batch["sampling_rate"] = 16_000 return batch def cv_prepare(batch): batch = remove_special_characters(batch) batch = speech_file_to_array_fn(batch) return batch test_dataset = test_dataset.map(cv_prepare) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000))) ``` **Test Result**: 39.92 % ## Training Examples from the Common Voice training dataset were used for training, after filtering out utterances that had any `down_vote` or were longer than 9.5 seconds. The data used totals about 125k examples, 25% of the available data, trained on 1 V100 GPU provided by OVHcloud, for a total of about 60 hours: 20 epochs on one block of 32k examples and then 10 epochs each on 3 more blocks of 32k examples. For validation, 2048 examples of the validation dataset were used. 
The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py).
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_0_w103
6781896ac4039801ac7cdab4802f58720d438adb
2021-07-01T22:59:28.000Z
[ "pytorch", "transfo-xl", "transformers" ]
null
false
luffycodes
null
luffycodes/TAG_mems_str_128_lr_2e5_wd_01_block_512_train_bsz_6_topk_100_lambdah_0_w103
3
null
transformers
21,537
Entry not found
lvwerra/gpt2-imdb-ctrl
7bca8f5d2b0ae0bb001f85befaa8b97df15e7b1d
2021-05-23T08:37:09.000Z
[ "pytorch", "gpt2", "transformers" ]
null
false
lvwerra
null
lvwerra/gpt2-imdb-ctrl
3
null
transformers
21,538
# GPT2-IMDB-ctrl ## What is it? A small GPT2 (`lvwerra/gpt2-imdb`) language model fine-tuned to produce controlled movie reviews based the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). The model is trained with rewards from a BERT sentiment classifier (`lvwerra/bert-imdb`) via PPO. ## Training setting The model was trained for `200` optimisation steps with a batch size of `256` which corresponds to `51200` training samples. The full experiment setup can be found in the Jupyter notebook in the [trl repo](https://lvwerra.github.io/trl/05-gpt2-sentiment-ppo-training/). The strings `"[negative]"`, `"[neutral]"`, and `"[positive]"` are added at the beginning of the query to control the sentiment. ## Examples A few examples of the model response to a query before and after optimisation: | query | response [negative] | rewards [negative] | response [neutral] | rewards [neutral] | response [positive] | rewards [positive] | |-------|---------------------|--------------------|--------------------|-------------------|---------------------|--------------------| |I watched this movie when|it was released and was awful. Little bit of ...|3.130034|it was released and it was the first movie I ...|-1.351991|I was younger it was wonderful. The new play ...|4.232218| |I can remember seeing this|movie in 2008, and I was so disappointed...yo...|3.428725|in support groups, which I think was not as i...|0.213288|movie, and it is one of my favorite movies ev...|4.168838| |This 1970 hit film has|little resonance. This movie is bad, not only...|4.241872|a bit of Rocket power.783287. It can be easil...|0.849278|the best formula for comedy and is't just jus...|4.208804|
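A minimal generation sketch (not part of the original card) showing the control-token usage described above; loading the tokenizer from the base `lvwerra/gpt2-imdb` model is an assumption, and the decoding settings are illustrative.

```python
# Minimal sketch: prepend a sentiment control token to steer the review continuation.
# Assumption: the checkpoint reuses the base lvwerra/gpt2-imdb tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lvwerra/gpt2-imdb")
model = AutoModelForCausalLM.from_pretrained("lvwerra/gpt2-imdb-ctrl")

prompt = "[positive] I watched this movie when"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```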
lysandre/brand-new-bert-model
f0e1d329e19443adb661b6cef397bc20005a675e
2021-09-02T13:42:03.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
lysandre
null
lysandre/brand-new-bert-model
3
null
transformers
21,539
Entry not found
lysandre/dummy-model
62bfa7bdfa5fa983a01878d62f34ca6b104a12bc
2021-05-19T22:18:30.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
lysandre
null
lysandre/dummy-model
3
null
transformers
21,540
Entry not found
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi
f12e4a6afb45f4e554628af2b792b69d13887575
2020-12-26T08:42:15.000Z
[ "pytorch", "tf", "albert", "text-classification", "fa", "transformers", "license:apache-2.0" ]
text-classification
false
m3hrdadfi
null
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi
3
null
transformers
21,541
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### DeepSentiPers which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # | |:---------:|:----:| | Furious | 236 | | Angry | 1357 | | Neutral | 2874 | | Happy | 2848 | | Delighted | 2516 | **Download** You can download the dataset from: - [SentiPers](https://github.com/phosseini/sentipers) - [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
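A minimal classification sketch (not part of the original card), assuming the repository includes a tokenizer; the Persian example sentence ("this product was really great") is illustrative.

```python
# Minimal sketch: run the multi-class Persian sentiment model through the pipeline.
# Assumption: the repo ships a tokenizer; the example sentence is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi",
)
print(classifier("این محصول واقعا عالی بود"))  # "This product was really great"
```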
m3hrdadfi/typo-detector-distilbert-is
15019422639035d698d5d3297b8554f1c71df136
2021-06-27T13:20:33.000Z
[ "pytorch", "tf", "distilbert", "token-classification", "is", "transformers", "autotrain_compatible" ]
token-classification
false
m3hrdadfi
null
m3hrdadfi/typo-detector-distilbert-is
3
null
transformers
21,542
---
language: is
widget:
- text: Páli, vini mínum, langaði að horfa á sjónnvarpið.
- text: "Leggir þciðursins eru þaktir fjöðrum til bað edravn fuglnn gekgn kuldanué ."
- text: "Þar hitta þeir konu Björns og segir ovs :"
- text: "Ingvar Sæmundsson ekgk rú sveitinni árið 2015 og etnbeitii sér að hinni þungarokkssvedt svnni Momentum ."
- text: "Þar hitta þeir konu Björns og segir ovs :"
- text: "Var hann síðaún hkluti af leikhópnum sem ferðaðist um Bandaríkin til að sýan söngleikinn ."
---

# Typo Detector For Icelandic 🇮🇸

## Dataset Information

Synthetic data for this specific task.

## Evaluation

The following tables summarize the scores obtained by the model overall and per class.

| # | precision | recall | f1-score | support |
|:------------:|:---------:|:--------:|:--------:|:-------:|
| TYPO | 0.98954 | 0.967603 | 0.978448 | 43800.0 |
| micro avg | 0.98954 | 0.967603 | 0.978448 | 43800.0 |
| macro avg | 0.98954 | 0.967603 | 0.978448 | 43800.0 |
| weighted avg | 0.98954 | 0.967603 | 0.978448 | 43800.0 |

## How to use

You can use this model with the Transformers pipeline for NER (token-classification).

### Installing requirements

```bash
pip install transformers
```

### Prediction using pipeline

```python
import torch
from transformers import AutoConfig, AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

model_name_or_path = "m3hrdadfi/typo-detector-distilbert-is"
config = AutoConfig.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path, config=config)
nlp = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy="average")
```

```python
sentences = [
    "Páli, vini mínum, langaði að horfa á sjónnvarpið.",
    "Leggir þciðursins eru þaktir fjöðrum til bað edravn fuglnn gekgn kuldanué .",
    "Þar hitta þeir konu Björns og segir ovs :",
    "Ingvar Sæmundsson ekgk rú sveitinni árið 2015 og etnbeitii sér að hinni þungarokkssvedt svnni Momentum .",
    "Þar hitta þeir konu Björns og segir ovs :",
    "Var hann síðaún hkluti af leikhópnum sem ferðaðist um Bandaríkin til að sýan söngleikinn ."
]

for sentence in sentences:
    typos = [sentence[r["start"]: r["end"]] for r in nlp(sentence)]

    detected = sentence
    for typo in typos:
        detected = detected.replace(typo, f'<i>{typo}</i>')

    print(" [Input]: ", sentence)
    print("[Detected]: ", detected)
    print("-" * 130)
```

Output:
```text
 [Input]:  Páli, vini mínum, langaði að horfa á sjónnvarpið.
[Detected]:  Páli, vini mínum, langaði að horfa á <i>sjónnvarpið</i>.
----------------------------------------------------------------------------------------------------------------------------------
 [Input]:  Leggir þciðursins eru þaktir fjöðrum til bað edravn fuglnn gekgn kuldanué .
[Detected]:  Leggir <i>þciðursins</i> eru þaktir fjöðrum til <i>bað</i> <i>edravn</i> <i>fuglnn</i> <i>gekgn</i> <i>kuldanué</i> .
----------------------------------------------------------------------------------------------------------------------------------
 [Input]:  Þar hitta þeir konu Björns og segir ovs :
[Detected]:  Þar hitta þeir konu Björns og segir <i>ovs</i> :
----------------------------------------------------------------------------------------------------------------------------------
 [Input]:  Ingvar Sæmundsson ekgk rú sveitinni árið 2015 og etnbeitii sér að hinni þungarokkssvedt svnni Momentum .
[Detected]:  Ingvar Sæmundsson <i>ekgk</i> <i>rú</i> sveitinni árið 2015 og <i>etnbeitii</i> sér að hinni <i>þungarokkssvedt</i> <i>svnni</i> Momentum .
----------------------------------------------------------------------------------------------------------------------------------
 [Input]:  Þar hitta þeir konu Björns og segir ovs :
[Detected]:  Þar hitta þeir konu Björns og segir <i>ovs</i> :
----------------------------------------------------------------------------------------------------------------------------------
 [Input]:  Var hann síðaún hkluti af leikhópnum sem ferðaðist um Bandaríkin til að sýan söngleikinn .
[Detected]:  Var hann <i>síðaún</i> <i>hkluti</i> af leikhópnum sem ferðaðist um Bandaríkin til að <i>sýan</i> söngleikinn .
----------------------------------------------------------------------------------------------------------------------------------
```

## Questions?
Post a Github issue on the [TypoDetector Issues](https://github.com/m3hrdadfi/typo-detector/issues) repo.
macedonizer/ba-roberta-base
67de6db81b6f15a6aa8d6730ff352a08b47a97b2
2021-09-22T08:58:31.000Z
[ "pytorch", "roberta", "fill-mask", "ba", "dataset:wiki-bs", "transformers", "masked-lm", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
macedonizer
null
macedonizer/ba-roberta-base
3
null
transformers
21,543
---
language:
- ba
thumbnail: https://huggingface.co/macedonizer/ba-roberta-base/abdulah-sidran.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-bs
---

# BA-RoBERTa base model

Pretrained model on the Bosnian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between sarajevo and Sarajevo.

# Model description

RoBERTa is a transformers model pre-trained on a large corpus of Bosnian texts in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Bosnian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.

# Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.

# How to use

You can use this model directly with a pipeline for masked language modeling: \
from transformers import pipeline \
unmasker = pipeline('fill-mask', model='macedonizer/ba-roberta-base') \
unmasker("Sarajevo je \\<mask\\> grad Bosne i Hercegovine.") \
[{'score': 0.6210788488388062, \
'sequence': 'Sarajevo je glavni grad Bosne i Hercegovine', \
'token': 2006, \
'token_str': ' glavni'}, \
{'score': 0.19640550017356873, \
'sequence': 'Sarajevo je najveći grad Bosne i Hercegovine', \
'token': 1707, \
'token_str': ' najveći'}, \
{'score': 0.0210184995085001, \
'sequence': 'Sarajevo je srednjovjekovni grad Bosne i Hercegovine', \
'token': 22596, \
'token_str': ' srednjovjekovni'}, \
{'score': 0.010822420939803123, \
'sequence': 'Sarajevo je najmnogoljudniji grad Bosne i Hercegovine', \
'token': 40186, \
'token_str': ' najmnogoljudniji'}, \
{'score': 0.006114463787525892, \
'sequence': 'Sarajevo je službeni grad Bosne i Hercegovine', \
'token': 8546, \
'token_str': ' službeni'}] \

Here is how to use this model to get the features of a given text in PyTorch:

from transformers import RobertaTokenizer, RobertaModel \
tokenizer = RobertaTokenizer.from_pretrained('macedonizer/ba-roberta-base') \
model = RobertaModel.from_pretrained('macedonizer/ba-roberta-base') \
text = "Replace me by any text you'd like." \
encoded_input = tokenizer(text, return_tensors='pt') \
output = model(**encoded_input)
macedonizer/gr-roberta-base
bf1eea50cfaf6f60ec2af7bcbd0ee7048e3f3785
2021-09-22T08:58:38.000Z
[ "pytorch", "roberta", "fill-mask", "gr", "dataset:wiki-gr", "transformers", "masked-lm", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
macedonizer
null
macedonizer/gr-roberta-base
3
null
transformers
21,544
---
language:
- gr
thumbnail: https://huggingface.co/macedonizer/gr-roberta-base/lets-talk-about-nlp-gr.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-gr
---

# GR-RoBERTa base model

Pretrained model on the Greek language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between Athens and athens.

# Model description

RoBERTa is a transformers model pre-trained on a large corpus of Greek data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Greek language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.

# Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.

# How to use

You can use this model directly with a pipeline for masked language modeling:

from transformers import pipeline \
unmasker = pipeline('fill-mask', model='macedonizer/gr-roberta-base') \
unmasker("Η Αθήνα είναι η \<mask\> της Ελλάδας") \
[{'score': 0.8832866549491882, \
'sequence': 'Η Αθήνα είναι η πρωτεύουσα της Ελλάδας', \
'token': 2788, \
'token_str': ' πρωτεύουσα'}, \
{'score': 0.018105432391166687, \
'sequence': 'Η Αθήνα είναι η μεγαλύτερη της Ελλάδας', \
'token': 2363, \
'token_str': ' μεγαλύτερη'}, \
{'score': 0.015836946666240692, \
'sequence': 'Η Αθήνα είναι η έδρα της Ελλάδας', \
'token': 1950, \
'token_str': ' έδρα'}, \
{'score': 0.015673324465751648, \
'sequence': 'Η Αθήνα είναι η μόνη της Ελλάδας', \
'token': 6548, \
'token_str': ' μόνη'}, \
{'score': 0.01375910360366106, \
'sequence': 'Η Αθήνα είναι η πόλη της Ελλάδας', \
'token': 825, \
'token_str': ' πόλη'}]

Here is how to use this model to get the features of a given text in PyTorch:

from transformers import RobertaTokenizer, RobertaModel \
tokenizer = RobertaTokenizer.from_pretrained('macedonizer/gr-roberta-base') \
model = RobertaModel.from_pretrained('macedonizer/gr-roberta-base') \
text = "Replace me by any text you'd like." \
encoded_input = tokenizer(text, return_tensors='pt') \
output = model(**encoded_input)
madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1
50687b8c070a97ae80c321c2a4caabcd73997824
2021-05-19T22:33:45.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "arxiv:2005.07683", "transformers", "bert-base", "license:mit", "autotrain_compatible" ]
question-answering
false
madlag
null
madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1
3
null
transformers
21,545
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
  context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
  context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---

## BERT-base uncased model fine-tuned on SQuAD v1

This model is block sparse: the **linear** layers contain **31.7%** of the original weights. The model contains **47.0%** of the original weights **overall**. The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method. That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.12x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).

This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1). This model is case-insensitive: it does not make a difference between english and English.

## Pruning details
A side-effect of the block pruning is that some of the attention heads are completely removed: 80 heads were removed out of a total of 144 (55.6%). Here is a detailed view on how the remaining heads are distributed in the network after pruning.

![Pruning details](https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1/raw/main/model_card/pruning.svg)

## Density plot
<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1/raw/main/model_card/density.js" id="79005f4a-723c-4bf8-bc7f-5ad11676be6c"></script>

## Details

| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |

### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```

### Results

**Pytorch model file size**: `355M` (original BERT: `438M`)

| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))|
| ------ | --------- | --------- |
| **EM** | **79.04** | **80.8** |
| **F1** | **86.70** | **88.5** |

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1",
    tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1"
)

predictions = qa_pipeline({
    'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    'question': "Who is Frederic Chopin?",
})

print(predictions)
```
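The density figures quoted in the card can be sanity-checked locally. The sketch below is not part of the original card or the author's tooling; it simply loads the published checkpoint and reports the share of exactly-zero weights in the encoder's 2-D weight matrices, which should be roughly consistent with the stated 31.7% of remaining weights in the linear layers.

```python
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "madlag/bert-base-uncased-squad1.1-block-sparse-0.32-v1"
)

total, zeros = 0, 0
for name, param in model.named_parameters():
    # Only the 2-D weight matrices of the encoder's linear layers.
    if "encoder" in name and name.endswith(".weight") and param.dim() == 2:
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"zero weights in encoder linear layers: {zeros / total:.1%}")
```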
mahaamami/distilroberta-base-finetuned-wikitext2
c0d8d61b2d7cfecccff966052f27643bd66675bd
2022-01-12T13:25:49.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
mahaamami
null
mahaamami/distilroberta-base-finetuned-wikitext2
3
null
transformers
21,546
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.1026 | 1.0 | 5835 | 1.9705 | | 2.0088 | 2.0 | 11670 | 1.9090 | | 1.9766 | 3.0 | 17505 | 1.8833 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
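The auto-generated card above stops at the training numbers and gives no usage example. Here is a minimal, hypothetical fill-mask sketch, assuming the repository ships its tokenizer alongside the weights; the example sentence is arbitrary.

```python
from transformers import pipeline

# distilroberta-based checkpoints use the <mask> token.
unmasker = pipeline("fill-mask", model="mahaamami/distilroberta-base-finetuned-wikitext2")

for prediction in unmasker("The movie was absolutely <mask>."):
    print(f"{prediction['token_str']!r:>15}  {prediction['score']:.3f}")
```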
maher13/arabic-iti
efc85583ed869ea2a6a2d2bad8ccfdae18c03d17
2021-12-31T09:05:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
maher13
null
maher13/arabic-iti
3
1
transformers
21,547
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: arabic-iti results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # arabic-iti This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.0154 - Wer: 0.6350 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 3000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0355 | 2.36 | 400 | 3.0286 | 1.0 | | 0.7999 | 4.73 | 800 | 0.8623 | 0.8067 | | 0.4485 | 7.1 | 1200 | 0.6920 | 0.6651 | | 0.3719 | 9.47 | 1600 | 0.6361 | 0.6591 | | 0.3401 | 11.83 | 2000 | 0.6967 | 0.6497 | | 0.3222 | 14.2 | 2400 | 0.6697 | 0.6246 | | 0.3094 | 16.57 | 2800 | 0.7282 | 0.6537 | | 0.2822 | 18.93 | 3200 | 0.8019 | 0.6816 | | 0.2446 | 21.3 | 3600 | 0.7622 | 0.6608 | | 0.235 | 23.67 | 4000 | 0.8644 | 0.6780 | | 0.2362 | 26.04 | 4400 | 0.9083 | 0.6710 | | 0.206 | 28.4 | 4800 | 0.8243 | 0.6598 | | 0.1765 | 30.77 | 5200 | 0.8614 | 0.6647 | | 0.1458 | 33.14 | 5600 | 0.8907 | 0.6447 | | 0.1544 | 35.5 | 6000 | 0.9059 | 0.6523 | | 0.2402 | 18.88 | 6400 | 0.9639 | 0.6970 | | 0.2026 | 20.06 | 6800 | 0.9868 | 0.6817 | | 0.185 | 21.24 | 7200 | 1.0043 | 0.6936 | | 0.1951 | 22.42 | 7600 | 0.8918 | 0.6795 | | 0.1933 | 23.6 | 8000 | 0.9367 | 0.6826 | | 0.2272 | 24.78 | 8400 | 0.8540 | 0.6792 | | 0.1922 | 25.96 | 8800 | 0.8983 | 0.6657 | | 0.1547 | 27.14 | 9200 | 0.9742 | 0.6747 | | 0.1579 | 28.32 | 9600 | 0.9066 | 0.6668 | | 0.1642 | 29.5 | 10000 | 0.9440 | 0.6790 | | 0.1726 | 30.68 | 10400 | 0.9654 | 0.6813 | | 0.1656 | 31.86 | 10800 | 0.9880 | 0.6801 | | 0.1741 | 33.04 | 11200 | 0.9707 | 0.6584 | | 0.1494 | 34.22 | 11600 | 0.9801 | 0.6709 | | 0.1482 | 35.4 | 12000 | 0.9258 | 0.6646 | | 0.14 | 36.58 | 12400 | 0.9802 | 0.6635 | | 0.142 | 37.76 | 12800 | 0.9268 | 0.6524 | | 0.1281 | 38.94 | 13200 | 0.9615 | 0.6587 | | 0.1051 | 40.12 | 13600 | 0.9721 | 0.6495 | | 0.1074 | 41.3 | 14000 | 1.0045 | 0.6582 | | 0.0879 | 42.48 | 14400 | 1.0290 | 0.6516 | | 0.1015 | 43.66 | 14800 | 1.0514 | 0.6556 | | 0.0932 | 44.84 | 15200 | 1.0287 | 0.6450 | | 0.1008 | 46.02 | 15600 | 0.9940 | 0.6399 | | 0.0968 | 47.2 | 16000 | 1.0206 | 0.6368 | | 0.0858 | 48.38 | 16400 | 1.0452 | 0.6361 | | 0.0886 | 49.56 | 16800 | 1.0154 | 0.6350 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
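The card above documents training only. The following is a minimal inference sketch, assuming the repository contains a Wav2Vec2 processor (feature extractor plus tokenizer) next to the weights and that the input audio can be resampled to 16 kHz; the file path is a placeholder, not something referenced by the card.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "maher13/arabic-iti"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a local WAV file (placeholder path), downmix to mono and resample to 16 kHz.
speech, sample_rate = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).mean(dim=0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```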
manishiitg/spanbert-large-recruit-qa
0bebcd66384c1662e0c8301f5e812e4786eb8d81
2021-05-19T22:51:08.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
manishiitg
null
manishiitg/spanbert-large-recruit-qa
3
null
transformers
21,548
Entry not found
marciovbarbosa/t5-small-finetuned-de-to-en-fp16
22e94a5886e329354a2c5c62e5ff7ee6aa8da2ce
2021-12-04T04:27:50.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
marciovbarbosa
null
marciovbarbosa/t5-small-finetuned-de-to-en-fp16
3
null
transformers
21,549
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: t5-small-finetuned-de-to-en-fp16 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: de-en metrics: - name: Bleu type: bleu value: 9.2226 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-de-to-en-fp16 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.9416 - Bleu: 9.2226 - Gen Len: 17.3311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 272 | 2.1671 | 3.8489 | 17.6382 | | 2.6715 | 2.0 | 544 | 2.0660 | 6.4354 | 17.4905 | | 2.6715 | 3.0 | 816 | 2.0206 | 7.4092 | 17.3708 | | 2.4325 | 4.0 | 1088 | 1.9926 | 8.1453 | 17.3685 | | 2.4325 | 5.0 | 1360 | 1.9739 | 8.6739 | 17.3521 | | 2.3312 | 6.0 | 1632 | 1.9602 | 8.8808 | 17.3681 | | 2.3312 | 7.0 | 1904 | 1.9509 | 9.1173 | 17.3491 | | 2.2946 | 8.0 | 2176 | 1.9465 | 9.1504 | 17.3414 | | 2.2946 | 9.0 | 2448 | 1.9426 | 9.2372 | 17.3398 | | 2.2665 | 10.0 | 2720 | 1.9416 | 9.2226 | 17.3311 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
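The generated card lists metrics but no inference snippet. Below is a minimal sketch of how a T5 translation checkpoint is typically queried; the "translate German to English:" prefix is the standard T5 convention and an assumption on my part, since the card does not state which prefix, if any, was used during fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "marciovbarbosa/t5-small-finetuned-de-to-en-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Task prefix assumed, not documented in the card.
text = "translate German to English: Das Haus ist wunderbar."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```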
marciovbarbosa/t5-small-finetuned-de-to-en-swd
440e14c3c94cc5bc05d94c7f02524c123c30ddf3
2021-12-04T05:05:34.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
marciovbarbosa
null
marciovbarbosa/t5-small-finetuned-de-to-en-swd
3
null
transformers
21,550
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: t5-small-finetuned-de-to-en-swd results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: de-en metrics: - name: Bleu type: bleu value: 9.2293 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-de-to-en-swd This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.9422 - Bleu: 9.2293 - Gen Len: 17.3454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 272 | 2.1658 | 3.8987 | 17.6419 | | 2.6679 | 2.0 | 544 | 2.0659 | 6.4465 | 17.4758 | | 2.6679 | 3.0 | 816 | 2.0210 | 7.3632 | 17.3708 | | 2.4322 | 4.0 | 1088 | 1.9929 | 8.1559 | 17.3721 | | 2.4322 | 5.0 | 1360 | 1.9744 | 8.6269 | 17.3518 | | 2.3315 | 6.0 | 1632 | 1.9607 | 8.9017 | 17.3741 | | 2.3315 | 7.0 | 1904 | 1.9515 | 9.1157 | 17.3484 | | 2.2955 | 8.0 | 2176 | 1.9471 | 9.1308 | 17.3488 | | 2.2955 | 9.0 | 2448 | 1.9432 | 9.2239 | 17.3414 | | 2.2676 | 10.0 | 2720 | 1.9422 | 9.2293 | 17.3454 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
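This card is a near-duplicate of the previous t5-small de-to-en run, so instead of repeating the inference snippet, here is a hedged sketch of scoring a small wmt16 validation slice with sacrebleu. It will not reproduce the reported 9.2293 BLEU exactly: the card's number comes from the Trainer over the full validation set, and the task prefix is again an assumption.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import sacrebleu

model_id = "marciovbarbosa/t5-small-finetuned-de-to-en-swd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Small slice only, for a quick smoke test.
data = load_dataset("wmt16", "de-en", split="validation[:32]")

hypotheses, references = [], []
for example in data["translation"]:
    inputs = tokenizer("translate German to English: " + example["de"], return_tensors="pt")
    output = model.generate(**inputs, max_length=64)
    hypotheses.append(tokenizer.decode(output[0], skip_special_tokens=True))
    references.append(example["en"])

print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```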
marcolatella/Hps_seed1
980473b7b84f6a2935e4571e7533f2a52100738f
2021-12-10T00:59:04.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
marcolatella
null
marcolatella/Hps_seed1
3
null
transformers
21,551
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: Hps_seed1 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: sentiment metrics: - name: F1 type: f1 value: 0.7176561823314135 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hps_seed1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9681 - F1: 0.7177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.6525359309081455e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6553 | 1.0 | 1426 | 0.6275 | 0.7095 | | 0.4945 | 2.0 | 2852 | 0.6181 | 0.7251 | | 0.366 | 3.0 | 4278 | 0.7115 | 0.7274 | | 0.2374 | 4.0 | 5704 | 0.8368 | 0.7133 | | 0.1658 | 5.0 | 7130 | 0.9681 | 0.7177 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
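Again, the auto-generated card gives no usage example. A minimal sketch follows, assuming the checkpoint bundles its tokenizer; the head was trained on the three tweet_eval sentiment classes (negative / neutral / positive), although the config may only expose generic LABEL_0–LABEL_2 names.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="marcolatella/Hps_seed1",
    return_all_scores=True,  # one score per sentiment class
)

print(classifier("I love the new update, it works great!"))
```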
marcolatella/hate_trained_31415
2b233f14fd09b9dbc9cfbab8efa0fb6b0470f07a
2021-12-11T20:49:00.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
marcolatella
null
marcolatella/hate_trained_31415
3
null
transformers
21,552
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: hate_trained_31415 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: hate metrics: - name: F1 type: f1 value: 0.7718772273654051 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hate_trained_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8507 - F1: 0.7719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 31415 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4817 | 1.0 | 563 | 0.4975 | 0.7678 | | 0.3311 | 2.0 | 1126 | 0.4965 | 0.7773 | | 0.2303 | 3.0 | 1689 | 0.7102 | 0.7613 | | 0.1429 | 4.0 | 2252 | 0.8507 | 0.7719 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
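The hate_trained_* cards likewise omit the label mapping. If the config only exposes generic label names, the class names can be recovered from the tweet_eval "hate" subset the model was trained on; this is a sketch under that assumption, with an arbitrary example input.

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("text-classification", model="marcolatella/hate_trained_31415")

# Class names as defined by the dataset (index = label id), e.g. ['non-hate', 'hate'].
label_names = load_dataset("tweet_eval", "hate", split="test").features["label"].names
print(label_names)

# Raw pipeline output; labels appear as stored in the model config (often LABEL_0 / LABEL_1).
print(classifier("Some example tweet to score."))
```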
marcolatella/hate_trained_42
bbd69705881c2dc414384a170025a0c92cc080b0
2021-12-11T20:38:02.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
marcolatella
null
marcolatella/hate_trained_42
3
null
transformers
21,553
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: hate_trained_42 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: hate metrics: - name: F1 type: f1 value: 0.7665230429627923 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hate_trained_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8996 - F1: 0.7665 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4833 | 1.0 | 563 | 0.4834 | 0.7543 | | 0.3275 | 2.0 | 1126 | 0.5334 | 0.7755 | | 0.2111 | 3.0 | 1689 | 0.6894 | 0.7674 | | 0.1385 | 4.0 | 2252 | 0.8996 | 0.7665 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
marcolatella/prova_Classi2
4be7d2d0a5695de40adddc778564798c6f0767c3
2021-12-10T16:37:19.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
marcolatella
null
marcolatella/prova_Classi2
3
null
transformers
21,554
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: prova_Classi2 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: sentiment metrics: - name: F1 type: f1 value: 0.20192866271639365 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prova_Classi2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.0183 - F1: 0.2019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002739353542073378 - train_batch_size: 32 - eval_batch_size: 16 - seed: 18 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0171 | 1.0 | 1426 | 1.0183 | 0.2019 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
mariagrandury/wav2vec2-spanish
eb766fd50275e539ede5094b55c494127cbaf1af
2021-07-18T22:38:36.000Z
[ "pytorch", "jax", "wav2vec2", "pretraining", "transformers" ]
null
false
mariagrandury
null
mariagrandury/wav2vec2-spanish
3
null
transformers
21,555
Entry not found
mayu0007/pegasus_large_covid
88881803444f72d86437e79f0fb6ccc1579fcc95
2021-04-27T01:53:59.000Z
[ "pytorch", "pegasus", "text2text-generation", "en", "dataset:CORD-19", "arxiv:1912.08777", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
mayu0007
null
mayu0007/pegasus_large_covid
3
null
transformers
21,556
--- language: en tags: - pytorch - pegasus - summarization datasets: - CORD-19 widget: - text: "Background: On 31 December 2019, the World Health Organization was alerted to several cases of pneumonia in Wuhan City, Hubei Province of China. The causative pathogen was suspected to be a virus, but it did not match any other known virus. The following day, Wuhan City officials closed the Huanan seafood market, suspected to be the source of the mystery pathogen, because it was reported that certain patients presenting with the symptoms were vendors at that public market. By January 4 2020, the Chinese Health Organization reported 44 active cases. On 7 January 2020, Chinese authorities confirmed that they had identified the causative agent as a novel Coronavirus (CoV). That family includes viruses of the common cold as well as viruses known to cause Middle-East Respiratory Syndrome (MERS); Severe Acute Respiratory Syndrome (SARS). The new CoV was named Novel Coronavirus (emerged late) 2019 (2019-nCoV). Two days later, Chinese authorities reported the first fatality linked to 2019-nCoV: a 61-year-old male who had been admitted in the first cohort of patients. He had several other underlying medical conditions, which may have contributed to weakening his immune system. Apart from respiratory failure and severe pneumonia caused by 2019-nCoV, the patient suffered from abdominal tumors and chronic liver disease. On 12 January, Chinese scientists released the genetic sequence of 2019-nCoV, in part because nonofficial report of international spread of 2019-nCoV had commenced. The next day, Thailand officially reported its first imported case of 2019-nCoV: a 61-year-old woman from Wuhan -she, however, denied having visited the Huanan seafood market. On January 15 2020, Chinese authorities reported the 140 ©Biomedical Informatics (2020) second death attributed to 2019-nCoV: a 69-year-old male who also suffered of other unrelated severe pathologies, including myocarditis. Infection with 2019-nCov, nonetheless, were thought to be responsible for his abnormal renal function, and severely damaged to multiple organ functions. The following day, Japan reported its first case of 2019-nCoV: a Chinese man in his 30s, who also denied having visited the Huanan market. On January 17, Thailand confirmed the second imported case of 2019-nCoV. Chinese authorities noted a spike in 2019-nCoV infections between January 18 and 19, 2020. That observation arose the suspicion that 2019-nCoV was capable of direct human-to-human transmission. The following day, 20 January 2020, South Korea confirmed its first case of 2019-nCoV infection: a male patient who denied having visited any public markets in Wuhan. On January 21 2020, the World Health Organization confirmed human-to-human transmission of 2019-nCov. As of that date, the total official number of cases has risen to 222, although it was suspected to be in reality much higher. Infection had spread to health-care workers, and it was suspected that one mode of transmission may be via the eye mucosa. Chinese authorities have also reported a fourth death. The situation was fast becoming alarming: suspected cases appeared in France, Italy and other countries of Europe. Australia seems to be affected as well. Other countries in Asia also reported suspected cases, including the Philippines and Singapore. Suspected cases of 2019-nCoV were reported in North America. 
The following day, 22 January 2020, World Health Organization Director-General Tedros Adhanom Ghebreyesus convened an emergency meeting to decide whether 2019-nCoV should be declared to constitute a worldwide public health emergency of international concern. Despite a significant rise in confirmed cases of individuals infected with 2019-nCoV -in China alone, at 580 infected individuals, with a death toll now at 17 in the Hubei Province alone -the emergency committee deferred its decision on whether to advise Director-General Ghebreyesus to declare the 2019-nCoV outbreak a public health emergency pandemic of international concern. On January 23, Chinese authorities shut down the city of Wuhan: no public transportation, closed airport and railway station for 11 million people. Later that same day, the city of Ezhou is also in complete lockdown. Festivities for the upcoming Chinese New Year were cancelled throughout China to minimize human contact in crowds. The following day, the city of Huanggang was declared under lockdown. Singapore confirmed its first imported case, and Vietnam confirmed two cases. Director-General Ghebreyesus declared that, indeed, the 2019-nCoV outbreaks is a public health emergency of international concern. On January 24 2020, the official number of confirmed cases of patients infected with 2019-nCoV had risen to 830 in China alone, with 177 (21%) among them in severe and critical condition. The number of fatalities caused by 2019-nCoV in China was now 25. Japan confirmed its second 2019-nCoV case. Nepal confirmed its first case. The following day, Australia confirmed its first case of 2019-nCoV, as did France. Two suspected cases in Italy were being closely monitored. In China, the official number of new infections -that is, over the previous 24 h -was 444, and the number of new deaths was 16 above and beyond the number reported the previous day. The official number of individuals confirmed to be infected with 2019-nCoV in China became 1,287, including 237 (20.7%) in severe and critical condition. There is no first-, second-or third-generation vaccine available for any members of the Cov family, nor is there practically the time to develop, raise, test and evaluate the effectiveness of a vaccine for 2019-nCov. Moreover, the World Health Organization stated in its 12 January 2020 recommendations entitled'Clinical management of severe acute respiratory infection when novel coronavirus (nCoV) infection is suspected -Interim guidance; WHO/nCoV/Clinical/2020.1' that '…there is no current evidence from RCTs to recommend any specific anti-nCoV treatment for patients with suspected or confirmed nCoV…'. In brief, the international medical community is totally devoid of tools to combat the unfolding 2019-nCov thereat to global public healthnot in terms of preventive medicine to protect subjects at-risk, and not in terms of clinical interventions for infected patients. What is known, however, is that 2019-nCov, like all corona viruses belong to the Coronaviruses (Coronaviridae) family of RNA viruses that cause diseases in mammals and birds that include diarrhea in cows and pigs, and upper respiratory disease in chickens. In humans, the virus causes respiratory infections, which are generally often mild, rarely lethal. The trends we begin to observe with 2019-nCov suggest that it can be directly transmitted humanto-human, and that it causes serious infections in roughly one in five patients that can lead to death: staggering preliminary statistics. 
Previous research with other CoV members indicates that proteins of Coronaviruses that could be used in the generation of vaccines include the spike, the envelope, the membrane and the ©Biomedical Informatics (2020) nucleocapsid proteins. The spike protein is of particular interest because it is responsible for the penetration of the virus into the cell, which leads to the initiation of viral replication and proliferation. The spike protein binds to the angiotensin-converting enzyme 2 (ACE2) transmembrane -receptor on the eukaryotic host cell. Case in point, SARS-CoV binds to ACE2, as does MERS-CoV [2] . Indeed, ACE2 is the obligate cellular receptor for CoV entry process via the spike protein [3] . While the development of a vaccine of the 1 st , 2 nd or 3 rd generation against the spike protein is possible but time consuming, it is therefore timely ad critical to propose new possible and practical approaches for preventing infection of subjects at-risk and for treatment intervention of patients infected with 2019-nCov, or any other CoV for that matter. One such alternative protocol is proposed below. Methodology: Short of 1 st , 2 nd or 3 rd generation vaccine measures for preventive CoV, and short of clinical treatment interventions for patients infected with CoV, and specifically, 2019-nCov, it is timely and critical to evaluate new alternatives. Here, we propose that one putative 4 th generation vaccine to control 2019-nCoV explosion might simply involve the genetic engineering a soluble binary molecule (i.e., ACE2R-ACE2R; [ACE2R] 2) or its quaternary form (i.e. two intertwined ACE2R-ACE2R; [ACE2R] 4). This process is fast, reliable and precise by today's standard, and doable in any modern biochemistry laboratory. The obtained sterile molecule could be injected in individuals at high risk as a preventive 4 th vaccination measure, or as a treatment intervention in confirmed cases of 2019-nCoV infection. The soluble molecule is expected to bind the spike protein of circulating CoV with higher affinity than the transmembrane ACE2R, and to render the CoV particles, therefore, incapable of binding to the cell receptor, of penetration into the cells, and of replicating inside the cell. The proposed 4 th generation vaccine would, besides protecting the cells from CoV infection, also preserve ACE2 intracellular functional activity, and guard against the rise of serum angiotensin II levels, which can be pathogenic to lung cell integrity. In brief, the 4 th generation vaccine proposed here would prevent at-risk individuals from becoming sick from any incipient infection: that is, in the true meaning of the term, it would 'vaccinate' them against CoV in general, and in the present case of high emergency provide substantial protection against2019-nCoV. Moreover, should the molecule be genetically engineered to incorporate a neutral protein, such as human serum albumin, the soluble albumin-[ACE2R] 2 or albumin-[ACE2R] 4 complex injected in 2019-nCoV-infected patients would bind the circulating CoV. Patients could then undergo a treatment intervention of 'cleaning' their blood from albumin-[ACE2R] n-CoV complexes by a clinical protocol akin to dialysis. The patient's blood would be passed through a sterile column constructed with high affinity anti-human albumin antibodies. The anti-albumin antibody-albumin-[ACE2R] n-CoV moieties would be retained on the column, and the 'CoV-cleaned' blood returned to the patient to dampen the infection. 
It is possible that the binding of CoV spike protein to ACE2 is a down regulation of its expression, resulting in increased serum angiotensin II levels, and lung injury. Indeed, administration of recombinant human ACE2 in experimental models of CoV infection ameliorates lung injury in animal models [4] . Therefore, we propose that the 'CoV-cleaned' blood returned to the patient would also be enriched with recombinant human ACE2 to ameliorate lung injury. Discussion: Vaccines that are raised from whole pathogens -attenuated or inactivated -are called 1 st generation vaccines. Protocols that involve utilizing specific protein components extracted from the pathogens to reduce risks and side -effects in the host produce 2 nd generation vaccines. By contrast 3 rd generation vaccines are vaccines derived from administration of genetically engineered DNA or mRNA to induce the host cells to produce an antigen in vivo, which in turn is expected to be recognized as non-self, and generate protective antibodies [5] . Here, we propose a new avenue in vaccinology: the generation of a molecule with the purpose of preventing infectious disease -that is, a vaccine -, but not based on the traditional norms of antigen-idiotype binding. The 4 th generation vaccine we theorize here depends upon the specificity of receptor-ligand binding, but is a biochemical molecule constructed TRN-rewired CoV are neither, properly speaking, 1 st or 2 nd generation vaccine, and neither are they 3 rd generation vaccines: they are efficacious hybrid measures that prevent or slow down SARS-CoV, and possibly MERS-CoV epidemic. However, the urgency of the present moment precludes the somewhat lengthy experimentation time that would be required for the development and testing of a 3 rd generation vaccine of the sort. Since scientists have had several issues up to this point in the process of producing a 3 rd generation vaccine for SARS or MERS, whose epidemics were several years ago, it implausible that they could now develop such a 3 rd generation vaccine for 2019-nCov in the emergency the world is experiencing today. Conclusion: Taken together, the important points brought forth above emphasize the fact that the field of vaccinology cannot and must not be limited strictly to 1 st , 2 nd or 3 rd generation vaccines. A 4 th generation of vaccines is now emerging that may seem unconventional, but converge toward the same goal of preventing the spread of infectious disease. These 4 th generation vaccines may be particularly relevant in the case of flaming epidemics, when the time to generate, test, evaluate and distribute 1 st , 2 nd or 3 rd generation vaccines is prohibitive, such as is precisely the case now with 2019-nCoV. In certain circumstances, public health urgency demands immediate intervention, and precludes the time required to generate and test new vaccine species. Case in point, the threat now posed by the new member of the Coronavirus family (2019-nConV), whose discovery was announced by the Chinese health authorities on Chinese authorities reported having isolated a new type of coronavirus on 7 January 2020. Whereas 2019-nCoV is reported to a beta coronavirus closely related to SARS and other coronaviruses that originate from bats, it is unclear -and at this point almost irrelevant -to date if 2019-nConV originated from bats or from snake or other animals and subsequently transferred to bats. 
What is clear is that 2019-nConV is capable of direct humanto-human transmission, and its infection patterns grows alarmingly fast across all continents. To be clear, three weeks into its original reporting, 2019-nCoV has infected children, men, women and elderly in all continents. In China alone, the number of confirmed cases are over thirty-seven thousand infected individuals (n=37,593 as of day 21), and the number of fatalities from the disease has risen over eight hundred (n=813). Whereas both the percent confirmed cases and the percent death rate seem to have steadily decreased in parallel over the past 21 days, the case-fatality percent rate has remained steady above 2% (mean ± SD: 2.34% ± 0.39) (Figure 1) . As a reference point, the case-fatality percent rate of the Spanish influenza following World War I worldwide was at, or slightly above 2.5%; that same statistic for measles with no preventive vaccination measures is close 15%. In brief, 2019-nCoV seems to be less lethal than the Spanish flu, and may be abating somewhat at its original epicenter; it has generated heightened fear for a global pandemic as other epicenters have emerged, including Singapore and Thailand. In this hypothesis report, we have proposed here a new avenue into 4 th generation vaccines. Thus, vaccine protocols that do not involve the generation of antibodies against whole pathogens uses protein extracts obtained from pathogens, or nucleic acids related to pathogens. Rather, the preventive and protecting ability of the intervention we propose, which still relies on the specific binding of the pathogen to a substrate generated specifically against it, is a biochemical construct, which could actually best be generated by artificial intelligence of immune surveillance [8] algorithms in the not so distant future. The construct we propose here, specific to CoV, and applicable to 2019-nCoV in the context of the immediate urgency that is upon us, can be generated and expanded quickly, simply and reliably in any biochemistry laboratory. We also describe how it can be effectively utilized in treatment protocols of patients already infected with 2019-nCoV, in a slight modification of the common clinical protocol for renal dialysis." --- # PEGASUS for COVID Literature Summarization ## Model Description Pegasus-large fine-tuned for COVID literature summarization ## Training data The data is the [CORD-19](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge) dataset, containing over 400,000 scholarly articles, including over 150,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. A subset of 1,000 articles and their abstracts were used. The baseline was from the PEGASUS model: [google/pegasus-large](https://huggingface.co/google/pegasus-large). PEGASUS was originally proposed by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf). ## Evaluation Results The results before and after the fine-tuning on our dataset are shown below: | Fine-tuning | R-1 | R-2 | R-L | |:-----------:|:-----:|:-----:|:------:| | Yes | 36.64 | 12.97 | 20.73 | | No | 25.51 | 8.07 | 15.21 | ### How to use We provide a simple snippet of how to use this model for the task of text summarization in PyTorch. 
```Python from transformers import PegasusTokenizer, PegasusForConditionalGeneration, TFPegasusForConditionalGeneration # Let's load the model and the tokenizer model_name = "mayu0007/pegasus_large_covid" tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name) # Some text to summarize here text_to_summarize = "Background: On 31 December 2019, the World Health Organization was alerted to several cases of pneumonia in Wuhan City, Hubei Province of China. The causative pathogen was suspected to be a virus, but it did not match any other known virus. The following day, Wuhan City officials closed the Huanan seafood market, suspected to be the source of the mystery pathogen, because it was reported that certain patients presenting with the symptoms were vendors at that public market. By January 4 2020, the Chinese Health Organization reported 44 active cases. On 7 January 2020, Chinese authorities confirmed that they had identified the causative agent as a novel Coronavirus (CoV). That family includes viruses of the common cold as well as viruses known to cause Middle-East Respiratory Syndrome (MERS); Severe Acute Respiratory Syndrome (SARS).\\\\ The new CoV was named Novel Coronavirus (emerged late) 2019 (2019-nCoV). Two days later, Chinese authorities reported the first fatality linked to 2019-nCoV: a 61-year-old male who had been admitted in the first cohort of patients. He had several other underlying medical conditions, which may have contributed to weakening his immune system. Apart from respiratory failure and severe pneumonia caused by 2019-nCoV, the patient suffered from abdominal tumors and chronic liver disease. On 12 January, Chinese scientists released the genetic sequence of 2019-nCoV, in part because nonofficial report of international spread of 2019-nCoV had commenced. The next day, Thailand officially reported its first imported case of 2019-nCoV: a 61-year-old woman from Wuhan -she, however, denied having visited the Huanan seafood market. On January 15 2020, Chinese authorities reported the 140 ©Biomedical Informatics (2020) second death attributed to 2019-nCoV: a 69-year-old male who also suffered of other unrelated severe pathologies, including myocarditis. Infection with 2019-nCov, nonetheless, were thought to be responsible for his abnormal renal function, and severely damaged to multiple organ functions. The following day, Japan reported its first case of 2019-nCoV: a Chinese man in his 30s, who also denied having visited the Huanan market. On January 17, Thailand confirmed the second imported case of 2019-nCoV. Chinese authorities noted a spike in 2019-nCoV infections between January 18 and 19, 2020. That observation arose the suspicion that 2019-nCoV was capable of direct human-to-human transmission. The following day, 20 January 2020, South Korea confirmed its first case of 2019-nCoV infection: a male patient who denied having visited any public markets in Wuhan. On January 21 2020, the World Health Organization confirmed human-to-human transmission of 2019-nCov. As of that date, the total official number of cases has risen to 222, although it was suspected to be in reality much higher. Infection had spread to health-care workers, and it was suspected that one mode of transmission may be via the eye mucosa. Chinese authorities have also reported a fourth death. The situation was fast becoming alarming: suspected cases appeared in France, Italy and other countries of Europe. 
Australia seems to be affected as well. Other countries in Asia also reported suspected cases, including the Philippines and Singapore. Suspected cases of 2019-nCoV were reported in North America. The following day, 22 January 2020, World Health Organization Director-General Tedros Adhanom Ghebreyesus convened an emergency meeting to decide whether 2019-nCoV should be declared to constitute a worldwide public health emergency of international concern. Despite a significant rise in confirmed cases of individuals infected with 2019-nCoV -in China alone, at 580 infected individuals, with a death toll now at 17 in the Hubei Province alone -the emergency committee deferred its decision on whether to advise Director-General Ghebreyesus to declare the 2019-nCoV outbreak a public health emergency pandemic of international concern. On January 23, Chinese authorities shut down the city of Wuhan: no public transportation, closed airport and railway station for 11 million people. Later that same day, the city of Ezhou is also in complete lockdown. Festivities for the upcoming Chinese New Year were cancelled throughout China to minimize human contact in crowds.\\\\ The following day, the city of Huanggang was declared under lockdown. Singapore confirmed its first imported case, and Vietnam confirmed two cases. Director-General Ghebreyesus declared that, indeed, the 2019-nCoV outbreaks is a public health emergency of international concern. On January 24 2020, the official number of confirmed cases of patients infected with 2019-nCoV had risen to 830 in China alone, with 177 (21%) among them in severe and critical condition. The number of fatalities caused by 2019-nCoV in China was now 25. Japan confirmed its second 2019-nCoV case. Nepal confirmed its first case. The following day, Australia confirmed its first case of 2019-nCoV, as did France. Two suspected cases in Italy were being closely monitored. In China, the official number of new infections -that is, over the previous 24 h -was 444, and the number of new deaths was 16 above and beyond the number reported the previous day. The official number of individuals confirmed to be infected with 2019-nCoV in China became 1,287, including 237 (20.7%) in severe and critical condition. There is no first-, second-or third-generation vaccine available for any members of the Cov family, nor is there practically the time to develop, raise, test and evaluate the effectiveness of a vaccine for 2019-nCov. Moreover, the World Health Organization stated in its 12 January 2020 recommendations entitled \\\\\\\\'Clinical management of severe acute respiratory infection when novel coronavirus (nCoV) infection is suspected -Interim guidance; WHO/nCoV/Clinical/2020.1\\\\\\\\' that "…there is no current evidence from RCTs to recommend any specific anti-nCoV treatment for patients with suspected or confirmed nCoV…". In brief, the international medical community is totally devoid of tools to combat the unfolding 2019-nCov thereat to global public healthnot in terms of preventive medicine to protect subjects at-risk, and not in terms of clinical interventions for infected patients. What is known, however, is that 2019-nCov, like all corona viruses belong to the Coronaviruses (Coronaviridae) family of RNA viruses that cause diseases in mammals and birds that include diarrhea in cows and pigs, and upper respiratory disease in chickens. In humans, the virus causes respiratory infections, which are generally often mild, rarely lethal. 
The trends we begin to observe with 2019-nCov suggest that it can be directly transmitted humanto-human, and that it causes serious infections in roughly one in five patients that can lead to death: staggering preliminary statistics. Previous research with other CoV members indicates that proteins of Coronaviruses that could be used in the generation of vaccines include the spike, the envelope, the membrane and the ©Biomedical Informatics (2020) nucleocapsid proteins. The spike protein is of particular interest because it is responsible for the penetration of the virus into the cell, which leads to the initiation of viral replication and proliferation. The spike protein binds to the angiotensin-converting enzyme 2 (ACE2) transmembrane -receptor on the eukaryotic host cell. Case in point, SARS-CoV binds to ACE2, as does MERS-CoV [2] . Indeed, ACE2 is the obligate cellular receptor for CoV entry process via the spike protein [3] . While the development of a vaccine of the 1 st , 2 nd or 3 rd generation against the spike protein is possible but time consuming, it is therefore timely ad critical to propose new possible and practical approaches for preventing infection of subjects at-risk and for treatment intervention of patients infected with 2019-nCov, or any other CoV for that matter. One such alternative protocol is proposed below. Methodology: Short of 1 st , 2 nd or 3 rd generation vaccine measures for preventive CoV, and short of clinical treatment interventions for patients infected with CoV, and specifically, 2019-nCov, it is timely and critical to evaluate new alternatives. Here, we propose that one putative 4 th generation vaccine to control 2019-nCoV explosion might simply involve the genetic engineering a soluble binary molecule (i.e., ACE2R-ACE2R; [ACE2R] 2) or its quaternary form (i.e. two intertwined ACE2R-ACE2R; [ACE2R] 4). This process is fast, reliable and precise by today's standard, and doable in any modern biochemistry laboratory. The obtained sterile molecule could be injected in individuals at high risk as a preventive 4 th vaccination measure, or as a treatment intervention in confirmed cases of 2019-nCoV infection. The soluble molecule is expected to bind the spike protein of circulating CoV with higher affinity than the transmembrane ACE2R, and to render the CoV particles, therefore, incapable of binding to the cell receptor, of penetration into the cells, and of replicating inside the cell. The proposed 4 th generation vaccine would, besides protecting the cells from CoV infection, also preserve ACE2 intracellular functional activity, and guard against the rise of serum angiotensin II levels, which can be pathogenic to lung cell integrity. In brief, the 4 th generation vaccine proposed here would prevent at-risk individuals from becoming sick from any incipient infection: that is, in the true meaning of the term, it would 'vaccinate' them against CoV in general, and in the present case of high emergency provide substantial protection against2019-nCoV. Moreover, should the molecule be genetically engineered to incorporate a neutral protein, such as human serum albumin, the soluble albumin-[ACE2R] 2 or albumin-[ACE2R] 4 complex injected in 2019-nCoV-infected patients would bind the circulating CoV. Patients could then undergo a treatment intervention of 'cleaning' their blood from albumin-[ACE2R] n-CoV complexes by a clinical protocol akin to dialysis. 
The patient's blood would be passed through a sterile column constructed with high affinity anti-human albumin antibodies. The anti-albumin antibody-albumin-[ACE2R] n-CoV moieties would be retained on the column, and the 'CoV-cleaned' blood returned to the patient to dampen the infection. It is possible that the binding of CoV spike protein to ACE2 is a down regulation of its expression, resulting in increased serum angiotensin II levels, and lung injury. Indeed, administration of recombinant human ACE2 in experimental models of CoV infection ameliorates lung injury in animal models [4] . Therefore, we propose that the 'CoV-cleaned' blood returned to the patient would also be enriched with recombinant human ACE2 to ameliorate lung injury. Discussion: Vaccines that are raised from whole pathogens -attenuated or inactivated -are called 1 st generation vaccines. Protocols that involve utilizing specific protein components extracted from the pathogens to reduce risks and side -effects in the host produce 2 nd generation vaccines. By contrast 3 rd generation vaccines are vaccines derived from administration of genetically engineered DNA or mRNA to induce the host cells to produce an antigen in vivo, which in turn is expected to be recognized as non-self, and generate protective antibodies [5] . Here, we propose a new avenue in vaccinology: the generation of a molecule with the purpose of preventing infectious disease -that is, a vaccine -, but not based on the traditional norms of antigen-idiotype binding. The 4 th generation vaccine we theorize here depends upon the specificity of receptor-ligand binding, but is a biochemical molecule constructed TRN-rewired CoV are neither, properly speaking, 1 st or 2 nd generation vaccine, and neither are they 3 rd generation vaccines: they are efficacious hybrid measures that prevent or slow down SARS-CoV, and possibly MERS-CoV epidemic. However, the urgency of the present moment precludes the somewhat lengthy experimentation time that would be required for the development and testing of a 3 rd generation vaccine of the sort. Since scientists have had several issues up to this point in the process of producing a 3 rd generation vaccine for SARS or MERS, whose epidemics were several years ago, it implausible that they could now develop such a 3 rd generation vaccine for 2019-nCov in the emergency the world is experiencing today. Conclusion: Taken together, the important points brought forth above emphasize the fact that the field of vaccinology cannot and must not be limited strictly to 1 st , 2 nd or 3 rd generation vaccines. A 4 th generation of vaccines is now emerging that may seem unconventional, but converge toward the same goal of preventing the spread of infectious disease. These 4 th generation vaccines may be particularly relevant in the case of flaming epidemics, when the time to generate, test, evaluate and distribute 1 st , 2 nd or 3 rd generation vaccines is prohibitive, such as is precisely the case now with 2019-nCoV. In certain circumstances, public health urgency demands immediate intervention, and precludes the time required to generate and test new vaccine species. Case in point, the threat now posed by the new member of the Coronavirus family (2019-nConV), whose discovery was announced by the Chinese health authorities on Chinese authorities reported having isolated a new type of coronavirus on 7 January 2020. 
Whereas 2019-nCoV is reported to a beta coronavirus closely related to SARS and other coronaviruses that originate from bats, it is unclear -and at this point almost irrelevant -to date if 2019-nConV originated from bats or from snake or other animals and subsequently transferred to bats. What is clear is that 2019-nConV is capable of direct humanto-human transmission, and its infection patterns grows alarmingly fast across all continents. To be clear, three weeks into its original reporting, 2019-nCoV has infected children, men, women and elderly in all continents. In China alone, the number of confirmed cases are over thirty-seven thousand infected individuals (n=37,593 as of day 21), and the number of fatalities from the disease has risen over eight hundred (n=813). Whereas both the percent confirmed cases and the percent death rate seem to have steadily decreased in parallel over the past 21 days, the case-fatality percent rate has remained steady above 2% (mean ± SD: 2.34% ± 0.39) (Figure 1) . As a reference point, the case-fatality percent rate of the Spanish influenza following World War I worldwide was at, or slightly above 2.5%; that same statistic for measles with no preventive vaccination measures is close 15%. In brief, 2019-nCoV seems to be less lethal than the Spanish flu, and may be abating somewhat at its original epicenter; it has generated heightened fear for a global pandemic as other epicenters have emerged, including Singapore and Thailand. In this hypothesis report, we have proposed here a new avenue into 4 th generation vaccines. Thus, vaccine protocols that do not involve the generation of antibodies against whole pathogens uses protein extracts obtained from pathogens, or nucleic acids related to pathogens. Rather, the preventive and protecting ability of the intervention we propose, which still relies on the specific binding of the pathogen to a substrate generated specifically against it, is a biochemical construct, which could actually best be generated by artificial intelligence of immune surveillance [8] algorithms in the not so distant future. The construct we propose here, specific to CoV, and applicable to 2019-nCoV in the context of the immediate urgency that is upon us, can be generated and expanded quickly, simply and reliably in any biochemistry laboratory. We also describe how it can be effectively utilized in treatment protocols of patients already infected with 2019-nCoV, in a slight modification of the common clinical protocol for renal dialysis." # Tokenize our text batch = tokenizer(text_to_summarize, truncation=True, padding='longest', return_tensors="pt") # Generate the output output = model.generate(**batch) output_text = tokenizer.batch_decode(output, skip_special_tokens=True) # Finally, we can print the generated summary print(output_text) ```
mbateman/distilbert-base-uncased-finetuned-imdb
3d90b0ff7e76ecaad3bda2a78e880d94ed92719a
2022-01-20T20:43:24.000Z
[ "pytorch", "distilbert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
mbateman
null
mbateman/distilbert-base-uncased-finetuned-imdb
3
null
transformers
21,557
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6482 | 1.0 | 625 | 2.4283 | | 2.5156 | 2.0 | 1250 | 2.3816 | | 2.475 | 3.0 | 1875 | 2.3638 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.1
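## Usage sketch The card above does not include an inference example; since the checkpoint is tagged `fill-mask`, a minimal sketch (the example sentence is illustrative, not from the card) might look like this:
```python
from transformers import pipeline

# Minimal sketch: use the IMDB-fine-tuned masked language model to fill a blank.
mask_filler = pipeline("fill-mask", model="mbateman/distilbert-base-uncased-finetuned-imdb")

for pred in mask_filler("This is a great [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```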
mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda
52ae0eae8f5cf1b0bbb598fbfd7090a5b1f90cc5
2021-11-25T09:04:10.000Z
[ "pytorch", "xlm-roberta", "token-classification", "lug", "dataset:masakhaner", "arxiv:2103.11811", "transformers", "NER", "autotrain_compatible" ]
token-classification
false
mbeukman
null
mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda
3
null
transformers
21,558
--- language: - lug tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall widget: - text: "Empaka zaakubeera mu kibuga Liverpool e Bungereza , okutandika nga July 12 ." --- # xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-luganda](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the luganda part. More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). ## About This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set). This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Contact & More information For more information about the models, including training scripts, detailed results and further resources, you can visit the the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository. ### Training Resources In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB's of VRAM when using a batch size of 1. ## Data The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for the use of this data is that it is the "first large, publicly available, high­ quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811). ## Intended Use This model are intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and downright performance is limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next. 
## Limitations This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer. Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data). As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words and entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to. ### Privacy & Ethical Considerations The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model. ## Metrics The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories. These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable. The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes. ## Caveats and Recommendations In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data. ## Model Structure Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category. 
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda) (This model) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | lug | 85.37 | 82.75 | 88.17 | 78.00 | 82.00 | 80.00 | 92.00 | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | lug | 82.57 | 80.38 | 84.89 | 75.00 | 80.00 | 82.00 | 87.00 | | [xlm-roberta-base-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luganda) | [base](https://huggingface.co/xlm-roberta-base) | lug | 80.91 | 78.59 | 83.37 | 73.00 | 78.00 | 77.00 | 86.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Empaka zaakubeera mu kibuga Liverpool e Bungereza , okutandika nga July 12 ." ner_results = nlp(example) print(ner_results) ```
mbien/fma2vec2popularity
883c633e1ebed14b450be99a2e0c8ae561947ca8
2021-07-06T12:36:26.000Z
[ "pytorch", "wav2vec2", "transformers" ]
null
false
mbien
null
mbien/fma2vec2popularity
3
null
transformers
21,559
# Predicting music popularity using DNNs This is a model fine-tuned for music popularity classification, created as part of DH-401: Digital Musicology class on EPFL ## Team * Elisa ([email protected]) * Michał ([email protected]) * Noé ([email protected]) ## Milestone 3 Main notebook presenting out results is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3.ipynb) Notebook describing the details of Wav2Vec2.0 pre-training and fine-tuning for the task is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3-wav2vec2.ipynb) ## Milestone 2 Exploratory data analysis notebook is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone2.ipynb) ## Milestone 1 Refined project proposal is available [here](https://github.com/Glorf/DH-401/blob/main/milestone0.md) ## Milestone 0 Original project proposal is available in git history [here](https://github.com/Glorf/DH-401/blob/bb14813ff2bbbd9cdc6b6eecf34c9e3c160598eb/milestone0.md)
meghanabhange/hinglish-sbert
5f5df8ca42d1cb95fa09d48062d168410f451a62
2021-05-19T23:16:15.000Z
[ "pytorch", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
meghanabhange
null
meghanabhange/hinglish-sbert
3
null
transformers
21,560
Entry not found
meghanabhange/history_mcq
9338174fb06ccf8f9be37043cb6bacdeffb2a7b3
2021-06-28T11:18:55.000Z
[ "pytorch", "roberta", "multiple-choice", "transformers" ]
multiple-choice
false
meghanabhange
null
meghanabhange/history_mcq
3
null
transformers
21,561
Entry not found
melon422/DialoGPT-medium-MelonBot2
eed4c76cee66afabfc164ecac5138281690488db
2022-01-15T16:26:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
melon422
null
melon422/DialoGPT-medium-MelonBot2
3
null
transformers
21,562
--- tags: - conversational --- # Melon Bot2 DialoGPT Model
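The card gives no usage example; a minimal single-turn chat sketch, assuming the standard DialoGPT interface (the user message is illustrative), could look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch of one chat turn with the DialoGPT-style checkpoint.
tokenizer = AutoTokenizer.from_pretrained("melon422/DialoGPT-medium-MelonBot2")
model = AutoModelForCausalLM.from_pretrained("melon422/DialoGPT-medium-MelonBot2")

user_input = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(user_input, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, user_input.shape[-1]:][0], skip_special_tokens=True))
```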
metamong1/longbartwithdoctype
95eeba58a8c6cd44f40db1546524b0febb87f9c0
2021-12-18T12:06:47.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
metamong1
null
metamong1/longbartwithdoctype
3
null
transformers
21,563
Entry not found
michaelrglass/rag-token-nq-kgi0-trex
9c369ae01a0b5feafa2ea18326e7c6a480263426
2021-04-20T18:24:05.000Z
[ "pytorch", "rag", "transformers" ]
null
false
michaelrglass
null
michaelrglass/rag-token-nq-kgi0-trex
3
null
transformers
21,564
Entry not found
midas/gupshup_h2e_t5_mtl
9ee6151109cf40cf8bbd8f04090b862f4983dc4d
2021-11-14T02:08:18.000Z
[ "pytorch", "t5", "text2text-generation", "arxiv:1910.04073", "transformers", "autotrain_compatible" ]
text2text-generation
false
midas
null
midas/gupshup_h2e_t5_mtl
3
null
transformers
21,565
# Gupshup GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021 Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf) Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup) ### Dataset Please request for the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0). Dataset is available for `Hinglish Dilaogues to English Summarization`(h2e) and `English Dialogues to English Summarization`(e2e). For each task, Dialogues/conversastion have `.source`(train.source) as file extension whereas Summary has `.target`(train.target) file extension. ".source" file need to be provided to `input_path` and ".target" file to `reference_path` argument in the scripts. ## Models All model weights are available on the Huggingface model hub. Users can either directly download these weights in their local and provide this path to `model_name` argument in the scripts or use the provided alias (to `model_name` argument) in scripts directly; this will lead to download weights automatically by scripts. Model names were aliased in "gupshup_TASK_MODEL" sense, where "TASK" can be h2e,e2e and MODEL can be mbart, pegasus, etc., as listed below. **1. Hinglish Dialogues to English Summary (h2e)** | Model | Huggingface Alias | |---------|-------------------------------------------------------------------------------| | mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) | | PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) | | T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) | | T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) | | BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) | | GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) | **2. English Dialogues to English Summary (e2e)** | Model | Huggingface Alias | |---------|-------------------------------------------------------------------------------| | mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) | | PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) | | T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) | | T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) | | BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) | | GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) | ## Inference ### Using command line 1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using ``` git clone https://github.com/midas-research/gupshup.git pip install -r requirements.txt ``` 2. run_eval script has the following arguments. * **model_name** : Path or alias to one of our models available on Huggingface as listed above. * **input_path** : Source file or path to file containing conversations, which will be summarized. * **save_path** : File path where to save summaries generated by the model. * **reference_path** : Target file or path to file containing summaries, used to calculate matrices. * **score_path** : File path where to save scores. * **bs** : Batch size * **device**: Cuda devices to use. 
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct path to these files in the arguments `input_path` and `reference_path`. Or you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mbart model, run the following command: ``` python run_eval.py \ --model_name midas/gupshup_h2e_mbart \ --input_path data/h2e/test.source \ --save_path generated_summary.txt \ --reference_path data/h2e/test.target \ --score_path scores.txt \ --bs 8 ``` Another example, to generate English summaries from English dialogues using the Pegasus model: ``` python run_eval.py \ --model_name midas/gupshup_e2e_pegasus \ --input_path data/e2e/test.source \ --save_path generated_summary.txt \ --reference_path data/e2e/test.target \ --score_path scores.txt \ --bs 8 ``` Please create an issue if you are facing any difficulties in replicating the results. ### References Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful. [1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf) ``` @inproceedings{mehnaz2021gupshup, title={GupShup: Summarizing Open-Domain Code-Switched Conversations}, author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing}, pages={6177--6192}, year={2021} } ```
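### Loading a checkpoint directly Beyond the provided `run_eval.py` script, the checkpoints can also be loaded straight from the Hub. This is a minimal sketch, not taken from the project scripts, assuming the aliases behave as standard seq2seq checkpoints; the dialogue string is a placeholder for one conversation from `test.source`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch: summarize one Hinglish dialogue with the h2e mBART alias.
model_name = "midas/gupshup_h2e_mbart"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = "..."  # one conversation taken from test.source
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```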
miguelvictor/python-fromzero-gpt2-base
461524cee3a36a5f084b40681780b7cf5ad7609c
2021-05-23T09:26:31.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
miguelvictor
null
miguelvictor/python-fromzero-gpt2-base
3
null
transformers
21,566
Entry not found
miguelvictor/python-fromzero-lstmlm
31f745fad89e00814adbf640103ee4366cef0b7b
2021-04-29T05:16:56.000Z
[ "pytorch", "tensorboard", "lstmlm", "transformers" ]
null
false
miguelvictor
null
miguelvictor/python-fromzero-lstmlm
3
null
transformers
21,567
Entry not found
miguelvictor/python-fromzero-t5-base
06ca0f807171ec78d045590353df5e255090754c
2021-04-29T05:03:06.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
miguelvictor
null
miguelvictor/python-fromzero-t5-base
3
null
transformers
21,568
Entry not found
mikeee/dummy-model
8bbf93b956fe48308a5e2dcaf18bcf6349ed0efc
2022-01-23T10:27:54.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
mikeee
null
mikeee/dummy-model
3
null
transformers
21,569
Entry not found
milyiyo/selectra-medium-finetuned-amazon-review
e205fa421da7e0e2ef8e2fa77b911241d1474808
2022-01-20T21:31:15.000Z
[ "pytorch", "tensorboard", "electra", "text-classification", "transformers" ]
text-classification
false
milyiyo
null
milyiyo/selectra-medium-finetuned-amazon-review
3
null
transformers
21,570
Entry not found
mimi/Waynehills_NLP_KE-T5
304b127be9e57fa6afdcc4e0dbd6585d1f0fdba2
2022-03-28T07:56:50.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mimi
null
mimi/Waynehills_NLP_KE-T5
3
null
transformers
21,571
Entry not found
minu/koelectra-nsmc-finetuned
7286162c0090608c39c7a4400cc015a9d7421023
2020-07-24T18:14:24.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
minu
null
minu/koelectra-nsmc-finetuned
3
null
transformers
21,572
Entry not found
mitra-mir/ALBERT-Persian-Poetry
a559fe803bdfb2b06f1dc90ca1672335c44af2fe
2021-04-27T06:55:48.000Z
[ "pytorch", "tf", "albert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
mitra-mir
null
mitra-mir/ALBERT-Persian-Poetry
3
null
transformers
21,573
A Transformer-based Persian Language Model Further Pretrained on Persian Poetry ALBERT was first introduced by [Hooshvare](https://huggingface.co/HooshvareLab/albert-fa-zwnj-base-v2?text=%D8%B2+%D8%A2%D9%86+%D8%AF%D8%B1%D8%AF%D8%B4+%5BMASK%5D+%D9%85%DB%8C+%D8%B3%D9%88%D8%AE%D8%AA+%D8%AF%D8%B1+%D8%A8%D8%B1) with a 30,000-token vocabulary as a lite BERT for self-supervised learning of language representations for the Persian language. Here we wanted to utilize its capabilities by further pretraining it on a large corpus of Persian poetry. This model has been post-trained on 80 percent of the poetry verses of the Persian poetry dataset (Ganjoor) and has been evaluated on the remaining 20 percent.
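The card does not include a usage snippet; since the tags list `fill-mask`, a minimal sketch along these lines should work (the masked Persian verse is the example text from the linked base-model widget and is purely illustrative):
```python
from transformers import pipeline

# Minimal sketch: query the poetry-adapted ALBERT through the standard
# fill-mask pipeline (assumes the uploaded checkpoint keeps the MLM head).
fill_mask = pipeline("fill-mask", model="mitra-mir/ALBERT-Persian-Poetry")

for pred in fill_mask("ز آن دردش [MASK] می سوخت در بر"):
    print(pred["token_str"], round(pred["score"], 3))
```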
mjtaheri11/test-zarebin-2
fa26828d8bbd664a4d4cbd6d4dfe52ce6bb018f5
2022-01-16T16:13:13.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
mjtaheri11
null
mjtaheri11/test-zarebin-2
3
null
transformers
21,574
Entry not found
mlkorra/obgv-gender-bert-hi-en
862e2e43b883443c76db35f99fc85d2eb170b876
2021-09-03T06:39:23.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
mlkorra
null
mlkorra/obgv-gender-bert-hi-en
3
null
transformers
21,575
Entry not found
mmoradi/Robust-Biomed-RoBERTa-QuestionAnswering
509b219c4b15393d5b30fe6fa3f119de479c3ca2
2021-10-07T10:03:52.000Z
[ "pytorch", "jax", "roberta", "feature-extraction", "transformers" ]
feature-extraction
false
mmoradi
null
mmoradi/Robust-Biomed-RoBERTa-QuestionAnswering
3
null
transformers
21,576
Entry not found
mofawzy/argpt2-goodreads
0d3d7f91e05a5fe2b13f0215bf55b4ccbab42937
2021-12-01T06:55:41.000Z
[ "pytorch", "gpt2", "text-generation", "ar", "dataset:LABR", "transformers", "generated_from_trainer", "model-index" ]
text-generation
false
mofawzy
null
mofawzy/argpt2-goodreads
3
null
transformers
21,577
--- tags: - generated_from_trainer language: ar datasets: - LABR widget: - text: "كان الكاتب ممكن" - text: "كتاب ممتاز ولكن" - text: "رواية درامية جدا والافكار بسيطة" model-index: - name: argpt2-goodreads results: [] --- # argpt2-goodreads This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the goodreads LABR dataset. It achieves the following results on the evaluation set: - Loss: 1.4389 ## Model description Generates Arabic sentences in the style of goodreads reviews, both positive and negative examples, based on the goodreads (LABR) corpus. ## Intended uses & limitations The model is fine-tuned on Arabic only, with the aim of generating review-like sentences; to do the same for other languages you need to fine-tune it on your own data. Any harmful content generated by GPT-2 should not be used anywhere. ## Training and evaluation data Training and validation were done on the goodreads LABR dataset: 80% for training and 20% for testing. ## Usage ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("mofawzy/argpt2-goodreads") model = AutoModelForCausalLM.from_pretrained("mofawzy/argpt2-goodreads") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 ### Training results - train_loss = 1.474 ### Evaluation results - eval_loss = 1.4389 ### train metrics - epoch = 20.0 - train_loss = 1.474 - train_runtime = 2:18:14.51 - train_samples = 108110 - train_samples_per_second = 260.678 - train_steps_per_second = 2.037 ### eval metrics - epoch = 20.0 - eval_loss = 1.4389 - eval_runtime = 0:04:37.01 - eval_samples = 27329 - eval_samples_per_second = 98.655 - eval_steps_per_second = 0.773 - perplexity = 4.2162 ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
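The usage section above only loads the model; a short generation sketch that continues it (the prompt is one of the widget examples, and the sampling settings are illustrative):
```python
# Continues the loading snippet above; sampling settings are illustrative.
inputs = tokenizer("كتاب ممتاز ولكن", return_tensors="pt")
outputs = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.95, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```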
mohammed/ar
736ec566e77d6b0b6da9c26b12c466d4679ac76b
2021-07-06T12:47:52.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "ar", "dataset:common_voice", "dataset:arabic_speech_corpus", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
mohammed
null
mohammed/ar
3
null
transformers
21,578
--- language: ar datasets: - common_voice - arabic_speech_corpus metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Mohammed XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ar type: common_voice args: ar metrics: - name: Test WER type: wer value: 36.69 - name: Validation WER type: wer value: 36.69 --- # Wav2Vec2-Large-XLSR-53-Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice) and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python %%capture !pip install datasets !pip install transformers==4.4.0 !pip install torchaudio !pip install jiwer !pip install tnkeeh import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ar", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("mohammed/ar") model = Wav2Vec2ForCTC.from_pretrained("mohammed/ar") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("The predicted sentence is: ", processor.batch_decode(predicted_ids)) print("The original sentence is:", test_dataset["sentence"][:2]) ``` The output is: ``` The predicted sentence is : ['ألديك قلم', 'ليست نارك مكسافة على هذه الأرض أبعد من يوم أمس'] The original sentence is: ['ألديك قلم ؟', 'ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.'] ``` ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice: ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re # creating a dictionary with all diacritics dict = { 'ِ': '', 'ُ': '', 'ٓ': '', 'ٰ': '', 'ْ': '', 'ٌ': '', 'ٍ': '', 'ً': '', 'ّ': '', 'َ': '', '~': '', ',': '', 'ـ': '', '—': '', '.': '', '!': '', '-': '', ';': '', ':': '', '\'': '', '"': '', '☭': '', '«': '', '»': '', '؛': '', 'ـ': '', '_': '', '،': '', '“': '', '%': '', '‘': '', '”': '', '�': '', '_': '', ',': '', '?': '', '#': '', '‘': '', '.': '', '؛': '', 'get': '', '؟': '', ' ': ' ', '\'ۖ ': '', '\'': '', '\'ۚ' : '', ' \'': '', '31': '', '24': '', '39': '' } # replacing multiple diacritics using dictionary (stackoverflow is amazing) def remove_special_characters(batch): # Create a regular expression from the dictionary keys regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys()))) # For each match, look-up corresponding value in dictionary batch["sentence"] = regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], batch["sentence"]) return batch test_dataset = 
load_dataset("common_voice", "ar", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mohammed/ar") model = Wav2Vec2ForCTC.from_pretrained("mohammed/ar") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) test_dataset = test_dataset.map(remove_special_characters) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 36.69% ## Future Work One can use *data augmentation*, *transliteration*, or *attention_mask* to increase the accuracy.
mohsenfayyaz/bert-base-uncased-offenseval2019-downsample
83849e36170fa976c0f28f507a933e93751f3afb
2021-05-19T23:40:38.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
mohsenfayyaz
null
mohsenfayyaz/bert-base-uncased-offenseval2019-downsample
3
null
transformers
21,579
Entry not found
mohsenfayyaz/electra-base-discriminator-offenseval2019-downsample
c2964a468152df476471198e569d4bbedf62c97b
2021-05-04T14:15:44.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
mohsenfayyaz
null
mohsenfayyaz/electra-base-discriminator-offenseval2019-downsample
3
null
transformers
21,580
Entry not found
mohsenfayyaz/xlnet-base-cased-zihangdai
7c57b3f09c41cf1d785f3726c38dd88c968201e4
2021-06-27T17:21:03.000Z
[ "pytorch", "xlnet", "feature-extraction", "transformers" ]
feature-extraction
false
mohsenfayyaz
null
mohsenfayyaz/xlnet-base-cased-zihangdai
3
null
transformers
21,581
Entry not found
mollypak/cardiff-xlm-roberta-base
9bfba54b69400344ae14aed786bba3a19456231d
2021-12-14T12:34:14.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
false
mollypak
null
mollypak/cardiff-xlm-roberta-base
3
null
transformers
21,582
Entry not found
mollypak/cardiff
96eccd8d8a29c5cab5e3ca367d4454ff5f48d1a4
2021-12-18T14:33:00.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
mollypak
null
mollypak/cardiff
3
null
transformers
21,583
Entry not found
mollypak/distilbert-base-uncased-finetuned-cola
97fb27a38772c3c2cebf5a2a214587099fc2359f
2021-11-06T07:30:09.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
mollypak
null
mollypak/distilbert-base-uncased-finetuned-cola
3
null
transformers
21,584
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5556088865196797 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7629 - Matthews Correlation: 0.5556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.538 | 1.0 | 535 | 0.5812 | 0.3250 | | 0.3669 | 2.0 | 1070 | 0.5216 | 0.4993 | | 0.2461 | 3.0 | 1605 | 0.6071 | 0.5016 | | 0.1811 | 4.0 | 2140 | 0.7629 | 0.5556 | | 0.1347 | 5.0 | 2675 | 0.8480 | 0.5547 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
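## Usage sketch No inference example accompanies the card; since this is a binary CoLA (linguistic acceptability) classifier, a minimal sketch might be the following (label names default to `LABEL_0`/`LABEL_1` unless `id2label` was customised; the sentences are illustrative):
```python
from transformers import pipeline

# Minimal sketch: score sentences for linguistic acceptability (CoLA task).
classifier = pipeline("text-classification", model="mollypak/distilbert-base-uncased-finetuned-cola")

print(classifier("The book was read by the student."))
print(classifier("Book the student read the by."))
```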
moma1820/TAPT100
5e7bb11652d9c3375ac35aeee07f39606c2313f3
2021-10-13T00:11:22.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
false
moma1820
null
moma1820/TAPT100
3
null
transformers
21,585
Entry not found
monsoon-nlp/ar-seq2seq-gender-encoder
3f1c75a6ceb9b96f7dd05aa7db7d9c075c32feb2
2021-05-19T23:54:14.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "ar", "transformers" ]
feature-extraction
false
monsoon-nlp
null
monsoon-nlp/ar-seq2seq-gender-encoder
3
null
transformers
21,586
--- language: ar --- # ar-seq2seq-gender (encoder) This is a seq2seq model (encoder half) to "flip" gender in **first-person** Arabic sentences. The model can augment your existing Arabic data, or generate counterfactuals to test a model's decisions (would changing the gender of the subject or speaker change output?). Intended Examples: - 'أنا سعيد' <=> 'انا سعيدة' - 'ركض إلى المتجر' <=> 'ركضت إلى المتجر' People's names, gender pronouns, gendered words (father, mother), and many other values are currently unchanged by this model. Future versions may be trained on more data. ## Sample Code ``` import torch from transformers import AutoTokenizer, EncoderDecoderModel model = EncoderDecoderModel.from_encoder_decoder_pretrained( "monsoon-nlp/ar-seq2seq-gender-encoder", "monsoon-nlp/ar-seq2seq-gender-decoder", min_length=40 ) tokenizer = AutoTokenizer.from_pretrained('monsoon-nlp/ar-seq2seq-gender-decoder') # same as MARBERT original input_ids = torch.tensor(tokenizer.encode("أنا سعيدة")).unsqueeze(0) generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id) tokenizer.decode(generated.tolist()[0][1 : len(input_ids[0]) - 1]) > 'انا سعيد' ``` https://colab.research.google.com/drive/1S0kE_2WiV82JkqKik_sBW-0TUtzUVmrV?usp=sharing ## Training I originally developed <a href="https://github.com/MonsoonNLP/el-la">a gender flip Python script</a> for Spanish sentences, using <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased">BETO</a>, and spaCy. More about this project: https://medium.com/ai-in-plain-english/gender-bias-in-spanish-bert-1f4d76780617 The Arabic model encoder and decoder started with weights and vocabulary from <a href="https://github.com/UBC-NLP/marbert">MARBERT from UBC-NLP</a>, and was trained on the <a href="https://camel.abudhabi.nyu.edu/arabic-parallel-gender-corpus/">Arabic Parallel Gender Corpus</a> from NYU Abu Dhabi. The text is first-person sentences from OpenSubtitles, with parallel gender-reinflected sentences generated by Arabic speakers. Training notebook: https://colab.research.google.com/drive/1TuDfnV2gQ-WsDtHkF52jbn699bk6vJZV ## Non-binary gender This model is useful to generate male and female text samples, but falls short of capturing gender diversity in the world and in the Arabic language. This subject is discussed in the bias statement of the <a href="https://www.aclweb.org/anthology/2020.gebnlp-1.12/">Gender Reinflection paper</a>.
moussaKam/frugalscore_small_roberta_bert-score
cdcd17634ab1dbad30cedc0da5302c0e91a33d64
2022-02-01T10:51:08.000Z
[ "pytorch", "bert", "text-classification", "arxiv:2110.08559", "transformers" ]
text-classification
false
moussaKam
null
moussaKam/frugalscore_small_roberta_bert-score
3
null
transformers
21,587
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
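The card lists the available checkpoints but no loading code. Below is a minimal sketch, assuming each checkpoint is a single-output regression head that scores (reference, candidate) sentence pairs as described in the paper and project repository; the example sentences are illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score candidate sentences against references with FrugalScore.
name = "moussaKam/frugalscore_small_roberta_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]
batch = tokenizer(references, candidates, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    scores = model(**batch).logits.squeeze(-1)
print(scores.tolist())
```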
moussaKam/frugalscore_tiny_roberta_bert-score
eda524feeb885d75d9bb5c948ac224de0ac97bf2
2022-02-01T10:50:57.000Z
[ "pytorch", "bert", "text-classification", "arxiv:2110.08559", "transformers" ]
text-classification
false
moussaKam
null
moussaKam/frugalscore_tiny_roberta_bert-score
3
null
transformers
21,588
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
mpariente/ConvTasNet_Libri3Mix_sepnoisy
5f01eda366d1a6c914cf89a8b7d7c03e3c40b267
2021-09-23T16:12:18.000Z
[ "pytorch", "dataset:LibriMix", "dataset:sep_noisy", "asteroid", "audio", "ConvTasNet", "audio-to-audio", "license:cc-by-sa-4.0" ]
audio-to-audio
false
mpariente
null
mpariente/ConvTasNet_Libri3Mix_sepnoisy
3
null
asteroid
21,589
--- tags: - asteroid - audio - ConvTasNet - audio-to-audio datasets: - LibriMix - sep_noisy license: cc-by-sa-4.0 --- ## Asteroid model Imported from this Zenodo [model page](https://zenodo.org/record/4020529). ## Description: This model was trained by Takhir Mirzaev using the Librimix/ConvTasNet recipe in Asteroid. It was trained on the `sep_noisy` task of the Libri3Mix dataset. ## Training config: ```yaml data: n_src: 3 sample_rate: 8000 segment: 3 task: sep_noisy train_dir: data/wav8k/min/train-360 valid_dir: data/wav8k/min/dev filterbank: kernel_size: 16 n_filters: 512 stride: 8 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 4 early_stop: True epochs: 200 half_lr: True num_workers: 4 ``` ## Results: ```yaml si_sdr: 6.824750632456865 si_sdr_imp: 11.234803761803752 sdr: 7.715799858488098 sdr_imp: 11.778681386239114 sir: 16.442141130818637 sir_imp: 19.527535070051055 sar: 8.757864265661263 sar_imp: -0.15657258049670303 stoi: 0.7854554136619554 stoi_imp: 0.22267957718163015 ``` ## License notice: This work "ConvTasNet_Libri3Mix_sepnoisy" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by [Vassil Panayotov](https://github.com/vdp), used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepnoisy" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Manuel Pariente.
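## Usage sketch The card shows the training config and results but no inference snippet. This is a minimal sketch assuming the usual Asteroid Hub loading path; the mixture below is random noise standing in for a real 8 kHz three-speaker mixture:
```python
import torch
from asteroid.models import ConvTasNet

# Minimal sketch: load the separator from the Hub and run it on a waveform.
model = ConvTasNet.from_pretrained("mpariente/ConvTasNet_Libri3Mix_sepnoisy")

mixture = torch.randn(1, 4 * 8000)  # (batch, time) at 8 kHz
with torch.no_grad():
    est_sources = model(mixture)     # expected shape: (batch, n_src=3, time)
print(est_sources.shape)
```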
mrm8488/GuaPeTe-2-tiny-finetuned-spa-constitution
8152fcfda340afda01f50af3af17e6f58f34b685
2021-05-23T10:17:12.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
mrm8488
null
mrm8488/GuaPeTe-2-tiny-finetuned-spa-constitution
3
null
transformers
21,590
Entry not found
mrm8488/RuPERTa-base-finetuned-pos
e277b7bd313bb0b98206c08d93d22ecebe698771
2021-05-20T18:08:34.000Z
[ "pytorch", "jax", "roberta", "token-classification", "es", "transformers", "autotrain_compatible" ]
token-classification
false
mrm8488
null
mrm8488/RuPERTa-base-finetuned-pos
3
null
transformers
21,591
--- language: es thumbnail: --- # RuPERTa-base (Spanish RoBERTa) + POS 🎃🏷 This model is a fine-tuned version of [RuPERTa-base](https://huggingface.co/mrm8488/RuPERTa-base) on [CONLL CORPORA](https://www.kaggle.com/nltkdata/conll-corpora) for the **POS** downstream task. ## Details of the downstream task (POS) - Dataset - [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora) 📚 | Dataset | # Examples | | ---------------------- | ----- | | Train | 445 K | | Dev | 55 K | - [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py) - Labels covered: ``` ADJ ADP ADV AUX CCONJ DET INTJ NOUN NUM PART PRON PROPN PUNCT SCONJ SYM VERB ``` ## Metrics on evaluation set 🧾 | Metric | # score | | :------------------------------------------------------------------------------------: | :-------: | | F1 | **97.39** | | Precision | **97.47** | | Recall | **97.32** | ## Model in action 🔨 Example of usage ```python import torch from transformers import AutoModelForTokenClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos') model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos') id2label = { "0": "O", "1": "ADJ", "2": "ADP", "3": "ADV", "4": "AUX", "5": "CCONJ", "6": "DET", "7": "INTJ", "8": "NOUN", "9": "NUM", "10": "PART", "11": "PRON", "12": "PROPN", "13": "PUNCT", "14": "SCONJ", "15": "SYM", "16": "VERB" } text = "Mis amigos están pensando viajar a Londres este verano." input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) outputs = model(input_ids) last_hidden_states = outputs[0] for m in last_hidden_states: for index, n in enumerate(m): if(index > 0 and index <= len(text.split(" "))): print(text.split(" ")[index-1] + ": " + id2label[str(torch.argmax(n).item())]) ''' Output: -------- Mis: NUM amigos: PRON están: AUX pensando: ADV viajar: VERB a: ADP Londres: PROPN este: DET verano..: NOUN ''' ``` Yeah! Not too bad 🎉 > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/RuPERTa-base-finetuned-squadv1
c069da3d7312d4b2134a0f9c6a909eadcf15544e
2021-05-20T18:13:28.000Z
[ "pytorch", "jax", "roberta", "question-answering", "es", "dataset:squad", "transformers", "autotrain_compatible" ]
question-answering
false
mrm8488
null
mrm8488/RuPERTa-base-finetuned-squadv1
3
null
transformers
21,592
--- language: es datasets: - squad ---
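The card only carries front matter; a minimal usage sketch, assuming the checkpoint works with the standard question-answering pipeline (the question/context pair is illustrative):
```python
from transformers import pipeline

# Minimal sketch: extractive QA in Spanish with the SQuAD-finetuned model.
qa = pipeline("question-answering", model="mrm8488/RuPERTa-base-finetuned-squadv1")

print(qa(question="¿Dónde vive Manuel?", context="Manuel vive en Murcia, España."))
```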
mrm8488/byt5-small-tweet-hate-detection
5b9b7a5a2e8b804fc41f8914c32e31006e9f8ab0
2021-06-02T18:43:48.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/byt5-small-tweet-hate-detection
3
null
transformers
21,593
Entry not found
mrm8488/electra-base-finetuned-squadv2
2b49f30b89e9ff36ddddd4da4ffb91724c177b5c
2020-06-27T16:29:36.000Z
[ "pytorch", "electra", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
mrm8488
null
mrm8488/electra-base-finetuned-squadv2
3
null
transformers
21,594
Entry not found
mrm8488/electricidad-base-finetuned-muchocine
78a8b6ab7b1b64c29ff09b8e78c644463eafe42c
2021-01-06T19:23:20.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:muchocine", "transformers", "sentiment", "analysis", "spanish" ]
text-classification
false
mrm8488
null
mrm8488/electricidad-base-finetuned-muchocine
3
null
transformers
21,595
--- language: es datasets: - muchocine widget: - text: "Una buena película, sin más." tags: - sentiment - analysis - spanish --- # Electricidad-base fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎 [Electricidad](https://huggingface.co/mrm8488/electricidad-base-discriminator) base fine-tuned on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for the Spanish **Sentiment Analysis** downstream task. ## Fast usage with `pipelines` 🚀 ```python # pip install -q transformers from transformers import AutoModelForSequenceClassification, AutoTokenizer CHKPT = 'mrm8488/electricidad-base-finetuned-muchocine' model = AutoModelForSequenceClassification.from_pretrained(CHKPT) tokenizer = AutoTokenizer.from_pretrained(CHKPT) from transformers import pipeline classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) # It ranks your comments between 1 and 5 (stars) classifier('Es una obra mestra. Brillante.') # [{'label': '5', 'score': 0.9498381614685059}] classifier('Es una película muy buena.') # [{'label': '4', 'score': 0.9277070760726929}] classifier('Una buena película, sin más.') # [{'label': '3', 'score': 0.9768431782722473}] classifier('Esperaba mucho más.') # [{'label': '2', 'score': 0.7063605189323425}] classifier('He tirado el dinero. Una basura. Vergonzoso.') # [{'label': '1', 'score': 0.8494752049446106}] ```
mrm8488/electricidad-small-finetuned-xnli-es
b6cc30c15e40df8bc9909b23ec850203e95fc3cf
2021-04-29T18:34:29.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:xnli", "transformers", "spanish", "nli", "xnli", "license:mit" ]
text-classification
false
mrm8488
null
mrm8488/electricidad-small-finetuned-xnli-es
3
1
transformers
21,596
--- language: es tags: - spanish - nli - xnli datasets: - xnli license: mit widget: - text: "Por favor, no piensen en darnos dinero. Por favor, considere piadosamente cuanto puede dar." --- # electricidad-small-finetuned-xnli-es
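The card stops at the title; a minimal sketch for the NLI checkpoint, assuming a standard three-way XNLI classification head (class order follows the `id2label` mapping stored in the config; the premise reuses the widget text and the hypothesis is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score a premise/hypothesis pair with the XNLI-finetuned model.
name = "mrm8488/electricidad-small-finetuned-xnli-es"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "Por favor, no piensen en darnos dinero."
hypothesis = "Nos piden que no donemos dinero."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # class order depends on the config's id2label
```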
mrm8488/es-tinybert-v1-1
b8044484429a5a225488db9366559f4ebb62f62f
2021-05-20T00:47:22.000Z
[ "pytorch", "jax", "bert", "transformers" ]
null
false
mrm8488
null
mrm8488/es-tinybert-v1-1
3
null
transformers
21,597
Entry not found
mrm8488/t5-base-finetuned-math-seq-next-term
641ac417011bae89c9d41fd179eb18aa12150d2c
2021-06-23T12:51:37.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-math-seq-next-term
3
null
transformers
21,598
Entry not found
mrm8488/t5-base-finetuned-multinews-512
9c32b54925d13290b1a535d899668f6e8fe1521b
2020-08-31T14:09:11.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-multinews-512
3
null
transformers
21,599
Entry not found