modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k
JorisCos
2021-09-23T15:49:01Z
10
1
asteroid
[ "asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri2Mix", "dataset:sep_noisy", "license:cc-by-sa-4.0", "region:us" ]
audio-to-audio
2022-03-02T23:29:04Z
--- tags: - asteroid - audio - ConvTasNet - audio-to-audio datasets: - Libri2Mix - sep_noisy license: cc-by-sa-4.0 --- ## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k` Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4) Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_noisy` task of the Libri2Mix dataset. Training config: ```yml data: n_src: 2 sample_rate: 8000 segment: 3 task: sep_noisy train_dir: data/wav8k/min/train-360 valid_dir: data/wav8k/min/dev filterbank: kernel_size: 16 n_filters: 512 stride: 8 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 training: batch_size: 24 early_stop: True epochs: 200 half_lr: True num_workers: 4 ``` Results: On the Libri2Mix min test set: ```yml si_sdr: 9.944424856077259 si_sdr_imp: 11.939395359731192 sdr: 10.701526190782072 sdr_imp: 12.481757547845662 sir: 22.633644975545575 sir_imp: 22.45666740833025 sar: 11.131644100944868 sar_imp: 4.248489589311784 stoi: 0.852048619949357 stoi_imp: 0.2071994899565506 ``` License notice: This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only). "ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino.
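The card above documents the training configuration and results but no loading snippet; the following is a minimal usage sketch, assuming the `asteroid` package is installed and that its `BaseModel.from_pretrained` Hub integration applies to this checkpoint (the local file `mixture.wav` is a placeholder).

```python
# Minimal sketch (not from the card): load the pretrained separator and run it on a
# local 8 kHz mixture file. "mixture.wav" is a placeholder path.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k")

# Writes the estimated sources next to the input file (e.g. mixture_est1.wav, mixture_est2.wav).
model.separate("mixture.wav")
```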
JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k
JorisCos
2021-09-23T15:48:51Z
89
3
asteroid
[ "asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:Libri1Mix", "dataset:enh_single", "license:cc-by-sa-4.0", "region:us" ]
audio-to-audio
2022-03-02T23:29:04Z
--- tags: - asteroid - audio - ConvTasNet - audio-to-audio datasets: - Libri1Mix - enh_single license: cc-by-sa-4.0 --- ## Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k` Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset. Training config: ```yml data: n_src: 1 sample_rate: 16000 segment: 3 task: enh_single train_dir: data/wav16k/min/train-360 valid_dir: data/wav16k/min/dev filterbank: kernel_size: 32 n_filters: 512 stride: 16 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 1 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 training: batch_size: 6 early_stop: true epochs: 200 half_lr: true num_workers: 4 ``` Results: On the Libri1Mix min test set: ```yml si_sdr: 14.743051006476085 si_sdr_imp: 11.293269700616385 sdr: 15.300522933671061 sdr_imp: 11.797860134458015 sir: Infinity sir_imp: NaN sar: 15.300522933671061 sar_imp: 11.797860134458015 stoi: 0.9310514162434267 stoi_imp: 0.13513159270288563 ``` License notice: This work "ConvTasNet_Libri1Mix_enhsingle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only). "ConvTasNet_Libri1Mix_enhsingle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino.
hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2
hiroshi-matsuda-rit
2021-09-23T14:49:50Z
12
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia --- # BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831) This pretrained model is almost the same as [cl-tohoku/bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2) but does not need `fugashi` or `unidic_lite`. The only difference is the `word_tokenizer_type` property in `tokenizer_config.json` (`basic` is specified instead of `mecab`).
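Since the card explains only the tokenizer configuration change, a hedged usage sketch follows; it assumes the standard `transformers` fill-mask pipeline works with this checkpoint without any MeCab-based dependencies.

```python
# Illustrative sketch (not part of the original card): because word_tokenizer_type is
# "basic", no fugashi/unidic_lite installation should be needed to tokenize input text.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2",
)
for prediction in fill_mask("東北大学で[MASK]の研究をしています。")[:3]:
    print(prediction["token_str"], round(prediction["score"], 4))
```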
groadabike/ConvTasNet_DAMP-VSEP_enhboth
groadabike
2021-09-23T13:57:35Z
4
0
asteroid
[ "asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "dataset:DAMP-VSEP", "license:cc-by-sa-4.0", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- tags: - asteroid - audio - ConvTasNet - audio-to-audio datasets: - DAMP-VSEP license: cc-by-sa-4.0 --- ## Asteroid model `groadabike/ConvTasNet_DAMP-VSEP_enhboth` Imported from [Zenodo](https://zenodo.org/record/3994193) ### Description: This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset. ### Training config: ```yaml data: channels: 1 n_src: 2 root_path: data sample_rate: 16000 samples_per_track: 10 segment: 3.0 task: enh_both filterbank: kernel_size: 20 n_filters: 256 stride: 10 main_args: exp_dir: exp/train_convtasnet help: None masknet: bn_chan: 256 conv_kernel_size: 3 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 4 n_src: 2 norm_type: gLN skip_chan: 256 optim: lr: 0.0003 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 12 early_stop: True epochs: 50 half_lr: True num_workers: 12 ``` ### Results: ```yaml si_sdr: 14.018196157142519 si_sdr_imp: 14.017103133809577 sdr: 14.498517291333885 sdr_imp: 14.463389151567865 sir: 24.149634529133372 sir_imp: 24.11450638936735 sar: 15.338597389045935 sar_imp: -137.30634122401517 stoi: 0.7639416744417206 stoi_imp: 0.1843383526963759 ``` ### License notice: This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
DDSC/roberta-base-scandinavian
DDSC
2021-09-23T13:54:15Z
71
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "scandinavian", "da", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: da license: cc-by-4.0 tags: - scandinavian - roberta pipeline_tag: fill-mask widget: - text: På biblioteket kan du låne en <mask>. --- # Scandinavian Roberta Base - MC4 ## Description This is a sample reference model for Flax/JAX training that uses only the mC4 dataset. It was trained for roughly three days on a TPU v3-8.
flax-community/nordic-roberta-wiki
flax-community
2021-09-23T13:53:50Z
8
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "feature-extraction", "swedish", "fill-mask", "sv", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: sv license: cc-by-4.0 tags: - swedish - roberta pipeline_tag: fill-mask widget: - text: Meninged med livet är <mask>. --- # Nordic Roberta Wikipedia ## Description Nordic RoBERTa model trained on the Swedish, Danish and Norwegian Wikipedia. ## Evaluation Evaluation on named entity recognition in Danish. I fine-tuned each model for 3 epochs on DaNE, repeated it 5 times for each model, and calculated 95% confidence intervals for the means. Here are the results: xlm-roberta-base : 88.01 +- 0.43 flax-community/nordic-roberta-wiki: 85.75 +- 0.69 (this model) Maltehb/danish-bert-botxo: 85.38 +- 0.55 flax-community/roberta-base-danish: 80.14 +- 1.47 flax-community/roberta-base-scandinavian : 78.03 +- 3.02 Maltehb/-l-ctra-danish-electra-small-cased: 57.87 +- 3.19 NbAiLab/nb-bert-base : 30.24 +- 1.21 Randomly initialised RoBERTa model: 19.79 +- 2.00 Evaluation on sentiment analysis in Danish. Here are the results on the test set, where each model has been trained 5 times, and the “+-” refers to a 95% confidence interval of the mean score: Maltehb/danish-bert-botxo: 65.19 +- 0.53 NbAiLab/nb-bert-base : 63.80 +- 0.77 xlm-roberta-base : 63.55 +- 1.59 flax-community/nordic-roberta-wiki : 56.46 +- 1.77 flax-community/roberta-base-danish : 54.73 +- 8.96 flax-community/roberta-base-scandinavian : 44.28 +- 9.21 Maltehb/-l-ctra-danish-electra-small-cased : 47.78 +- 12.65 Randomly initialised RoBERTa model: 36.96 +- 1.02 Maltehb/roberta-base-scandinavian : 33.65 +- 8.32 ## Model series This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX challenge. ## Gpt models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki ## Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish
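The card gives no usage snippet; below is a minimal fill-mask sketch under the assumption that the standard `transformers` pipeline applies to this checkpoint (the example sentence mirrors the widget text).

```python
# Hedged example, not from the original card: RoBERTa checkpoints use <mask> as the mask token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/nordic-roberta-wiki")
for prediction in fill_mask("Meningen med livet är <mask>.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 4))
```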
csae8092/de_MRP_NER
csae8092
2021-09-23T13:46:35Z
8
0
spacy
[ "spacy", "token-classification", "de", "license:cc-by-4.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - de license: cc-by-4.0 model-index: - name: de_MRP_NER results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.905255631 - name: NER Recall type: recall value: 0.8568527919 - name: NER F Score type: f_score value: 0.8803894298 --- NER Model for 'Ministerratsprotokolle' | Feature | Description | | --- | --- | | **Name** | `de_MRP_NER` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | `cc-by` | | **Author** | [Peter Andorfer]() | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `GPE`, `LOC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 88.04 | | `ENTS_P` | 90.53 | | `ENTS_R` | 85.69 | | `TOK2VEC_LOSS` | 40077.56 | | `NER_LOSS` | 77727.57 |
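The table above describes the pipeline components but not how to call them; the sketch below assumes the `de_MRP_NER` package has been installed from this repository (for example via the wheel it ships) so that `spacy.load` can resolve it. The example sentence is invented for illustration.

```python
# Hypothetical usage sketch: requires the de_MRP_NER pipeline package to be installed locally.
import spacy

nlp = spacy.load("de_MRP_NER")
doc = nlp("Der Ministerrat tagte in Wien unter dem Vorsitz von Graf Taaffe.")
for ent in doc.ents:
    # Entity labels follow the scheme above: GPE, LOC, ORG, PER.
    print(ent.text, ent.label_)
```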
colorfulscoop/bert-base-ja
colorfulscoop
2021-09-23T13:46:05Z
18
1
transformers
[ "transformers", "pytorch", "tf", "bert", "pretraining", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ja datasets: wikipedia pipeline_tag: fill-mask widget: - text: 得意な科目は[MASK]です。 license: cc-by-sa-4.0 --- # BERT base Japanese model This repository contains a BERT base model trained on the Japanese Wikipedia dataset. ## Training data The [Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of June 20, 2021, which is released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for training. The dataset is split into three subsets - train, valid and test. Both the tokenizer and the model are trained with the train split. ## Model description The model architecture is the same as the BERT base model (hidden_size: 768, num_hidden_layers: 12, num_attention_heads: 12, max_position_embeddings: 512) except for the vocabulary size. The vocabulary size is set to 32,000 instead of the original size of 30,522. For the model, `transformers.BertForPreTraining` is used. ## Tokenizer description A [SentencePiece](https://github.com/google/sentencepiece) tokenizer is used as the tokenizer for this model. During training, the tokenizer model was trained with 1,000,000 samples which were extracted from the train split. The vocabulary size is set to 32,000. The `add_dummy_prefix` option is set to `True` because words are not separated by whitespace in Japanese. After training, the model is imported into `transformers.DebertaV2Tokenizer` because it supports SentencePiece models and its behavior is consistent whether the `use_fast` option is set to `True` or `False`. **Note:** The meaning of "consistent" here is as follows. For example, ALBERT provides AlbertTokenizer and AlbertTokenizerFast. The fast tokenizer is used by default. However, the two tokenize differently, and the behavior this model expects is that of the non-fast version. Although passing `use_fast=False` to AutoTokenizer or pipeline forces the non-fast version and solves this problem, this option cannot be set in config.json or the model card, so unexpected behavior occurs when using the Inference API. To avoid this kind of problem, `transformers.DebertaV2Tokenizer` is used in this model. ## Training Training details are as follows. * gradient update is every 256 samples (batch size: 8, accumulate_grad_batches: 32) * gradient clip norm is 1.0 * learning rate starts from 0 and is linearly increased to 0.0001 in the first 10,000 steps * The training set contains around 20M samples. Because 80k * 256 ~ 20M, one epoch has around 80k steps. Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti. The training continued until the validation loss got worse. In total, the number of training steps was around 214k. The test set loss was 2.80. Training code is available in [a GitHub repository](https://github.com/colorfulscoop/bert-ja). ## Usage First, install dependencies. ```sh $ pip install torch==1.8.0 transformers==4.8.2 sentencepiece==0.1.95 ``` Then use `transformers.pipeline` to try the fill-mask task.
```python >>> import transformers >>> pipeline = transformers.pipeline("fill-mask", "colorfulscoop/bert-base-ja", revision="v1.0") >>> pipeline("専門として[MASK]を専攻しています") [{'sequence': '専門として工学を専攻しています', 'score': 0.03630176931619644, 'token': 3988, 'token_str': '工学'}, {'sequence': '専門として政治学を専攻しています', 'score': 0.03547220677137375, 'token': 22307, 'token_str': '政治学'}, {'sequence': '専門として教育を専攻しています', 'score': 0.03162326663732529, 'token': 414, 'token_str': '教育'}, {'sequence': '専門として経済学を専攻しています', 'score': 0.026036914438009262, 'token': 6814, 'token_str': '経済学'}, {'sequence': '専門として法学を専攻しています', 'score': 0.02561848610639572, 'token': 10810, 'token_str': '法学'}] ``` Note: specifying a `revision` option is recommended to keep reproducibility when downloading a model via `transformers.pipeline` or `transformers.AutoModel.from_pretrained`. ## License Copyright (c) 2021 Colorful Scoop All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). **Disclaimer:** The model may generate texts that are similar to the training data, that are untrue, or that are biased. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output. --- This model utilizes the following data as training data * **Name:** ウィキペディア (Wikipedia): フリー百科事典 * **Credit:** https://ja.wikipedia.org/ * **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) * **Link:** https://ja.wikipedia.org/
tohoku-nlp/bert-large-japanese
tohoku-nlp
2021-09-23T13:45:41Z
1,246
9
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia widget: - text: 東北大学で[MASK]の研究をしています。 --- # BERT large Japanese (unidic-lite with whole word masking, jawiki-20200831) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization. Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective. The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0). ## Model architecture The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads. ## Training Data The models are trained on the Japanese version of Wikipedia. The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020. The generated corpus files are 4.0GB in total, containing approximately 30M sentences. We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences. ## Tokenization The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768. We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization. ## Training The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once. For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/). The training took about 5 days to finish. ## Licenses The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). ## Acknowledgments This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
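The card describes the tokenization and MLM objective but omits an inference example; the following is a rough sketch assuming the usual `transformers` masked-LM API and that `fugashi` and `unidic-lite` are installed for the MeCab-based tokenizer.

```python
# Hedged sketch (not from the card): score the [MASK] position and print the top candidates.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "tohoku-nlp/bert-large-japanese"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

inputs = tokenizer("東北大学で[MASK]の研究をしています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the 5 highest-scoring vocabulary entries.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```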
tohoku-nlp/bert-base-japanese-char-v2
tohoku-nlp
2021-09-23T13:45:24Z
136,559
6
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia widget: - text: 東北大学で[MASK]の研究をしています。 --- # BERT base Japanese (character-level tokenization with whole word masking, jawiki-20200831) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization. Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective. The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0). ## Model architecture The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads. ## Training Data The models are trained on the Japanese version of Wikipedia. The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020. The generated corpus files are 4.0GB in total, containing approximately 30M sentences. We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences. ## Tokenization The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters. The vocabulary size is 6144. We used [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization. ## Training The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once. For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/). The training took about 5 days to finish. ## Licenses The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). ## Acknowledgments This model is trained with Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
byeongal/Ko-DialoGPT
byeongal
2021-09-23T13:43:34Z
83
8
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: ko tags: - gpt2 - conversational license: cc-by-nc-sa-4.0 --- ## Ko-DialoGPT ### How to use ```python from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PreTrainedTokenizerFast.from_pretrained('byeongal/Ko-DialoGPT') model = GPT2LMHeadModel.from_pretrained('byeongal/Ko-DialoGPT').to(device) past_user_inputs = [] generated_responses = [] while True: user_input = input(">> User:") if user_input == 'bye': break text_idx = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt') for i in range(len(generated_responses)-1, len(generated_responses)-3, -1): if i < 0: break encoded_vector = tokenizer.encode(generated_responses[i] + tokenizer.eos_token, return_tensors='pt') if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000: text_idx = torch.cat([encoded_vector, text_idx], dim=-1) else: break encoded_vector = tokenizer.encode(past_user_inputs[i] + tokenizer.eos_token, return_tensors='pt') if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000: text_idx = torch.cat([encoded_vector, text_idx], dim=-1) else: break text_idx = text_idx.to(device) inference_output = model.generate( text_idx, max_length=1000, num_beams=5, top_k=20, no_repeat_ngram_size=4, length_penalty=0.65, repetition_penalty=2.0, ) inference_output = inference_output.tolist() bot_response = tokenizer.decode(inference_output[0][text_idx.shape[-1]:], skip_special_tokens=True) print(f"Bot: {bot_response}") past_user_inputs.append(user_input) generated_responses.append(bot_response) ``` ### Reference * [SKT-KoGPT2](https://huggingface.co/skt/kogpt2-base-v2) * [KETI R&D 데이터](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-008) * [한국어 대화 요약](https://aihub.or.kr/aidata/30714)
hiiamsid/BETO_es_binary_classification
hiiamsid
2021-09-23T11:16:37Z
7
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "es", "ticket classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - es tags: - es - ticket classification license: "apache-2.0" datasets: - self made to classify whether text is related to technology or not. metrics: - fscore - accuracy - precision - recall --- # BETO(cased) This model was built using PyTorch. ## Model description Input for the model: any Spanish text Output for the model: sentiment (0 - Negative, 1 - Positive, i.e. technology related) #### How to use Here is how to use this model to get the features of a given text in *PyTorch*: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification") model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training procedure I fine-tuned [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the dataset.
vishalz/paraphrase_model
vishalz
2021-09-23T10:00:25Z
3
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
Pegasus paraphraser model using <a href="https://huggingface.co/tuner007/pegasus_paraphrase" target="_blank">tuner007/pegasus_paraphrase</a>.
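Since the card consists of a single sentence, a hedged usage sketch is added here; it mirrors the typical `tuner007/pegasus_paraphrase` recipe and assumes this checkpoint exposes the same Pegasus architecture and tokenizer (generation settings and the input sentence are illustrative only).

```python
# Illustrative sketch, not from the card: paraphrase one sentence with beam search.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

name = "vishalz/paraphrase_model"
tokenizer = PegasusTokenizer.from_pretrained(name)
model = PegasusForConditionalGeneration.from_pretrained(name)

text = "The ultimate test of your knowledge is your capacity to convey it to another."
batch = tokenizer([text], truncation=True, padding="longest", max_length=60, return_tensors="pt")
outputs = model.generate(**batch, max_length=60, num_beams=5, num_return_sequences=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```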
gchhablani/fnet-large-finetuned-wnli
gchhablani
2021-09-23T05:39:44Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: fnet-large-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.38028169014084506 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-wnli This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6953 - Accuracy: 0.3803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 | | 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 | | 0.706 | 3.0 | 477 | 0.6875 | 0.5634 | | 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 | | 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
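The hyperparameter list above can be expressed as a `transformers` `TrainingArguments` object; the sketch below is a rough reconstruction for illustration only, since the card does not include the actual fine-tuning script, and the output directory is an assumption.

```python
# Hedged reconstruction of the listed hyperparameters; output_dir and anything not listed
# in the card (e.g. the Adam betas/epsilon, which match library defaults) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fnet-large-finetuned-wnli",
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```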
gchhablani/bert-large-cased-finetuned-wnli
gchhablani
2021-09-23T05:10:44Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-large-cased-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.352112676056338 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-finetuned-wnli This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7087 - Accuracy: 0.3521 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.7114 | 1.0 | 159 | 0.5634 | 0.6923 | | 0.7141 | 2.0 | 318 | 0.5634 | 0.6895 | | 0.7063 | 3.0 | 477 | 0.5634 | 0.6930 | | 0.712 | 4.0 | 636 | 0.4507 | 0.7077 | | 0.7037 | 5.0 | 795 | 0.3521 | 0.7087 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
adityavithaldas/distilbert-base-uncased-finetuned-ner
adityavithaldas
2021-09-22T19:33:37Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
LysandreJik/testing
LysandreJik
2021-09-22T19:19:12Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: testing results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.6813725490196079 - name: F1 type: f1 value: 0.8104956268221574 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # testing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6644 - Accuracy: 0.6814 - F1: 0.8105 - Combined Score: 0.7459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 ### Training results ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.11.0 - Tokenizers 0.10.3
nateraw/resnet50-beans-dummy-sagemaker
nateraw
2021-09-22T18:01:58Z
10
0
timm
[ "timm", "pytorch", "tensorboard", "image-classification", "generated_from_trainer", "dataset:beans", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - timm - generated_from_trainer datasets: - beans model-index: - name: model results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans args: default library_tag: timm --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [resnet18](https://huggingface.co/resnet18) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 1.0219 - Acc1: 56.3910 - Acc5: 100.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 20 ### Training results ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.12.1 - Tokenizers 0.10.3
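The card does not show how to load the checkpoint; if the repository stores timm-compatible weights, something like the following sketch may work (the `hf-hub:` prefix support and repo compatibility are assumptions, not confirmed by the card).

```python
# Hypothetical sketch: recent timm releases can pull checkpoints from the Hugging Face Hub
# via an "hf-hub:" model name prefix.
import timm
import torch

model = timm.create_model("hf-hub:nateraw/resnet50-beans-dummy-sagemaker", pretrained=True)
model.eval()

# Dummy forward pass with a random image-sized tensor.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)
```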
patrickvonplaten/wav2vec2-common_voice-ab-demo
patrickvonplaten
2021-09-22T13:57:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "speech-recognition", "common_voice", "generated_from_trainer", "ab", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab license: apache-2.0 tags: - speech-recognition - common_voice - generated_from_trainer model-index: - name: wav2vec2-common_voice-ab-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-ab-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - AB dataset. It achieves the following results on the evaluation set: - Loss: 15.1812 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
Haotian/distilgpt2-finetuned-wikitext2
Haotian
2021-09-22T12:24:29Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: - task: name: Causal Language Modeling type: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7608 | 1.0 | 2334 | 3.6655 | | 3.6335 | 2.0 | 4668 | 3.6455 | | 3.6066 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
eliza-dukim/bert-base-finetuned-sts
eliza-dukim
2021-09-22T11:01:03Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:klue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - klue metrics: - pearsonr - f1 model-index: - name: bert-base-finetuned-sts results: - task: name: Text Classification type: text-classification dataset: name: klue type: klue args: sts metrics: - name: Pearsonr type: pearsonr value: 0.8756147003619346 - name: F1 type: f1 value: 0.8416666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-sts This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4115 - Pearsonr: 0.8756 - F1: 0.8417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 | | 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 | | 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 | | 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.12.1 - Tokenizers 0.10.3
cfisicaro/distilbert-base-uncased-finetuned-ner
cfisicaro
2021-09-22T10:25:03Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9281908990011098 - name: Recall type: recall value: 0.9355632621098557 - name: F1 type: f1 value: 0.9318624993035824 - name: Accuracy type: accuracy value: 0.9837641190207635 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0629 - Precision: 0.9282 - Recall: 0.9356 - F1: 0.9319 - Accuracy: 0.9838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2406 | 1.0 | 878 | 0.0721 | 0.9072 | 0.9172 | 0.9122 | 0.9801 | | 0.0529 | 2.0 | 1756 | 0.0637 | 0.9166 | 0.9318 | 0.9241 | 0.9826 | | 0.0315 | 3.0 | 2634 | 0.0629 | 0.9282 | 0.9356 | 0.9319 | 0.9838 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
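For completeness, a hedged inference sketch using the standard `transformers` token-classification pipeline follows (the example sentence is invented; `aggregation_strategy` groups word pieces into whole entity spans).

```python
# Illustrative usage, not taken from the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cfisicaro/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is a company based in New York City."))
```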
castorini/ance-dpr-context-multi
castorini
2021-09-22T09:41:18Z
110
2
transformers
[ "transformers", "pytorch", "dpr", "arxiv:2007.00808", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
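The card points to Pyserini for end-to-end retrieval; for encoding passages directly in `transformers`, the sketch below assumes the converted checkpoint follows the DPR context-encoder architecture and ships a compatible tokenizer.

```python
# Hedged sketch (not from the card): embed a single passage with the DPR context encoder.
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

name = "castorini/ance-dpr-context-multi"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(name)
model = DPRContextEncoder.from_pretrained(name)

inputs = tokenizer(
    "ANCE selects hard negatives from an approximate nearest neighbor index of the corpus.",
    return_tensors="pt",
)
with torch.no_grad():
    embedding = model(**inputs).pooler_output  # shape: (1, hidden_size)
print(embedding.shape)
```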
ozcangundes/mt5-small-turkish-summarization
ozcangundes
2021-09-22T09:31:27Z
299
19
transformers
[ "transformers", "pytorch", "jax", "mt5", "text2text-generation", "summarization", "tr", "dataset:MLSUM", "arxiv:2004.14900", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: tr datasets: - MLSUM pipeline_tag: summarization license: mit --- # mT5-small based Turkish Summarization System [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [MLSUM Turkish news dataset](https://github.com/recitalAI/MLSUM) for **Summarization** downstream task by using Pytorch Lightning.⚡ mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. The model is trained with 10 epochs, 8 batch size and 10e-4 learning rate. It took almost 4 hours. The max news length is kept as 784 and max summary length is determined as 64. **Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is useable on a downstream task. ## Dataset MLSUM dataset has more than 250K Turkish news with their related summaries. Since the mT5 model size and vocabulary is so large, 20K data is used for training and 4K data is used for validation. For more information about the dataset, please read this [great paper](https://arxiv.org/abs/2004.14900). ## Usage 🚀 ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-summarization") def generate_summary(main_news): source_encoding=tokenizer( main_news, max_length=784, padding="max_length", truncation=True, return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"], num_beams=2, max_length=120, repetition_penalty=2.5, length_penalty=2.0, early_stopping=True, use_cache=True ) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python main_news= "Final etabının üçüncü karşılaşması 29 Nisan Pazartesi günü saat 18.00 ’ de Burhan Felek Voleybol Salonu ’ nda oynanacak . Sezonu FIVB Kulüpler Dünya Şampiyonluğu ile açan ve CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı , Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı VakıfBank Spor Sarayı'nda 16-25 , 25-10 , 25-18 ve 25-17'lik setlerle 3-1 mağlup ederek seride durumu 1-1 ' e getirdi . İlk setini 25-16 kaybettiği karşılaşmanın ikinci setinde etkili servisler kullanan sarı-siyahlılar , teknik molasına 12-5 önde girdiği seti 25-10 almayı başardı . Etkili servis performansını üçüncü sette de sürdüren VakıfBank , teknik molasına 12-5 önde girdiği seti 25-18 alarak , karşılaşmada 2-1 öne geçti . Dördüncü sette rakibinin geri dönüşüne izin vermeyen VakıfBank , seti 25-17 , maçı da 3-1 kazanarak seride durumu eşitledi." generate_summary(main_news) #original summary -> "Vestel Venus Sultanlar Ligi final etabı ikinci karşılaşmasında VakıfBank kendi sahasında Eczacıbaşı VitrA'yı 3-1 mağlup etti ve seride durumu 1-1 ' e getirdi ." #output -> "CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı, Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı 3-1 mağlup ederek seride durumu 1-1'e getirdi." 
``` ### Example 2 ```python main_news="2023'te yerli tank motoru : Bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını ifade eden Öztürk , şu değerlendirmelerde bulundu : `` Bin 500 beygirlik , şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz . Bu da bir aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız . Bundan sonra hiçbir ülkeye bağımlılığımız kalmadan bu araçları üretmeye devam edeceğiz . Sorumluluğumuzun ağır olduğunu biliyoruz . Ülkemize hizmet etmeye çalışıyoruz . Bunu daha da ileriye götürmek için elimizden gelen çabayı sarf ediyoruz . Ama bu tek başınıza yapılan bir operasyon değil . Türkiye'deki yerli firmalarla beraber ortaklaşa bu işi yürütmeye çalışıyoruz." generate_summary(main_news) #output -> "TÜRKİYE'de bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını belirten Öztürk, `` Bin 500 beygirlik, şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz. Bu da bir aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız.'' dedi." ``` Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
ozcangundes/mt5-small-turkish-squad
ozcangundes
2021-09-22T09:31:24Z
33
3
transformers
[ "transformers", "pytorch", "jax", "mt5", "text2text-generation", "question-answering", "tr", "dataset:TQUAD", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: tr datasets: - TQUAD pipeline_tag: question-answering license: mit --- # mT5-small based Turkish Question Answering System [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [Turkish Question Answering dataset](https://github.com/TQuad/turkish-nlp-qa-dataset) for **Q&A** downstream task by using Pytorch Lightning.⚡ The notebook that includes all fine tuning process will be shared on my Github page later. mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. **Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is useable on a downstream task. ## Usage 🚀 ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-squad") model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-squad") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"], max_length=120) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python question={ "context":"Pardus, Google'ın öğrencilerle staj ve kendini geliştirme imkânı ile \ tasarılara geliştirici ve katkı sağlamayı amaçladığı açık kaynak tasarısı \ Google Summer of Code'a 2008 ve 2009 olmak üzere iki kere katılmıştır. Bu organizasyona \ ilk katılan Türk tasarısı Pardus olmuştur. Bazı dönemlerde Pardus hakkındaki gelişmeleri \ halka duyurmak ve tasarıya olan ilgiyi arttırmak amacıyla CeBIT Eurasia Bilişim Fuarı'na \ katılım sağlanmaktadır. 2006, 2008, 2009, 2010, 2011,2013 ve 2014 bu fuarlarda Pardus \ standı kurulmuştur.2014 yılında ICT SummitT Now Bilişim Zirvesi'nde yer alınmıştır. \ BİLİŞİM’2014 TBD 31. Ulusal Bilişim Kurultayı ve CITEX’2014 Ankara Bilişim Fuarı’na \ Gümüş sponsorluk ile katkıda bulunulmuş ve Pardus standı kurulmuştur.", "question":"Pardus’un Google Summer of Code'a katıldığı yıllar nelerdir?" } get_answer(question["question"],question["context"]) ``` > 2008 ve 2009 ### Example 2 ```python question2={ "context":"II. Bayezid ve I. Selim devrinde yaşadı ve iki defa hekimbaşılık yaptı. \ Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği \ eseriyle tanınır. Adı kaynaklarda Ahmed ve Mahmud olarak da geçer. Ahi Çelebi \ olarak ün yapmıştır. Babası Tabib Mevlana Kemal ile birlikte 1463’te İstanbul’a yerleşti. \ Mevlana Kemal, devrin ünlü hekimlerindendir. Tebriz ya da Şirvan asıllı olduğu çeşitli \ kaynaklarda belirtilir. Ahi Mehmet Çelebi, hekimliği daha çok babasından öğrendi. Onun \ ölümünden sonra devrin önemli hekimleri Kutbüddin ile Altunîzâde’den ders alıp kısa zamanda \ mesleğini ilerletti. Hekimlik becerisinin yanı sıra kuramsal bilgisiyle de kendisini \ kabul ettirerek önce Fâtih Darüşşifasına hekim, sonra da başhekim oldu. II. Bayezid’in \ güvenini kazanarak mutfak eminliğine, ardından da Hekimbaşılığa getirildi. Dört buçuk \ yıl bu görevde kalan Ahî Çelebi, II. Bayezid’in ölümü üzerine geleneğe uyularak azledildi. 
\ Bir müddet sonra Yavuz onu tekrar Hekimbaşılığa getirdi ve Mısır seferine beraberinde \ götürdü. I. Selim'in ölümünden sonra Hekimbaşılık tan tekrar azledildi. Kaynakların \ belirttiğine göre, yaşı doksanı geçmiş olduğu halde, hacdan dönerken Kahire’de \ ölmüş ve İmam Şafi'nin kabri civarına defnedilmiştir.", "question":"Ahi Mehmet Çelebi hangi eseri ile tanınır?" } get_answer(question2["question"],question2["context"]) ``` > Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği eseriyle Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
ozcangundes/T5-base-for-BioQA
ozcangundes
2021-09-22T09:31:21Z
25
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "question-answering", "dataset:bioASQ", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: english datasets: - bioASQ pipeline_tag: question-answering license: mit --- # T5-base model fine-tuned on BioASQ for Biological Question Answering 👩‍⚕️👨‍⚕️ [Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task. ## Details of T5 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Dependencies transformers == 4.3.3 sentencepiece >= 0.1.94 ## Usage 🚀 ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA") model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"]) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python question={ "context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. 
Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.", "question":"Which protein does empagliflozin inhibit?" } get_answer(question["question"],question["context"]) ``` > SGLT2 ### Example 2 ```python question2={ "context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.", "question":"What is the typical rash associated with gluten?" } get_answer(question2["question"],question2["context"]) ``` > dermatitis herpetiformis Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
nlp4good/psych-search
nlp4good
2021-09-22T09:29:47Z
46
5
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "mental-health", "en", "dataset:PubMed", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - en tags: - mental-health license: apache-2.0 datasets: - PubMed --- # Psych-Search Psych-Search is a work in progress to bring cutting-edge NLP to mental health practitioners. The model detailed here serves as a foundation for traditional classification models as well as NLU models for a Psych-Search application. The goal of the Psych-Search Application is to use a combination of traditional text classification models to expand the scope of the MESH taxonomy with the inclusion of relevant categories for mental health practitioners designing suicide prevention programs for adolescent communities within the United States, as well as the automatic extraction and standardization of entities such as risk factors and protective factors. Our first expansion efforts to the MESH taxonomy include categories: - Prevention Strategies - Protective Factors We are actively looking for partners on this work and would love to hear from you! Please ping us at [email protected]. ## Model description This model is an extension of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased). Continued pretraining was done using SciBERT as the base model using abstract text only from Psychology and Psychiatry PubMed research. Training was done on approximately 3.5 million papers for 10 epochs and evaluated on a task similar to BioASQ Task A. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModel mname = "nlp4good/psych-search" tokenizer = AutoTokenizer.from_pretrained(mname) model = AutoModel.from_pretrained(mname) ``` ### Limitations and bias This model was trained on all PubMed abstracts categorized under [Psychology and Psychiatry](https://meshb.nlm.nih.gov/treeView). As of March 1, this corresponds to approximately 3.2 million papers that contain abstract text. Of these 3.2 million papers, relevant sparse mental health categories were back translated to increase the representation of certain mental health categories. There are several limitations with this dataset, including large discrepancies in the number of papers associated with [Sexual and Gender Minorities](https://meshb.nlm.nih.gov/record/ui?ui=D000072339). The training data consisted of the following breakdown across gender groups: Female | Male | Sexual and Gender Minorities -------|---------|---------- 1,896,301 | 1,945,279 | 4,529 Similar discrepancies are present within [Ethnic Groups](https://meshb.nlm.nih.gov/record/ui?ui=D005006) as defined within the MESH taxonomy: | African Americans | Arabs | Asian Americans | Hispanic Americans | Indians, Central American | Indians, North American | Indians, South American | Indigenous Peoples | Mexican Americans | |-------------------|-------|-----------------|--------------------|---------------------------|-------------------------|-------------------------|--------------------|-------------------| | 31,027 | 2,437 | 5,612 | 18,893 | 124 | 5,657 | 633 | 174 | 3,234 | These discrepancies can have a significant impact on information retrieval systems, downstream machine learning models, and other forms of NLP that leverage these pretrained models. ## Training data This model was trained on all PubMed abstracts categorized under [Psychology and Psychiatry](https://meshb.nlm.nih.gov/treeView). As of March 1, this corresponds to approximately 3.2 million papers that contain abstract text. 
Of these 3.2 million papers, relevant sparse categories were back translated from English to French and back from French to English to increase the representation of sparser mental health categories. This included backtranslating papers with the following categories: - Depressive Disorder - Risk Factors - Mental Disorders - Child, Preschool - Mental Health In aggregate, this process added 557,980 additional papers to our training data. ## Training procedure Continued pretraining was done on Psychology and Psychiatry PubMed papers for 10 epochs. Default parameters were used with the exception of gradient accumulation steps, which was set at 4, with a per-device train batch size of 32. 2 x Nvidia 3090s were used in the development of this model. ## Evaluation results To evaluate the effectiveness of psych-search within the mental health domain, an evaluation task was constructed by finetuning psych-search for a task similar to [BioASQ Task A](http://bioasq.org/). Here we perform large scale biomedical indexing using the MESH taxonomy associated with each paper underneath Psychology and Psychiatry. The evaluation metric is the micro F1 score across all second level descriptors within Psychology and Psychiatry. This corresponds to 38 different MESH categories used during evaluation. bert-base-uncased | SciBERT Scivocab Uncased | Psych-Search -------|---------|---------- 0.7348 | 0.7394 | 0.7415 ## Next Steps If you are interested in continuing to build on this work or have other ideas on how we can build on others' work, please let us know! We can be reached at [email protected]. Our goal is to bring state-of-the-art NLP capabilities to underserved areas of research, with mental health being our top priority.
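Since Psych-Search is a masked-language model, it can also be probed directly with the fill-mask pipeline. The snippet below is a minimal sketch; the probe sentence is made up for illustration and is not taken from the training data.

```python
from transformers import pipeline

# Load the checkpoint as a fill-mask pipeline; [MASK] is the BERT-style mask token
# used by the SciBERT vocabulary this model inherits.
fill_mask = pipeline("fill-mask", model="nlp4good/psych-search")

# Hypothetical probe sentence about protective factors.
print(fill_mask("Family support is an important protective [MASK] for adolescent mental health."))
```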
nateraw/bert-base-uncased-ag-news
nateraw
2021-09-22T09:28:21Z
26
3
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "ag_news", "en", "dataset:ag_news", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - ag_news - pytorch license: mit datasets: - ag_news metrics: - accuracy --- # bert-base-uncased-ag-news ## Model description `bert-base-uncased` finetuned on the AG News dataset using PyTorch Lightning. Sequence length 128, learning rate 2e-5, batch size 32, 4 T4 GPUs, 4 epochs. [The code can be found here](https://github.com/nateraw/hf-text-classification) #### Limitations and bias - Not the best model... ## Training data Data came from HuggingFace's `datasets` package. The data can be viewed [on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=ag_news). ## Training procedure ... ## Eval results ...
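The card above does not include a usage snippet, so here is a minimal inference sketch using the text-classification pipeline. The headline is made up, and depending on the checkpoint's config the labels may surface as LABEL_0 through LABEL_3 rather than the AG News topic names (World, Sports, Business, Sci/Tech).

```python
from transformers import pipeline

# Load the fine-tuned AG News classifier and score a (hypothetical) headline.
classifier = pipeline("text-classification", model="nateraw/bert-base-uncased-ag-news")
print(classifier("Stocks rally as the central bank signals a pause in rate hikes."))
```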
macedonizer/sr-roberta-base
macedonizer
2021-09-22T08:59:00Z
8
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "masked-lm", "sr", "dataset:wiki-sr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - sr thumbnail: https://huggingface.co/macedonizer/sr-roberta-base/lets-talk-about-nlp-sr.jpg tags: - masked-lm license: apache-2.0 datasets: - wiki-sr --- # SR-RoBERTa base model Pretrained model on the Serbian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between скопје and Скопје. # Model description RoBERTa is a transformers model pre-trained on a large corpus of Serbian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Serbian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs. # Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2. # How to use You can use this model directly with a pipeline for masked language modeling: \ from transformers import pipeline \ unmasker = pipeline('fill-mask', model='macedonizer/sr-roberta-base') \ unmasker("Београд је <mask> град Србије.") \ [{'score': 0.7834128141403198, 'sequence': 'Београд је главни град Србије', 'token': 3087, 'token_str': ' главни'}, {'score': 0.15424974262714386, 'sequence': 'Београд је највећи град Србије', 'token': 3916, 'token_str': ' највећи'}, {'score': 0.0035441946238279343, 'sequence': 'Београд је најважнији град Србије', 'token': 18577, 'token_str': ' најважнији'}, {'score': 0.003132033161818981, 'sequence': 'Београд је велики град Србије', 'token': 2063, 'token_str': ' велики'}, {'score': 0.0030417360831052065, 'sequence': 'Београд је важан град Србије', 'token': 9463, 'token_str': ' важан'}] Here is how to use this model to get the features of a given text in PyTorch: from transformers import RobertaTokenizer, RobertaModel \ tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sr-roberta-base') \ model = RobertaModel.from_pretrained('macedonizer/sr-roberta-base') \ text = "Replace me by any text you'd like." \ encoded_input = tokenizer(text, return_tensors='pt') \ output = model(**encoded_input)
macedonizer/sr-gpt2
macedonizer
2021-09-22T08:58:57Z
54
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "sr", "dataset:wiki-sr", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - sr thumbnail: https://huggingface.co/macedonizer/sr-gpt2/desanka-maksimovic.jpeg license: apache-2.0 datasets: - wiki-sr --- # sr-gpt2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on the Serbian language using a causal language modeling (CLM) objective. The GPT-2 architecture was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). ## Model description sr-gpt2 is a transformers model pretrained on a very large corpus of Serbian data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the Serbian language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a prompt. ### How to use Here is how to use this model to generate text from a prompt in PyTorch (see the fenced sketch below for a runnable version): import random \ from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained('macedonizer/sr-gpt2') \ model = AutoModelWithLMHead.from_pretrained('macedonizer/sr-gpt2') input_text = 'Ја сам био ' if len(input_text) == 0: \ encoded_input = tokenizer(input_text, return_tensors="pt") \ output = model.generate( \ bos_token_id=random.randint(1, 50000), \ do_sample=True, \ top_k=50, \ max_length=1024, \ top_p=0.95, \ num_return_sequences=1, \ ) \ else: \ encoded_input = tokenizer(input_text, return_tensors="pt") \ output = model.generate( \ **encoded_input, \ bos_token_id=random.randint(1, 50000), \ do_sample=True, \ top_k=50, \ max_length=1024, \ top_p=0.95, \ num_return_sequences=1, \ ) decoded_output = [] \ for sample in output: \ decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True)) print(decoded_output)
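For convenience, here is a fenced restatement of the snippet above. It is a sketch rather than the author's exact script: it keeps the card's sampling settings, drops the random `bos_token_id` trick (only needed for unprompted generation), and uses `AutoModelForCausalLM`, the current replacement for the deprecated `AutoModelWithLMHead`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("macedonizer/sr-gpt2")
model = AutoModelForCausalLM.from_pretrained("macedonizer/sr-gpt2")

input_text = "Ја сам био "  # Serbian prompt taken from the card
encoded_input = tokenizer(input_text, return_tensors="pt")

# Sample one continuation with the card's top-k / top-p settings.
with torch.no_grad():
    output = model.generate(
        **encoded_input,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        max_length=128,  # shortened from the card's 1024 for a quick test
        num_return_sequences=1,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```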
macedonizer/sl-roberta-base
macedonizer
2021-09-22T08:58:54Z
8
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "masked-lm", "sl", "dataset:wiki-sl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - sl thumbnail: https://huggingface.co/macedonizer/sl-roberta-base/ivan-cankar.jpg tags: - masked-lm license: apache-2.0 datasets: - wiki-sl --- # SL-RoBERTa base model Pretrained model on the Slovenian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between ljubljana and Ljubljana. # Model description RoBERTa is a transformers model pre-trained on a large corpus of Slovenian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Slovenian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by this model as inputs. # Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2. # How to use You can use this model directly with a pipeline for masked language modeling (the example below is taken from the closely related macedonizer/hr-roberta-base model): \ from transformers import pipeline \ unmasker = pipeline('fill-mask', model='macedonizer/hr-roberta-base') \ unmasker("Zagreb je \\<mask\\> grad Hrvatske.") \ [ {'sequence': 'Zagreb je glavni grad Hrvatske.', 'score': 0.8750431537628174, 'token': 2026, 'token_str': ' glavni'}, {'sequence': 'Zagreb je najveći grad Hrvatske.', 'score': 0.060711536556482315, 'token': 2474, 'token_str': ' najveći'}, {'sequence': 'Zagreb je prvi grad Hrvatske.', 'score': 0.005241130944341421, 'token': 780, 'token_str': ' prvi'}, {'sequence': 'Zagreb je jedini grad Hrvatske.', 'score': 0.004663003608584404, 'token': 3280, 'token_str': ' jedini'}, {'sequence': 'Zagreb je treći grad Hrvatske.', 'score': 0.003771631745621562, 'token': 3236, 'token_str': ' treći'} ] \ Here is how to use this model to get the features of a given text in PyTorch: from transformers import RobertaTokenizer, RobertaModel \ tokenizer = RobertaTokenizer.from_pretrained('macedonizer/sl-roberta-base') \ model = RobertaModel.from_pretrained('macedonizer/sl-roberta-base') \ text = "Replace me by any text you'd like." \ encoded_input = tokenizer(text, return_tensors='pt') \ output = model(**encoded_input)
macedonizer/hr-gpt2
macedonizer
2021-09-22T08:58:40Z
6
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "hr", "dataset:wiki-hr", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - hr thumbnail: https://huggingface.co/macedonizer/hr-gpt2/lets-talk-about-nlp-hr.jpg license: apache-2.0 datasets: - wiki-hr --- # hr-gpt2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on the Croatian language using a causal language modeling (CLM) objective. The GPT-2 architecture was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). ## Model description hr-gpt2 is a transformers model pretrained on a very large corpus of Croatian data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the Croatian language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a prompt. ### How to use Here is how to use this model to generate text from a prompt in PyTorch: import random \ from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained('macedonizer/hr-gpt2') \ model = AutoModelWithLMHead.from_pretrained('macedonizer/hr-gpt2') input_text = 'Ja sam bio ' if len(input_text) == 0: \ encoded_input = tokenizer(input_text, return_tensors="pt") \ output = model.generate( \ bos_token_id=random.randint(1, 50000), \ do_sample=True, \ top_k=50, \ max_length=1024, \ top_p=0.95, \ num_return_sequences=1, \ ) \ else: \ encoded_input = tokenizer(input_text, return_tensors="pt") \ output = model.generate( \ **encoded_input, \ bos_token_id=random.randint(1, 50000), \ do_sample=True, \ top_k=50, \ max_length=1024, \ top_p=0.95, \ num_return_sequences=1, \ ) decoded_output = [] \ for sample in output: \ decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True)) print(decoded_output)
macedonizer/blaze-koneski
macedonizer
2021-09-22T08:58:34Z
11
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "mk", "dataset:wiki-mk", "dataset:blaze-koneski-poetry", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - mk thumbnail: https://huggingface.co/macedonizer/blaze-koneski/blaze-koneski.jpg license: apache-2.0 datasets: - wiki-mk - blaze-koneski-poetry --- # blaze-koneski A GPT-2-type model. We finetuned macedonizer/mk-gpt-2 on Blaze Koneski's poetry. ## About Blaze Koneski Born in a village near Prilep in 1921. Studied philology at Skopje University and worked there as a professor. Was the first chairman of the Macedonian Academy of Sciences and Arts, corresponding member of the Yugoslav Academy of Sciences and Arts, as well as of the Serbian and Slovene Academies, and honorary doctor of the Universities of Chicago and Krakow. Wrote poetry, short stories, and essays, as well as scholarly works, many of them on the Macedonian language. Editor of the Dictionary of the Macedonian Language, translator of Heine and Shakespeare. His works have been translated into Serbian, Croatian, Slovene, Albanian, Turkish, Hungarian, French, Russian, Italian, Greek, Polish, Romanian, German, and English. Winner of numerous prizes, including the Golden Wreath of the Struga Poetry Evenings. ### How to use Here is how to use this model to generate text from a prompt in PyTorch: import random from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained('macedonizer/blaze-koneski') model = AutoModelWithLMHead.from_pretrained('macedonizer/blaze-koneski') input_text = 'Москва ' if len(input_text) == 0: \ encoded_input = tokenizer(input_text, return_tensors="pt") \ output = model.generate( \ bos_token_id=random.randint(1, 50000), \ do_sample=True, \ top_k=50, \ max_length=1024, \ top_p=0.95, \ num_return_sequences=1, \ ) \ else: \ encoded_input = tokenizer(input_text, return_tensors="pt") \ output = model.generate( \ **encoded_input, \ bos_token_id=random.randint(1, 50000), \ do_sample=True, \ top_k=50, \ max_length=1024, \ top_p=0.95, \ num_return_sequences=1, \ ) decoded_output = [] \ for sample in output: \ decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True)) print(decoded_output)
liaad/srl-pt_mbert-base
liaad
2021-09-22T08:56:31Z
6
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "bert-base-multilingual-cased", "semantic role labeling", "finetuned", "multilingual", "pt", "dataset:PropBank.Br", "arxiv:2101.01213", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - multilingual - pt tags: - bert-base-multilingual-cased - semantic role labeling - finetuned license: apache-2.0 datasets: - PropBank.Br metrics: - F1 Measure --- # mBERT fine-tuned on Portuguese semantic role labeling ## Model description This model is the [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on Portuguese semantic role labeling data. This is part of a project from which resulted the following models: * [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base) * [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large) * [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base) * [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large) * [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base) * [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base) * [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large) * [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base) * [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base) * [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large) * [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base) * [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large) * [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large) * [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large) For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). ## Intended uses & limitations #### How to use To use the transformers portion of this model: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("liaad/srl-pt_mbert-base") model = AutoModel.from_pretrained("liaad/srl-pt_mbert-base") ``` To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). ## Training procedure The model was trained on the PropBank.Br datasets, using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). 
## Eval results | Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) | | --------------- | ------ | ----- | | `srl-pt_bertimbau-base` | 76.30 | 73.33 | | `srl-pt_bertimbau-large` | 77.42 | 74.85 | | `srl-pt_xlmr-base` | 75.22 | 72.82 | | `srl-pt_xlmr-large` | 77.59 | 73.84 | | `srl-pt_mbert-base` | 72.76 | 66.89 | | `srl-en_xlmr-base` | 66.59 | 65.24 | | `srl-en_xlmr-large` | 67.60 | 64.94 | | `srl-en_mbert-base` | 63.07 | 58.56 | | `srl-enpt_xlmr-base` | 76.50 | 73.74 | | `srl-enpt_xlmr-large` | **78.22** | 74.55 | | `srl-enpt_mbert-base` | 74.88 | 69.19 | | `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 | | `ud_srl-pt_xlmr-large` | 77.69 | 74.91 | | `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** | ### BibTeX entry and citation info ```bibtex @misc{oliveira2021transformers, title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling}, author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge}, year={2021}, eprint={2101.01213}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
liaad/srl-enpt_xlmr-large
liaad
2021-09-22T08:56:23Z
14
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "xlm-roberta-large", "semantic role labeling", "finetuned", "multilingual", "pt", "en", "dataset:PropBank.Br", "dataset:CoNLL-2012", "arxiv:2101.01213", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - multilingual - pt - en tags: - xlm-roberta-large - semantic role labeling - finetuned license: apache-2.0 datasets: - PropBank.Br - CoNLL-2012 metrics: - F1 Measure --- # XLM-R large fine-tuned in English and Portuguese semantic role labeling ## Model description This model is the [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned first on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data and then fine-tuned on the PropBank.Br data. This is part of a project from which resulted the following models: * [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base) * [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large) * [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base) * [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large) * [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base) * [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base) * [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large) * [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base) * [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base) * [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large) * [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base) * [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large) * [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large) * [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large) For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). ## Intended uses & limitations #### How to use To use the transformers portion of this model: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("liaad/srl-enpt_xlmr-large") model = AutoModel.from_pretrained("liaad/srl-enpt_xlmr-large") ``` To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). #### Limitations and bias - This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow. - The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data. ## Training procedure The model was first fine-tuned on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data; then it was fine-tuned in the PropBank.Br dataset using 10-fold Cross-Validation. The resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). 
## Eval results | Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) | | --------------- | ------ | ----- | | `srl-pt_bertimbau-base` | 76.30 | 73.33 | | `srl-pt_bertimbau-large` | 77.42 | 74.85 | | `srl-pt_xlmr-base` | 75.22 | 72.82 | | `srl-pt_xlmr-large` | 77.59 | 73.84 | | `srl-pt_mbert-base` | 72.76 | 66.89 | | `srl-en_xlmr-base` | 66.59 | 65.24 | | `srl-en_xlmr-large` | 67.60 | 64.94 | | `srl-en_mbert-base` | 63.07 | 58.56 | | `srl-enpt_xlmr-base` | 76.50 | 73.74 | | `srl-enpt_xlmr-large` | **78.22** | 74.55 | | `srl-enpt_mbert-base` | 74.88 | 69.19 | | `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 | | `ud_srl-pt_xlmr-large` | 77.69 | 74.91 | | `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** | ### BibTeX entry and citation info ```bibtex @misc{oliveira2021transformers, title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling}, author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge}, year={2021}, eprint={2101.01213}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
liaad/srl-en_xlmr-large
liaad
2021-09-22T08:56:14Z
1,786
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "xlm-roberta-large", "semantic role labeling", "finetuned", "multilingual", "pt", "en", "dataset:PropBank.Br", "dataset:CoNLL-2012", "arxiv:2101.01213", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - multilingual - pt - en tags: - xlm-roberta-large - semantic role labeling - finetuned license: apache-2.0 datasets: - PropBank.Br - CoNLL-2012 metrics: - F1 Measure --- # XLM-R large fine-tuned on English semantic role labeling ## Model description This model is the [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data. This is part of a project from which resulted the following models: * [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base) * [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large) * [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base) * [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large) * [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base) * [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base) * [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large) * [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base) * [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base) * [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large) * [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base) * [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large) * [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large) * [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large) For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). ## Intended uses & limitations #### How to use To use the transformers portion of this model: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_xlmr-large") model = AutoModel.from_pretrained("liaad/srl-en_xlmr-large") ``` To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). #### Limitations and bias - This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow. - The models were trained only for 5 epochs. - The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data. ## Training procedure The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). 
## Eval results | Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) | | --------------- | ------ | ----- | | `srl-pt_bertimbau-base` | 76.30 | 73.33 | | `srl-pt_bertimbau-large` | 77.42 | 74.85 | | `srl-pt_xlmr-base` | 75.22 | 72.82 | | `srl-pt_xlmr-large` | 77.59 | 73.84 | | `srl-pt_mbert-base` | 72.76 | 66.89 | | `srl-en_xlmr-base` | 66.59 | 65.24 | | `srl-en_xlmr-large` | 67.60 | 64.94 | | `srl-en_mbert-base` | 63.07 | 58.56 | | `srl-enpt_xlmr-base` | 76.50 | 73.74 | | `srl-enpt_xlmr-large` | **78.22** | 74.55 | | `srl-enpt_mbert-base` | 74.88 | 69.19 | | `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 | | `ud_srl-pt_xlmr-large` | 77.69 | 74.91 | | `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** | ### BibTeX entry and citation info ```bibtex @misc{oliveira2021transformers, title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling}, author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge}, year={2021}, eprint={2101.01213}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
liaad/srl-en_mbert-base
liaad
2021-09-22T08:56:08Z
525
2
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "bert-base-multilingual-cased", "semantic role labeling", "finetuned", "multilingual", "pt", "en", "dataset:PropBank.Br", "dataset:CoNLL-2012", "arxiv:2101.01213", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - multilingual - pt - en tags: - bert-base-multilingual-cased - semantic role labeling - finetuned license: apache-2.0 datasets: - PropBank.Br - CoNLL-2012 metrics: - F1 Measure --- # mBERT fine-tuned on English semantic role labeling ## Model description This model is the [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data. This is part of a project from which resulted the following models: * [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base) * [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large) * [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base) * [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large) * [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base) * [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base) * [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large) * [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base) * [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base) * [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large) * [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base) * [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large) * [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large) * [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large) For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). ## Intended uses & limitations #### How to use To use the transformers portion of this model: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_mbert-base") model = AutoModel.from_pretrained("liaad/srl-en_mbert-base") ``` To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). #### Limitations and bias - The models were trained only for 5 epochs. - The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data. ## Training procedure The model was trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). 
## Eval results | Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) | | --------------- | ------ | ----- | | `srl-pt_bertimbau-base` | 76.30 | 73.33 | | `srl-pt_bertimbau-large` | 77.42 | 74.85 | | `srl-pt_xlmr-base` | 75.22 | 72.82 | | `srl-pt_xlmr-large` | 77.59 | 73.84 | | `srl-pt_mbert-base` | 72.76 | 66.89 | | `srl-en_xlmr-base` | 66.59 | 65.24 | | `srl-en_xlmr-large` | 67.60 | 64.94 | | `srl-en_mbert-base` | 63.07 | 58.56 | | `srl-enpt_xlmr-base` | 76.50 | 73.74 | | `srl-enpt_xlmr-large` | **78.22** | 74.55 | | `srl-enpt_mbert-base` | 74.88 | 69.19 | | `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 | | `ud_srl-pt_xlmr-large` | 77.69 | 74.91 | | `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** | ### BibTeX entry and citation info ```bibtex @misc{oliveira2021transformers, title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling}, author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge}, year={2021}, eprint={2101.01213}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
kiri-ai/t5-base-qa-summary-emotion
kiri-ai
2021-09-22T08:55:00Z
294
8
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering", "emotion-detection", "summarisation", "en", "dataset:coqa", "dataset:squad_v2", "dataset:go_emotions", "dataset:cnn_dailymail", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - question-answering - emotion-detection - summarisation license: apache-2.0 datasets: - coqa - squad_v2 - go_emotions - cnn_dailymail metrics: - f1 pipeline_tag: text2text-generation widget: - text: 'q: Who is Elon Musk? a: an entrepreneur q: When was he born? c: Elon Musk is an entrepreneur born in 1971. </s>' - text: 'emotion: I hope this works! </s>' --- # T5 Base with QA + Summary + Emotion ## Dependencies Requires transformers>=4.0.0 ## Description This model was finetuned on the CoQA, SQuAD 2, GoEmotions and CNN/DailyMail datasets. It achieves a score of **F1 79.5** on the SQuAD 2 dev set and a score of **F1 70.6** on the CoQA dev set. Summarisation and emotion detection have not been evaluated yet. ## Usage ### Question answering #### With Transformers ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model = T5ForConditionalGeneration.from_pretrained("kiri-ai/t5-base-qa-summary-emotion") tokenizer = T5Tokenizer.from_pretrained("kiri-ai/t5-base-qa-summary-emotion") def get_answer(question, prev_qa, context): input_text = [f"q: {qa[0]} a: {qa[1]}" for qa in prev_qa] input_text.append(f"q: {question}") input_text.append(f"c: {context}") input_text = " ".join(input_text) features = tokenizer([input_text], return_tensors='pt') tokens = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=64) return tokenizer.decode(tokens[0], skip_special_tokens=True) print(get_answer("Why is the moon yellow?", [], "I'm not entirely sure why the moon is yellow.")) # unknown context = "Elon Musk left OpenAI to avoid possible future conflicts with his role as CEO of Tesla." print(get_answer("Why not?", [("Does Elon Musk still work with OpenAI", "No")], context)) # to avoid possible future conflicts with his role as CEO of Tesla ``` #### With Kiri ```python from kiri.models import T5QASummaryEmotion context = "Elon Musk left OpenAI to avoid possible future conflicts with his role as CEO of Tesla." 
prev_qa = [("Does Elon Musk still work with OpenAI", "No")] model = T5QASummaryEmotion() # Leave prev_qa blank for non conversational question-answering model.qa("Why not?", context, prev_qa=prev_qa) > "to avoid possible future conflicts with his role as CEO of Tesla" ``` ### Summarisation #### With Transformers ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model = T5ForConditionalGeneration.from_pretrained("kiri-ai/t5-base-qa-summary-emotion") tokenizer = T5Tokenizer.from_pretrained("kiri-ai/t5-base-qa-summary-emotion") def summary(context): input_text = f"summarize: {context}" features = tokenizer([input_text], return_tensors='pt') tokens = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=64) return tokenizer.decode(tokens[0], skip_special_tokens=True) ``` #### With Kiri ```python from kiri.models import T5QASummaryEmotion model = T5QASummaryEmotion() model.summarise("Long text to summarise") > "Short summary of long text" ``` ### Emotion detection #### With Transformers ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model = T5ForConditionalGeneration.from_pretrained("kiri-ai/t5-base-qa-summary-emotion") tokenizer = T5Tokenizer.from_pretrained("kiri-ai/t5-base-qa-summary-emotion") def emotion(context): input_text = f"emotion: {context}" features = tokenizer([input_text], return_tensors='pt') tokens = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=64) return tokenizer.decode(tokens[0], skip_special_tokens=True) ``` #### With Kiri ```python from kiri.models import T5QASummaryEmotion model = T5QASummaryEmotion() model.emotion("I hope this works!") > "optimism" ``` ## About us Kiri makes using state-of-the-art models easy, accessible and scalable. [Website](https://kiri.ai) | [Natural Language Engine](https://github.com/kiri-ai/kiri)
kanishka/GlossBERT
kanishka
2021-09-22T08:54:41Z
134
1
transformers
[ "transformers", "pytorch", "bert", "glossbert", "en", "dataset:SemCor3.0", "arxiv:1908.07245", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - glossbert license: mit datasets: - SemCor3.0 --- ## GlossBERT A BERT-based model fine-tuned on SemCor 3.0 to perform word-sense-disambiguation by leveraging gloss information. This model is the research output of the paper titled: '[GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge](https://arxiv.org/pdf/1908.07245.pdf)' Disclaimer: This model was built and trained by a group of researchers different than the repository's author. The original model code can be found on github: https://github.com/HSLCY/GlossBERT ## Usage The following code loads GlossBERT: ```py from transformers import AutoTokenizer, BertForSequenceClassification tokenizer = AutoTokenizer.from_pretrained('kanishka/GlossBERT') model = BertForSequenceClassification.from_pretrained('kanishka/GlossBERT') ``` ## Citation If you use this model in any of your projects, please cite the original authors using the following bibtex: ``` @inproceedings{huang-etal-2019-glossbert, title = "{G}loss{BERT}: {BERT} for Word Sense Disambiguation with Gloss Knowledge", author = "Huang, Luyao and Sun, Chi and Qiu, Xipeng and Huang, Xuanjing", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1355", doi = "10.18653/v1/D19-1355", pages = "3507--3512" } ```
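GlossBERT frames word sense disambiguation as sentence-pair classification over context-gloss pairs. Below is a minimal sketch of scoring a single pair, assuming the input convention described in the paper (target word marked in the context, gloss prefixed with the lemma) and a two-label head whose positive class means "this gloss matches the usage"; the example sentence, gloss, and label order are illustrative, so check the original repository for the exact preprocessing.

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kanishka/GlossBERT")
model = BertForSequenceClassification.from_pretrained("kanishka/GlossBERT")

# Hypothetical context-gloss pair for the target word "bank".
context = 'He sat on the " bank " of the river and watched the water.'
gloss = "bank : sloping land beside a body of water"

# Encode the pair and score it; the positive-class probability indicates how well
# the gloss fits this occurrence of the target word (label order may differ).
inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))
```

Repeating this for every candidate gloss of the target word and choosing the highest-scoring one gives the predicted sense.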
junnyu/electra_small_generator
junnyu
2021-09-22T08:54:18Z
5
2
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "masked-lm", "en", "dataset:openwebtext", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/junnyu tags: - pytorch - electra - masked-lm license: mit datasets: - openwebtext --- # 1. An ELECTRA-small model trained by the author on the openwebtext dataset # 2. Reproduced results (dev dataset) |Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.| |---|---|---|---|---|---|---|---|---|---| |ELECTRA-Small-OWT (original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36| |**ELECTRA-Small-OWT (this)**| 55.82 |89.67|87.0|86.96|89.28|80.08|87.50|66.07|80.30| # 3. Training details - Dataset: openwebtext - Training batch_size: 256 - Learning rate (lr): 5e-4 - Maximum sentence length (max_seqlen): 128 - Total training steps: 625,000 - GPU: RTX 3090 - Total training time: about 2.5 days # 4. Usage ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="junnyu/electra_small_generator", tokenizer="junnyu/electra_small_generator" ) print( fill_mask("HuggingFace is creating a [MASK] that the community uses to solve NLP tasks.") ) ```
jimregan/wav2vec2-large-xlsr-irish-basic
jimregan
2021-09-22T08:52:55Z
20
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ga", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ga datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Irish by Jim O'Regan results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ga-IE type: common_voice args: ga-IE metrics: - name: Test WER type: wer value: 47.4 --- # Wav2Vec2-Large-XLSR-Irish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Irish Common Voice dataset](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Irish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ga-IE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic") model.to("cuda") # So, tolower() for Irish is a bit complicated: tAthar -> t-athair # toupper() is non-deterministic :) def is_upper_vowel(letter): if letter in ['A', 'E', 'I', 'O', 'U', 'Á', 'É', 'Í', 'Ó', 'Ú']: return True else: return False def irish_lower(word): if len(word) > 1 and word[0] in ['n', 't'] and is_upper_vowel(word[1]): return word[0] + '-' + word[1:].lower() else: return word.lower() def irish_lower_sentence(sentence): return " ".join([irish_lower(w) for w in sentence.split(" ")]) chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*]' def remove_special_characters(sentence): tmp = re.sub('’ ', ' ', sentence) tmp = re.sub("’$", '', tmp) tmp = re.sub('’', '\'', tmp) tmp = re.sub(chars_to_ignore_regex, '', tmp) sentence = irish_lower_sentence(tmp) + ' ' return sentence resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. 
# We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = remove_special_characters(batch["sentence"]) speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 43.7 % ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/irish/fine-tune-xlsr-wav2vec2-on-irish-asr-with-transformers.ipynb)
jannesg/takalane_xho_roberta
jannesg
2021-09-22T08:52:19Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "xho", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - xho thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg tags: - xho - fill-mask - pytorch - roberta - masked-lm license: mit --- # Takalani Sesame - Xhosa 🇿🇦 <img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> ## Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_xho_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_xho_roberta") ``` #### Limitations and bias Updates will be added continuously to improve performance. ## Training data Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/> **Sentences:** 100000 ## Training procedure No preprocessing. Standard Huggingface hyperparameters. ## Author Jannes Germishuys [website](http://jannesgg.github.io)
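Since the checkpoint is a RoBERTa-style masked-language model, it can also be tried directly with the fill-mask pipeline; the isiXhosa prompt below is only a hypothetical example (RoBERTa tokenizers use `<mask>` as the mask token).

```python
from transformers import pipeline

# Minimal sketch: predict the masked token in an isiXhosa sentence (prompt is illustrative).
fill_mask = pipeline("fill-mask", model="jannesg/takalane_xho_roberta")
print(fill_mask("Molo, unjani <mask>?"))
```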
jannesg/takalane_tso_roberta
jannesg
2021-09-22T08:52:13Z
9
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "ts", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - ts thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg tags: - ts - fill-mask - pytorch - roberta - masked-lm license: mit --- # Takalani Sesame - Tsonga 🇿🇦 <img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> ## Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_tso_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_tso_roberta") ``` #### Limitations and bias Updates will be added continuously to improve performance. ## Training data Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/> **Sentences:** 20000 ## Training procedure No preprocessing. Standard Huggingface hyperparameters. ## Author Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_ssw_roberta
jannesg
2021-09-22T08:52:08Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "tn", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: 
- tn
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- tn
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---

# Takalani Sesame - Tswana 🇿🇦

<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> 

## Model description

Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ssw_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ssw_roberta")
```

#### Limitations and bias

Updates will be added continuously to improve performance. 

## Training data

Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 380

## Training procedure

No preprocessing. Standard Huggingface hyperparameters.

## Author

Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_sot_roberta
jannesg
2021-09-22T08:52:06Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "sot", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: 
- sot
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- sot
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---

# Takalani Sesame - Southern Sotho 🇿🇦

<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> 

## Model description

Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_sot_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_sot_roberta")
```

#### Limitations and bias

Updates will be added continuously to improve performance. 

## Training data

Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 20000

## Training procedure

No preprocessing. Standard Huggingface hyperparameters.

## Author

Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_nso_roberta
jannesg
2021-09-22T08:52:04Z
5
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "nso", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: 
- nso
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- nso
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---

# Takalani Sesame - Northern Sotho 🇿🇦

<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> 

## Model description

Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nso_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nso_roberta")
```

#### Limitations and bias

Updates will be added continuously to improve performance. 

## Training data

Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 4746

## Training procedure

No preprocessing. Standard Huggingface hyperparameters.

## Author

Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_nbl_roberta
jannesg
2021-09-22T08:52:01Z
7
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "nr", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: 
- nr
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- nr
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---

# Takalani Sesame - Ndebele 🇿🇦

<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> 

## Model description

Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular to look at techniques for low-resource languages to equalise performance with larger languages around the world.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nbl_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_nbl_roberta")
```

#### Limitations and bias

Updates will be added continuously to improve performance. This is a very low-resource language, so results may be poor at first.

## Training data

Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 318M

## Training procedure

No preprocessing. Standard Huggingface hyperparameters.

## Author

Jannes Germishuys [website](http://jannesgg.github.io)
jannesg/takalane_afr_roberta
jannesg
2021-09-22T08:51:59Z
31
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "af", "masked-lm", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - af thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg tags: - af - fill-mask - pytorch - roberta - masked-lm license: mit --- # Takalani Sesame - Salie - Afrikaans 🇿🇦 <img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/> ## Model description Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world. ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_afr_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_afr_roberta") ``` #### Limitations and bias Updates will be added continuously to improve performance. ## Training data Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/> **Sentences:** 2.8M ## Training procedure No preprocessing. Standard Huggingface hyperparameters. ## Author Jannes Germishuys [website](http://jannesgg.github.io)
gorkemgoknar/gpt2-turkish-writer
gorkemgoknar
2021-09-22T08:29:24Z
140
11
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "turkish", "aiwriter", "finetuned", "tr", "dataset:wikipedia-turkish", "dataset:custom-book-corpus", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: 
- tr
thumbnail: 
tags:
- gpt2
- turkish
- aiwriter
- finetuned
license: apache-2.0
datasets:
- wikipedia-turkish
- custom-book-corpus
metrics:
- perplexity
- accuracy
widget:
- text: Bir zaman topu olan ama köpeği olmayan bir çocuk vardı. Parkta 
  context: ''
- text: 'Uzun uzun sahile doğru baktı. Düşündüklerinden '
  context: ''
- text: Çok uzun zaman önce galaksinin uzak bir köşesinde... 
  context: ''
- text: "'Bugün kendimi çok hasta hissediyorum' dedi. Karşısında "
  context: ''
---

# Turkish AI Writer based on GPT2-Small
# Türkçe Yapay Zeka Yazarı

## Model description

This model is an enhanced version of the fine-tuned gpt2-small-turkish model. In addition to the 28-10-2020 Turkish Wikipedia article dump, this model was trained on more than 400 classic novels and plays in Turkish (including Dostoevsky, Shakespeare and Dumas).

The base work follows Pierre Guillou's tutorial, as on this page: (https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)

Note that since Turkish is not as close to English as Portuguese is, the last 3 layers were trained instead of only the last 2.

The code was converted to work with fastai 2.X, and Google Colab was used for training.

Current accuracy: 36.3%, perplexity: 44.75

Demo (using CPU inference) is available on: http://www.metayazar.com

Models are available:

* [gpt2-small-tuned-tr](https://huggingface.co/gorkemgoknar/gpt2-small-turkish)
* [gpt2-small-turkish-writer](https://huggingface.co/gorkemgoknar/gpt2-turkish-writer)

## Intended uses & limitations

#### How to use

#### Install

```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

tokenizer = AutoTokenizer.from_pretrained("gorkemgoknar/gpt2-turkish-writer")
model = AutoModelWithLMHead.from_pretrained("gorkemgoknar/gpt2-turkish-writer")

# Get sequence length max of 1024
tokenizer.model_max_length=1024 

model.eval()  # disable dropout (or leave in train mode to finetune)
```

#### Generate 1 word

```python
# input sequence
text = "Bu yazıyı bilgisayar yazdı."
inputs = tokenizer(text, return_tensors="pt")

# model output
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])

# results
print('input text:', text)
print('predicted text:', predicted_text)

# input text:
# predicted text:
```

#### Generate Full Sequence

```python
# input sequence
text = "Bu yazıyı bilgisayar yazdı."
inputs = tokenizer(text, return_tensors="pt")

# model output using Top-k sampling text generation method
sample_outputs = model.generate(inputs.input_ids,
                                pad_token_id=50256,
                                do_sample=True,
                                max_length=50,  # put the token number you want
                                top_k=40,
                                num_return_sequences=1)

# generated sequence
for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))

# >> Generated text
#
```

#### Limitations and bias

The training data used for this model come from Turkish Wikipedia and books. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Also, not much pre-processing was done on the books, so chapter names and page numbers can appear in some cases. This is a work in progress.
## Training data

Wikipedia Turkish article dump as of 28-10-2020
Turkish book dataset of >400 classic novels

## Training procedure

## Eval results

| epoch |train_loss |valid_loss |accuracy |perplexity |time |
| ----- | -------- |--------- | ---------- | --------- | ----- |
|0 |4.497828 |4.549605 |0.277328 |94.595070 |2:09:58|
|1 |4.503929 |4.519456 |0.275071 |91.785645 |2:04:30|
|2 |3.612716 |3.921146 |0.344802 |50.458256 |2:03:22|
|3 |3.777645 |4.072006 |0.326130 |58.674530 |1:56:14|
|4 |2.934462 |3.801303 |0.363719 |44.759476 |1:58:55|

Note: 1cycle policy training was used, and the epochs were run at different times.
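As a quick sanity check on the table above, the reported perplexities are simply the exponential of the validation losses (assuming natural-log losses, as fastai reports them); this is not from the original card, just a verification:

```python
import math

# e.g. epoch 4: valid_loss = 3.801303 -> perplexity ≈ 44.76
print(math.exp(3.801303))  # ≈ 44.7594, matching the reported 44.759476
```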
gagan3012/k2t
gagan3012
2021-09-22T08:27:36Z
312
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t", "Keywords to Sentences", "en", "dataset:WebNLG", "dataset:Dart", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: Keywords to Sentences tags: - keytotext - k2t - Keywords to Sentences license: mit datasets: - WebNLG - Dart metrics: - NLG --- # keytotext ![keytotext (1)](https://user-images.githubusercontent.com/49101362/116334480-f5e57a00-a7dd-11eb-987c-186477f94b6e.png) Idea is to build a model which will take keywords as inputs and generate sentences as outputs. ### Keytotext is powered by Huggingface 🤗 [![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/) [![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ## Model: Keytotext is based on the Amazing T5 Model: - `k2t`: [Model](https://huggingface.co/gagan3012/k2t) - `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny) - `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base) Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder ## Usage: Example usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder ``` pip install keytotext ``` ![carbon (3)](https://user-images.githubusercontent.com/49101362/116220679-90e64180-a755-11eb-9246-82d93d924a6c.png) ## UI: UI: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ``` pip install streamlit-tags ``` This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags) ![image](https://user-images.githubusercontent.com/49101362/116162205-fc042980-a6fd-11eb-892e-8f6902f193f4.png)
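Since the card only shows the `pip install` step (the actual usage is embedded in an image), here is a minimal sketch, not taken from the keytotext docs, that drives the underlying T5 checkpoint directly with `transformers`; the space-separated keyword input format and the generation settings are assumptions on my part:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gagan3012/k2t")
model = AutoModelForSeq2SeqLM.from_pretrained("gagan3012/k2t")

# Assumed input format: keywords given as plain space-separated text.
keywords = "India wedding food"
inputs = tokenizer(keywords, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```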
gagan3012/k2t-tiny
gagan3012
2021-09-22T08:27:33Z
8
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t-tiny", "Keywords to Sentences", "en", "dataset:WebNLG", "dataset:Dart", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: Keywords to Sentences tags: - keytotext - k2t-tiny - Keywords to Sentences license: mit datasets: - WebNLG - Dart metrics: - NLG --- # keytotext ![keytotext (1)](https://user-images.githubusercontent.com/49101362/116334480-f5e57a00-a7dd-11eb-987c-186477f94b6e.png) Idea is to build a model which will take keywords as inputs and generate sentences as outputs. ### Keytotext is powered by Huggingface 🤗 [![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/) [![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ## Model: Keytotext is based on the Amazing T5 Model: - `k2t`: [Model](https://huggingface.co/gagan3012/k2t) - `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny) - `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base) Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder ## Usage: Example usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder ``` pip install keytotext ``` ![carbon (3)](https://user-images.githubusercontent.com/49101362/116220679-90e64180-a755-11eb-9246-82d93d924a6c.png) ## UI: UI: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ``` pip install streamlit-tags ``` This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags) ![image](https://user-images.githubusercontent.com/49101362/116162205-fc042980-a6fd-11eb-892e-8f6902f193f4.png)
gagan3012/k2t-base
gagan3012
2021-09-22T08:27:23Z
87
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t-base", "Keywords to Sentences", "en", "dataset:WebNLG", "dataset:Dart", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: Keywords to Sentences tags: - keytotext - k2t-base - Keywords to Sentences license: mit datasets: - WebNLG - Dart metrics: - NLG --- # keytotext ![keytotext (1)](https://user-images.githubusercontent.com/49101362/116334480-f5e57a00-a7dd-11eb-987c-186477f94b6e.png) Idea is to build a model which will take keywords as inputs and generate sentences as outputs. ### Keytotext is powered by Huggingface 🤗 [![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/) [![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ## Model: Keytotext is based on the Amazing T5 Model: - `k2t`: [Model](https://huggingface.co/gagan3012/k2t) - `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny) - `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base) Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder ## Usage: Example usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder ``` pip install keytotext ``` ![carbon (3)](https://user-images.githubusercontent.com/49101362/116220679-90e64180-a755-11eb-9246-82d93d924a6c.png) ## UI: UI: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ``` pip install streamlit-tags ``` This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags) ![image](https://user-images.githubusercontent.com/49101362/116162205-fc042980-a6fd-11eb-892e-8f6902f193f4.png)
flax-community/medclip
flax-community
2021-09-22T08:25:55Z
4
2
transformers
[ "transformers", "jax", "tensorboard", "hybrid-clip", "vision", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en tags: - vision license: apache-2.0 --- # MedCLIP ## Model description ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure Preprocessing, hardware used, hyperparameters... ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
flax-community/gpt-neo-125M-apps-all
flax-community
2021-09-22T08:25:32Z
5
2
transformers
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en - python license: mit tags: - gpt_neo - code_synthesis datasets: - apps --- # GPT-Neo-125M-APPS-all > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** ## Model Description GPT-Neo-125M-APPS-all is a GPT-Neo-125M finetuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-125M-apps \ --model_name_or_path EleutherAI/gpt-neo-125B \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="16" \ --per_device_eval_batch_size="16" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 2 \ --all_data true \ ``` ## Intended Use and Limitations The model is finetuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-125M-apps-alldata") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. 
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Failing to properly evaluate the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, which are capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, and as shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting differs from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
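Note that the usage snippet earlier in this card moves tensors to a `device` variable that is never defined; a minimal sketch of the missing setup (my assumption, using standard PyTorch device selection, not part of the original card) would be:

```python
import torch

# Pick a GPU when available, otherwise fall back to CPU, and move the model accordingly.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
```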
flax-community/gpt-neo-1.3B-apps
flax-community
2021-09-22T08:25:27Z
6
3
transformers
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en - python license: mit tags: - gpt_neo - code_synthesis datasets: - apps --- # GPT-Neo-1.3B-APPS > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** ## Model Description GPT-Neo-1.3B-APPS is a GPT-Neo-125M finetuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-1.3B-apps \ --model_name_or_path EleutherAI/gpt-neo-1.3B \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="3" \ --per_device_eval_batch_size="3" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 1 \ ``` ## Intended Use and Limitations The model is finetuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. 
Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.

1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Failing to properly evaluate the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, which are capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, and as shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting differs from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
flax-community/gpt-neo-1.3B-apps-all-2
flax-community
2021-09-22T08:25:21Z
5
2
transformers
[ "transformers", "pytorch", "jax", "gpt_neo", "text-generation", "code_synthesis", "dataset:apps", "arxiv:2107.03374", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en - python license: mit tags: - gpt_neo - code_synthesis datasets: - apps --- # GPT-Code-Clippy-1.3B-APPS-all ## Model Description GPT-Neo-1.3B-APPS-all is a GPT-Neo-1.3B fine-tuned on APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-1.3B-apps). ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using AdamW optimizer and leaner decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ``` python run_clm_apps.py \ --output_dir ./gpt-neo-125M-apps \ --model_name_or_path EleutherAI/gpt-neo-125B \ --dataset_name ./apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="3" \ --per_device_eval_batch_size="3" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 1 \ --all_data true \ ``` ## Intended Use and Limitations The model is fine-tuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer, FlaxAutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-1.3B-apps-all-2") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-1.3B-apps-all-2") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discuss are highlighted here as it pertains to this dataset and models that may be trained from it. 
**Some differences in views from the paper, particularly around legal implications, are also noted.**

1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Failing to properly evaluate the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, which are capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, and as shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting differs from that used in the APPS dataset. This model is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.

## Eval results

Coming soon...
digitalepidemiologylab/covid-twitter-bert-v2
digitalepidemiologylab
2021-09-22T08:20:06Z
514
4
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "Twitter", "COVID-19", "en", "arxiv:2005.07503", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png tags: - Twitter - COVID-19 license: mit --- # COVID-Twitter-BERT v2 ## Model description BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. This model is identical to [covid-twitter-bert](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert) - but trained on more data, resulting in higher downstream performance. Find more info on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert). ## Intended uses & limitations The model can e.g. be used in the `fill-mask` task (see below). You can also use the model without the MLM/NSP heads and train a classifier with it. #### How to use ```python from transformers import pipeline import json pipe = pipeline(task='fill-mask', model='digitalepidemiologylab/covid-twitter-bert-v2') out = pipe(f"In places with a lot of people, it's a good idea to wear a {pipe.tokenizer.mask_token}") print(json.dumps(out, indent=4)) [ { "sequence": "[CLS] in places with a lot of people, it's a good idea to wear a mask [SEP]", "score": 0.9998226761817932, "token": 7308, "token_str": "mask" }, ... ] ``` ## Training procedure This model was trained on 97M unique tweets (1.2B training examples) collected between January 12 and July 5, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training. ## Eval results The model was evaluated based on downstream Twitter text classification tasks from previous SemEval challenges. ### BibTeX entry and citation info ```bibtex @article{muller2020covid, title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter}, author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E}, journal={arXiv preprint arXiv:2005.07503}, year={2020} } ``` or ```Martin Müller, Marcel Salathé, and Per E. Kummervold. COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter. arXiv preprint arXiv:2005.07503 (2020). ```
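The card above notes that "You can also use the model without the MLM/NSP heads and train a classifier with it"; a minimal sketch of what that looks like (mine, not from the card, with an assumed binary-label setup) is:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert-v2")
# A new, randomly initialised classification head is placed on top of the pretrained encoder;
# num_labels=2 is just an assumed binary-classification setup and should match your task.
model = AutoModelForSequenceClassification.from_pretrained(
    "digitalepidemiologylab/covid-twitter-bert-v2", num_labels=2
)
```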
digitalepidemiologylab/covid-twitter-bert-v2-mnli
digitalepidemiologylab
2021-09-22T08:20:04Z
14
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "Twitter", "COVID-19", "tensorflow", "zero-shot-classification", "en", "dataset:mnli", "arxiv:1909.00161", "arxiv:2005.07503", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
---
language:
- en
thumbnail: https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png
tags:
- Twitter
- COVID-19
- text-classification
- pytorch
- tensorflow
- bert
license: mit
datasets:
- mnli
pipeline_tag: zero-shot-classification
widget:
- text: To stop the pandemic it is important that everyone turns up for their shots.
  candidate_labels: health, sport, vaccine, guns
---

# COVID-Twitter-BERT v2 MNLI

## Model description

This model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.

The technique is based on [Yin et al.](https://arxiv.org/abs/1909.00161). The article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The model is already finetuned on 400'000 generic logical tasks. We can then use it as a zero-shot classifier by reformulating the classification task as a question.

Let's say we want to classify COVID-tweets as vaccine-related and not vaccine-related. The typical way would be to collect a few hundred pre-annotated tweets and organise them into two classes. Then you would finetune the model on this. With the zero-shot MNLI classifier, you can instead reformulate your question as "This text is about vaccines", and use this directly at inference time - without any training.

Find more info about the model on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).

## Usage

Please note that how you formulate the question can give slightly different results. Collecting a training set and finetuning on it will most likely give you better accuracy.

The easiest way to try this out is by using the Hugging Face pipeline. This uses the default English template, which puts the text "This example is " in front of the label.

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="digitalepidemiologylab/covid-twitter-bert-v2-mnli")
```

You can then use this pipeline to classify sequences into any of the class names you specify.

```python
sequence_to_classify = 'To stop the pandemic it is important that everyone turns up for their shots.'
candidate_labels = ['health', 'sport', 'vaccine','guns']
hypothesis_template = 'This example is {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
```

## Training procedure

The model is finetuned on the 400k large [MNLI-task](https://cims.nyu.edu/~sbowman/multinli/).

## References

```bibtex
@article{muller2020covid,
  title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
  author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
  journal={arXiv preprint arXiv:2005.07503},
  year={2020}
}
```

or

```
Martin Müller, Marcel Salathé, and Per E. Kummervold. 
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter. 
arXiv preprint arXiv:2005.07503 (2020).
```
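To make the entailment trick described above concrete, here is a minimal sketch (mine, not from the card) that scores a single label hypothesis directly with the NLI head; the label-index ordering is model-specific, so the code reads it from the config rather than assuming it:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

name = "digitalepidemiologylab/covid-twitter-bert-v2-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "To stop the pandemic it is important that everyone turns up for their shots."
hypothesis = "This example is vaccine."  # hypothesis built from the candidate label, as in the pipeline

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map probabilities back to the NLI classes; the entailment probability is what the
# zero-shot pipeline uses as the score for the candidate label being valid.
probs = logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```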
Coolhand/Abuela
Coolhand
2021-09-22T08:19:41Z
0
1
null
[ "image_restoration", "superresolution", "en", "arxiv:2009.07047", "license:mit", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: - en thumbnail: https://github.com/Nick-Harvey/for_my_abuela/blob/master/cuban_large.jpg tags: - image_restoration - superresolution license: mit metrics: --- @inproceedings{wan2020bringing, title={Bringing Old Photos Back to Life}, author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={2747--2757}, year={2020} } @article{wan2020old, title={Old Photo Restoration via Deep Latent Space Translation}, author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang}, journal={arXiv preprint arXiv:2009.07047}, year={2020} }
cristian-popa/bart-tl-ng
cristian-popa
2021-09-22T08:18:06Z
21
4
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "topic labeling", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en <!-- thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png --> tags: - topic labeling license: apache-2.0 metrics: - ndcg --- # MyModel ## Model description This is the `BART-TL-ng` model from the paper [BART-TL: Weakly-Supervised Topic Label Generation](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf). We aim to solve the topic labeling task using generative methods, rather than selection from a pool of labels as was done in previous State of the Art works. For more details not covered here, you can read the paper or look at the open-source implementation: https://github.com/CristianViorelPopa/BART-TL-topic-label-generation. There are two models made available from the paper: * [BART-TL-all](https://huggingface.co/cristian-popa/bart-tl-all) * [BART-TL-ng](https://huggingface.co/cristian-popa/bart-tl-ng) ## Intended uses & limitations #### How to use The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model. ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM mname = "cristian-popa/bart-tl-ng" tokenizer = AutoTokenizer.from_pretrained(mname) model = AutoModelForSeq2SeqLM.from_pretrained(mname) input = "site web google search website online internet social content user" enc = tokenizer(input, return_tensors="pt", truncation=True, padding="max_length", max_length=128) outputs = model.generate( input_ids=enc.input_ids, attention_mask=enc.attention_mask, max_length=15, min_length=1, do_sample=False, num_beams=25, length_penalty=1.0, repetition_penalty=1.5 ) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # windows live messenger ``` #### Limitations and bias The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy. ## Training data The model was fine-tuned on 5 different StackExchange corpora (see https://archive.org/download/stackexchange for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here. ## Training procedure The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the [NETL](https://www.aclweb.org/anthology/C16-1091.pdf) method, along with n-grams from the topics. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the [paper](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf) or by following [this notebook](https://github.com/CristianViorelPopa/BART-TL-topic-label-generation/blob/main/notebooks/end_to_end_workflow.ipynb). ## Eval results model | Top-1 Avg. | Top-3 Avg. | Top-5 Avg. 
| nDCG-1 | nDCG-3 | nDCG-5 ------------|------------|------------|------------|--------|--------|------- NETL (U) | 2.66 | 2.59 | 2.50 | 0.83 | 0.85 | 0.87 NETL (S) | 2.74 | 2.57 | 2.49 | 0.88 | 0.85 | 0.88 BART-TL-all | 2.64 | 2.52 | 2.43 | 0.83 | 0.84 | 0.87 BART-TL-ng | 2.62 | 2.50 | 2.33 | 0.82 | 0.84 | 0.85 ### BibTeX entry and citation info ```bibtex @inproceedings{popa-rebedea-2021-bart, title = "{BART}-{TL}: Weakly-Supervised Topic Label Generation", author = "Popa, Cristian and Rebedea, Traian", booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.eacl-main.121", pages = "1418--1425", abstract = "We propose a novel solution for assigning labels to topic models by using multiple weak labelers. The method leverages generative transformers to learn accurate representations of the most important topic terms and candidate labels. This is achieved by fine-tuning pre-trained BART models on a large number of potential labels generated by state of the art non-neural models for topic labeling, enriched with different techniques. The proposed BART-TL model is able to generate valuable and novel labels in a weakly-supervised manner and can be improved by adding other weak labelers or distant supervision on similar tasks.", } ```
bagdaebhishek/IndianPoliticalTweetsLMMedium
bagdaebhishek
2021-09-22T08:13:46Z
17
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "India", "politics", "tweets", "BJP", "Congress", "AAP", "lm-head", "en", "dataset:Twitter", "dataset:IndianPolitics", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg tags: - India - politics - tweets - BJP - Congress - AAP - pytorch - gpt2 - lm-head - text-generation license: apache-2.0 datasets: - Twitter - IndianPolitics --- # Model name Indian Political Tweets LM Medium (Based on GPT2-Medium) ## Model description This is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. This model is finetuned using GPT2-medium instead of the vanilla GPT2 implementation. This model has more parameters but it is able to model language slightly better. ## Intended uses & limitations This finetuned model can be used to generate tweets which are related to Indian politics. #### How to use ```python from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM") model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM") text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer) init_sentence = "India will always be" print(text_generator(init_sentence)) ``` #### Limitations and bias 1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text. 2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc. 3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model. ## Training data I used the pre-trained gpt2-medium model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog. ## Training procedure For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values. I then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles. ### Hardware 1. GPU: GTX 1080Ti 2. CPU: Ryzen 3900x 3. RAM: 32GB This model took roughly 36 hours to fine-tune.
Luyu/bert-base-mdoc-bm25
Luyu
2021-09-22T08:11:56Z
3,789
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "text reranking", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language:
- en
tags:
- text reranking
license: apache-2.0
datasets:
- MS MARCO document ranking
---

# BERT Reranker for MS-MARCO Document Ranking

## Model description

A text reranker trained for the BM25 retriever on the MS MARCO document dataset.

## Intended uses & limitations

It is possible to use the reranker with other retrievers, but it works best with the aligned BM25 retriever. We used the anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87), following [this instruction](https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-doc.md).

#### How to use

See our [project repo page](https://github.com/luyug/Reranker).

## Eval results

MRR @10: 0.423 on Dev.

### BibTeX entry and citation info

```bibtex
@inproceedings{gao2021lce,
        title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline}, 
        author={Luyu Gao and Zhuyun Dai and Jamie Callan},
        year={2021},
        booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
```
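Since the card itself does not include code, here is a minimal scoring sketch (mine, not from the Reranker repo, which remains the canonical reference); the query text, document text, and the assumption that a (query, document) pair is encoded as a single sequence pair are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

name = "Luyu/bert-base-mdoc-bm25"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

query = "what is the daily recommended intake of vitamin d"
doc = "Vitamin D helps the body absorb calcium and supports bone health ..."

# Assumed pairing convention: encode (query, document) as one sequence pair and read the logits.
inputs = tokenizer(query, doc, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits  # higher score = more relevant, in the usual reranker setup
print(score)
```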
VirenS13117/distilbert-base-uncased-finetuned-cola
VirenS13117
2021-09-21T22:22:02Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5286324175580216 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7809 - Matthews Correlation: 0.5286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5299 | 1.0 | 535 | 0.5040 | 0.4383 | | 0.3472 | 2.0 | 1070 | 0.5284 | 0.4911 | | 0.2333 | 3.0 | 1605 | 0.6633 | 0.5091 | | 0.1733 | 4.0 | 2140 | 0.7809 | 0.5286 | | 0.1255 | 5.0 | 2675 | 0.8894 | 0.5282 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
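As a usage sketch not included in the auto-generated card above: the checkpoint is a text-classification fine-tune on CoLA (linguistic acceptability), so it can be queried through the standard pipeline. The example sentence is mine, and the raw label names (e.g. LABEL_0 / LABEL_1) depend on the checkpoint's config and are not documented in the card:

```python
from transformers import pipeline

# Predicts CoLA-style acceptability; inspect the returned label names against the model config.
classifier = pipeline("text-classification",
                      model="VirenS13117/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by the author."))
```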
huggingtweets/boss_lady_fenja-ladyfenja_promo
huggingtweets
2021-09-21T16:19:05Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/boss_lady_fenja-ladyfenja_promo/1632241140819/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1424482960749776907/NL5l0P9Q_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1432371607977275395/j60VC-cp_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">✨Boss Lady Fenja✨ 9.6% 🦋 & Boss_Lady_Fenja_promo</div> <div style="text-align: center; font-size: 14px;">@boss_lady_fenja-ladyfenja_promo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ✨Boss Lady Fenja✨ 9.6% 🦋 & Boss_Lady_Fenja_promo. | Data | ✨Boss Lady Fenja✨ 9.6% 🦋 | Boss_Lady_Fenja_promo | | --- | --- | --- | | Tweets downloaded | 3153 | 654 | | Retweets | 380 | 240 | | Short tweets | 646 | 160 | | Tweets kept | 2127 | 254 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jpqrjjb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boss_lady_fenja-ladyfenja_promo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10coew7p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10coew7p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/boss_lady_fenja-ladyfenja_promo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lighteternal/nli-xlm-r-greek
lighteternal
2021-09-21T16:01:42Z
57
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "xlm-roberta-base", "zero-shot-classification", "el", "en", "dataset:multi_nli", "dataset:snli", "dataset:allnli_greek", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - el - en tags: - xlm-roberta-base datasets: - multi_nli - snli - allnli_greek metrics: - accuracy pipeline_tag: zero-shot-classification widget: - text: "Η Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας." candidate_labels: "τεχνολογία, πολιτική, αθλητισμός" multi_class: false license: apache-2.0 --- # Cross-Encoder for Greek Natural Language Inference (Textual Entailment) & Zero-Shot Classification ## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the the combined Greek+English version of the AllNLI dataset(sum of [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)). The Greek part was created using the EN2EL NMT model available [here](https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased). The model can be used in two ways: * NLI/Textual Entailment: For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. * Zero-shot classification through the Huggingface pipeline: Given a sentence and a set of labels/topics, it will output the likelihood of the sentence belonging to each of the topic. Under the hood, the logit for entailment between the sentence and each label is taken as the logit for the candidate label being valid. ## Performance Evaluation on classification accuracy (entailment, contradiction, neutral) on mixed (Greek+English) AllNLI-dev set: | Metric | Value | | --- | --- | | Accuracy | 0.8409 | ## To use the model for NLI/Textual Entailment #### Usage with sentence_transformers Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('lighteternal/nli-xlm-r-greek') scores = model.predict([('Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'), ('Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο'), ('Δυο γυναίκες μιλάνε στο κινητό', 'Το τραπέζι ήταν πράσινο')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] print(scores, labels) # Οutputs #[[-3.1526504 2.9981945 -0.3108107] # [ 5.0549307 -2.757949 -1.6220676] # [-0.5124733 -2.2671669 3.1630592]] ['entailment', 'contradiction', 'neutral'] ``` #### Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek') tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek') features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'], ['Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## To use the model for Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from 
transformers import pipeline classifier = pipeline("zero-shot-classification", model='lighteternal/nli-xlm-r-greek') sent = "Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας" candidate_labels = ["πολιτική", "τεχνολογία", "αθλητισμός"] res = classifier(sent, candidate_labels) print(res) #outputs: #{'sequence': 'Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας', 'labels': ['τεχνολογία', 'αθλητισμός', 'πολιτική'], 'scores': [0.8380699157714844, 0.09086982160806656, 0.07106029987335205]} ``` ### Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call) ### Citation info Citation for the Greek model TBA. Based on the work [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) Kudos to @nreimers (Nils Reimers) for his support on Github .
ancs21/xlm-roberta-large-vi-qa
ancs21
2021-09-21T16:01:14Z
129
4
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "vi", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: vi tags: - vi - xlm-roberta widget: - text: Toà nhà nào cao nhất Việt Nam? context: Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng, một dự án có tổng mức đầu tư 40.000 tỷ đồng, do Công ty Cổ phần Đầu tư xây dựng Tân Liên Phát thuộc Vingroup làm chủ đầu tư. Toà tháp cao 81 tầng, hiện tại là toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018. license: mit metrics: - f1 - em --- # XLM-RoBERTa large for QA on Vietnamese languages (also support various languages) ## Overview - Language model: xlm-roberta-large - Fine-tune: [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) - Language: Vietnamese - Downstream-task: Extractive QA - Dataset: [mailong25/bert-vietnamese-question-answering](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset) - Training data: train-v2.0.json (SQuAD 2.0 format) - Eval data: dev-v2.0.json (SQuAD 2.0 format) - Infrastructure: 1x Tesla P100 (Google Colab) ## Performance Evaluated on dev-v2.0.json ``` exact: 136 / 141 f1: 0.9692671394799054 ``` Evaluated on Vietnamese XQuAD: [xquad.vi.json](https://github.com/deepmind/xquad/blob/master/xquad.vi.json) ``` exact: 604 / 1190 f1: 0.7224454217571596 ``` ## Author An Pham (ancs21.ps [at] gmail.com) ## License MIT
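For quick experimentation, the checkpoint can be loaded through the Transformers `question-answering` pipeline. The snippet below is a minimal sketch that reuses the widget example from this card; the printed answer and score are illustrative only.

```python
from transformers import pipeline

# Load the checkpoint with the extractive QA pipeline.
qa = pipeline("question-answering", model="ancs21/xlm-roberta-large-vi-qa")

# Question and context taken from the widget example above (context abbreviated).
result = qa(
    question="Toà nhà nào cao nhất Việt Nam?",
    context=(
        "Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng. "
        "Toà tháp cao 81 tầng, hiện tại là toà nhà cao nhất Việt Nam."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```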
Mulin/my_wolf_model
Mulin
2021-09-21T15:39:03Z
5
0
transformers
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
My first model, for wolf text classification.
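Since the card gives no usage details, the following is only a hedged sketch: it assumes the repository's TensorFlow weights load through the `text-classification` pipeline, and the label names (`LABEL_0`, `LABEL_1`, ...) come from the default config rather than from any documented class list.

```python
from transformers import pipeline

# framework="tf" because the repository appears to ship TensorFlow weights only
# (based on the "tf" tag above); the label set is undocumented, so the output
# labels are whatever the saved config defines.
classifier = pipeline(
    "text-classification",
    model="Mulin/my_wolf_model",
    framework="tf",
)
print(classifier("A grey wolf was spotted near the edge of the forest."))
```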
gniemiec/mt5-small-finetuned-xsum
gniemiec
2021-09-21T13:22:57Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: mt5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 2.8351 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-xsum This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 2.8351 - Rouge2: 0.3143 - Rougel: 2.6488 - Rougelsum: 2.6463 - Gen Len: 4.9416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | nan | 1.0 | 12753 | nan | 2.8351 | 0.3143 | 2.6488 | 2.6463 | 4.9416 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
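A minimal inference sketch, assuming the checkpoint is used through the `summarization` pipeline; note that the card reports a validation loss of `nan` and very low ROUGE scores, so generations may be short or degenerate.

```python
from transformers import pipeline

# Summarization sketch; output quality is expected to be poor given the
# reported nan loss and low ROUGE on xsum.
summarizer = pipeline("summarization", model="gniemiec/mt5-small-finetuned-xsum")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris."
)
print(summarizer(article, max_length=32, min_length=5, do_sample=False))
```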
Frederick0291/t5-small-finetuned-billsum
Frederick0291
2021-09-21T08:33:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:billsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: t5-small-finetuned-billsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum args: default metrics: - name: Rouge1 type: rouge value: 16.6044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-billsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.0972 - Rouge1: 16.6044 - Rouge2: 12.8656 - Rougel: 15.7876 - Rougelsum: 15.9784 - Gen Len: 18.9948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.3854 | 1.0 | 2369 | 2.0972 | 16.6044 | 12.8656 | 15.7876 | 15.9784 | 18.9948 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
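A usage sketch with beam search is shown below. The `summarize:` prefix follows the standard T5 text-to-text convention and is an assumption here, since the card does not state the exact preprocessing used during fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Frederick0291/t5-small-finetuned-billsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; real billsum documents are much longer and get truncated.
text = "summarize: The Act amends the Internal Revenue Code to extend the renewable energy credit..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```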
abrhaleitela/TigXLNet
abrhaleitela
2021-09-21T08:06:12Z
2
1
transformers
[ "transformers", "pytorch", "xlnet", "arxiv:2006.07698", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# Transferring Monolingual Model to Low-Resource Language: The Case Of Tigrinya: ## Proposed Method: <img src="data/proposed.png" height = "330" width ="760" > The proposed method transfers a mono-lingual Transformer model into new target language at lexical level by learning new token embeddings. All implementation in this repo uses XLNet as a source Transformer model, however, other Transformer models can also be used similarly. ## Main files: All files are IPython Notebook files which can be excuted simply in Google Colab. - train.ipynb : Fine-tunes XLNet (mono-lingual transformer) on new target language (Tigrinya) sentiment analysis dataset. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bSSrKE-TSphUyrNB2UWhFI-Bkoz0a5l0?usp=sharing) - test.ipynb : Evaluates the fine-tuned model on test data. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/17R1lvRjxILVNk971vzZT79o2OodwaNIX?usp=sharing) - token_embeddings.ipynb : Trains a word2vec token embeddings for Tigrinya language. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hCtetAllAjBw28EVQkJFpiKdFtXmuxV7?usp=sharing) - process_Tigrinya_comments.ipynb : Extracts Tigrinya comments from mixed language contents. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1-ndLlBV-iLZNBW3Z8OfKAqUUCjvGbdZU?usp=sharing) - extract_YouTube_comments.ipynb : Downloads available comments from a YouTube channel ID. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1b7G85wHKe18y45JIDtvDJdO5dOkRmDdp?usp=sharing) - auto_labelling.ipynb : Automatically labels Tigrinya comments in to positive or negative sentiments based on [Emoji's sentiment](http://kt.ijs.si/data/Emoji_sentiment_ranking/). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wnZf7CBBCIr966vRUITlxKCrANsMPpV7?usp=sharing) ## Tigrinya Tokenizer: A [sentencepiece](https://github.com/google/sentencepiece) based tokenizer for Tigrinya has been released to the public and can be accessed as in the following: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("abryee/TigXLNet") tokenizer.tokenize("ዋዋዋው እዛ ፍሊም ካብተን ዘድንቀን ሓንቲ ኢያ ሞ ብጣዕሚ ኢና ነመስግን ሓንቲ ክብላ ደልየ ዘሎኹ ሓደራኣኹም ኣብ ጊዜኹም ተረክቡ") ## TigXLNet: A new general purpose transformer model for low-resource language Tigrinya is also released to the public and be accessed as in the following: from transformers import AutoConfig, AutoModel config = AutoConfig.from_pretrained("abryee/TigXLNet") config.d_head = 64 model = AutoModel.from_pretrained("abryee/TigXLNet", config=config) ## Evaluation: The proposed method is evaluated using two datasets: - A newly created sentiment analysis dataset for low-resource language (Tigriyna). 
<table> <tr> <td> <table> <thead> <tr> <th><sub>Models</sub></th> <th><sub>Configuration</sub></th> <th><sub>F1-Score</sub></th> </tr> </thead> <tbody> <tr> <td rowspan=3><sub>BERT</sub></td> <td rowspan=1><sub>+Frozen BERT weights</sub></td> <td><sub>54.91</sub></td> </tr> <tr> <td rowspan=1><sub>+Random embeddings</sub></td> <td><sub>74.26</sub></td> </tr> <tr> <td rowspan=1><sub>+Frozen token embeddings</sub></td> <td><sub>76.35</sub></td> </tr> <tr> <td rowspan=3><sub>mBERT</sub></td> <td rowspan=1><sub>+Frozen mBERT weights</sub></td> <td><sub>57.32</sub></td> </tr> <tr> <td rowspan=1><sub>+Random embeddings</sub></td> <td><sub>76.01</sub></td> </tr> <tr> <td rowspan=1><sub>+Frozen token embeddings</sub></td> <td><sub>77.51</sub></td> </tr> <tr> <td rowspan=3><sub>XLNet</sub></td> <td rowspan=1><sub>+Frozen XLNet weights</sub></td> <td><strong><sub>68.14</sub></strong></td> </tr> <tr> <td rowspan=1><sub>+Random embeddings</sub></td> <td><strong><sub>77.83</sub></strong></td> </tr> <tr> <td rowspan=1><sub>+Frozen token embeddings</sub></td> <td><strong><sub>81.62</sub></strong></td> </tr> </tbody> </table> </td> <td><img src="data/effect_of_dataset_size.png" alt="3" width = 480px height = 280px></td> </tr> </table> - Cross-lingual Sentiment dataset ([CLS](https://zenodo.org/record/3251672#.Xs65VzozbIU)). <table> <thead> <tr> <th rowspan=2><sub>Models</sub></th> <th rowspan=1 colspan=3><sub>English</sub></th> <th rowspan=1 colspan=3><sub>German</sub></th> <th rowspan=1 colspan=3><sub>French</sub></th> <th rowspan=1 colspan=3><sub>Japanese</sub></th> <th rowspan=2><sub>Average</sub></th> </tr> <tr> <th colspan=1><sub>Books</sub></th> <th colspan=1><sub>DVD</sub></th> <th colspan=1><sub>Music</sub></th> <th colspan=1><sub>Books</sub></th> <th colspan=1><sub>DVD</sub></th> <th colspan=1><sub>Music</sub></th> <th colspan=1><sub>Books</sub></th> <th colspan=1><sub>DVD</sub></th> <th colspan=1><sub>Music</sub></th> <th colspan=1><sub>Books</sub></th> <th colspan=1><sub>DVD</sub></th> <th colspan=1><sub>Music</sub></th> </tr> </thead> <tbody> <tr> <td colspan=1><sub>XLNet</sub></td> <td colspan=1><sub><strong>92.90</strong></sub></td> <td colspan=1><sub><strong>93.31</strong></sub></td> <td colspan=1><sub><strong>92.02</strong></sub></td> <td colspan=1><sub>85.23</sub></td> <td colspan=1><sub>83.30</sub></td> <td colspan=1><sub>83.89</sub></td> <td colspan=1><sub>73.05</sub></td> <td colspan=1><sub>69.80</sub></td> <td colspan=1><sub>70.12</sub></td> <td colspan=1><sub>83.20</sub></td> <td colspan=1><sub><strong>86.07</strong></sub></td> <td colspan=1><sub>85.24</sub></td> <td colspan=1><sub>83.08</sub></td> </tr> <tr> <td colspan=1><sub>mBERT</sub></td> <td colspan=1><sub>92.78</sub></td> <td colspan=1><sub>90.30</sub></td> <td colspan=1><sub>91.88</sub></td> <td colspan=1><sub><strong>88.65</strong></sub></td> <td colspan=1><sub><strong>85.85</strong></sub></td> <td colspan=1><sub><strong>90.38</strong></sub></td> <td colspan=1><sub><strong>91.09</strong></sub></td> <td colspan=1><sub><strong>88.57</strong></sub></td> <td colspan=1><sub><strong>93.67</strong></sub></td> <td colspan=1><sub><strong>84.35</strong></sub></td> <td colspan=1><sub>81.77</sub></td> <td colspan=1><sub><strong>87.53</strong></sub></td> <td colspan=1><sub><strong>88.90</strong></sub></td> </tr> </tbody> </table> ## Dataset used for this paper: We have constructed new sentiment analysis dataset for Tigrinya language and it can be found in the zip file (Tigrinya Sentiment Analysis Dataset) ## Citing our paper: Our paper 
can be accessed on arXiv via this [link](https://arxiv.org/pdf/2006.07698.pdf); please consider citing our work. @misc{tela2020transferring, title={Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya}, author={Abrhalei Tela and Abraham Woubie and Ville Hautamaki}, year={2020}, eprint={2006.07698}, archivePrefix={arXiv}, primaryClass={cs.CL} } ## Questions, comments, and feedback are appreciated and can be forwarded to the following email: [email protected]
ramsrigouthamg/t5-large-paraphraser-diverse-high-quality
ramsrigouthamg
2021-09-21T05:21:49Z
602
26
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
Blog post with more details as well as easy to use Google Colab link: https://towardsdatascience.com/high-quality-sentence-paraphraser-using-transformers-in-nlp-c33f4482856f !pip install transformers==4.10.2 !pip install sentencepiece==0.1.96 ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("ramsrigouthamg/t5-large-paraphraser-diverse-high-quality") tokenizer = AutoTokenizer.from_pretrained("ramsrigouthamg/t5-large-paraphraser-diverse-high-quality") import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print ("device ",device) model = model.to(device) # Beam Search context = "Once, a group of frogs were roaming around the forest in search of water." text = "paraphrase: "+context + " </s>" encoding = tokenizer.encode_plus(text,max_length =128, padding=True, return_tensors="pt") input_ids,attention_mask = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) model.eval() beam_outputs = model.generate( input_ids=input_ids,attention_mask=attention_mask, max_length=128, early_stopping=True, num_beams=15, num_return_sequences=3 ) print ("\n\n") print ("Original: ",context) for beam_output in beam_outputs: sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True) print (sent) ``` **Output from the above code** ``` Original: Once, a group of frogs were roaming around the forest in search of water. paraphrasedoutput: A herd of frogs were wandering around the woods in search of water. paraphrasedoutput: A herd of frogs was wandering around the woods in search of water. paraphrasedoutput: A herd of frogs were wandering around the forest in search of water at one time. ```
gchhablani/bert-large-cased-finetuned-cola
gchhablani
2021-09-21T04:06:19Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-large-cased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5957317644481708 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-finetuned-cola This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.8385 - Matthews Correlation: 0.5957 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5533 | 1.0 | 2138 | 0.7943 | 0.4439 | | 0.5004 | 2.0 | 4276 | 0.7272 | 0.5678 | | 0.2865 | 3.0 | 6414 | 0.8385 | 0.5957 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
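An inference sketch for the CoLA task (linguistic acceptability) is below. Whether the saved config maps the two classes to readable names is an assumption; if not, the pipeline returns `LABEL_0` (unacceptable) and `LABEL_1` (acceptable) per the usual GLUE convention.

```python
from transformers import pipeline

cola = pipeline("text-classification", model="gchhablani/bert-large-cased-finetuned-cola")

# One acceptable and one unacceptable sentence, for illustration only.
print(cola("The boy quickly ran across the finish line."))
print(cola("The boy quickly ran across finish line the."))
```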
fabriceyhc/bert-base-uncased-dbpedia_14
fabriceyhc
2021-09-21T00:56:12Z
52
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:dbpedia_14", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - sibyl datasets: - dbpedia_14 metrics: - accuracy model-index: - name: bert-base-uncased-dbpedia_14 results: - task: name: Text Classification type: text-classification dataset: name: dbpedia_14 type: dbpedia_14 args: dbpedia_14 metrics: - name: Accuracy type: accuracy value: 0.9902857142857143 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-dbpedia_14 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the dbpedia_14 dataset. It achieves the following results on the evaluation set: - Loss: 0.0547 - Accuracy: 0.9903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 34650 - training_steps: 346500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.7757 | 0.03 | 2000 | 0.2732 | 0.9880 | | 0.1002 | 0.06 | 4000 | 0.0620 | 0.9891 | | 0.0547 | 0.09 | 6000 | 0.0723 | 0.9879 | | 0.0558 | 0.12 | 8000 | 0.0678 | 0.9875 | | 0.0534 | 0.14 | 10000 | 0.0554 | 0.9896 | | 0.0632 | 0.17 | 12000 | 0.0670 | 0.9888 | | 0.0612 | 0.2 | 14000 | 0.0733 | 0.9873 | | 0.0667 | 0.23 | 16000 | 0.0623 | 0.9896 | | 0.0636 | 0.26 | 18000 | 0.0836 | 0.9868 | | 0.0705 | 0.29 | 20000 | 0.0776 | 0.9855 | | 0.0726 | 0.32 | 22000 | 0.0805 | 0.9861 | | 0.0778 | 0.35 | 24000 | 0.0713 | 0.9870 | | 0.0713 | 0.38 | 26000 | 0.1277 | 0.9805 | | 0.0965 | 0.4 | 28000 | 0.0810 | 0.9855 | | 0.0881 | 0.43 | 30000 | 0.0910 | 0.985 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
fabriceyhc/bert-base-uncased-yahoo_answers_topics
fabriceyhc
2021-09-21T00:54:22Z
13
3
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:yahoo_answers_topics", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - sibyl datasets: - yahoo_answers_topics metrics: - accuracy model-index: - name: bert-base-uncased-yahoo_answers_topics results: - task: name: Text Classification type: text-classification dataset: name: yahoo_answers_topics type: yahoo_answers_topics args: yahoo_answers_topics metrics: - name: Accuracy type: accuracy value: 0.7499166666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-yahoo_answers_topics This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yahoo_answers_topics dataset. It achieves the following results on the evaluation set: - Loss: 0.8092 - Accuracy: 0.7499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 86625 - training_steps: 866250 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.162 | 0.01 | 2000 | 1.7444 | 0.5681 | | 1.3126 | 0.02 | 4000 | 1.0081 | 0.7054 | | 0.9592 | 0.03 | 6000 | 0.9021 | 0.7234 | | 0.8903 | 0.05 | 8000 | 0.8827 | 0.7276 | | 0.8685 | 0.06 | 10000 | 0.8540 | 0.7341 | | 0.8422 | 0.07 | 12000 | 0.8547 | 0.7365 | | 0.8535 | 0.08 | 14000 | 0.8264 | 0.7372 | | 0.8178 | 0.09 | 16000 | 0.8331 | 0.7389 | | 0.8325 | 0.1 | 18000 | 0.8242 | 0.7411 | | 0.8181 | 0.12 | 20000 | 0.8356 | 0.7437 | | 0.8171 | 0.13 | 22000 | 0.8090 | 0.7451 | | 0.8092 | 0.14 | 24000 | 0.8469 | 0.7392 | | 0.8057 | 0.15 | 26000 | 0.8185 | 0.7478 | | 0.8085 | 0.16 | 28000 | 0.8090 | 0.7467 | | 0.8229 | 0.17 | 30000 | 0.8225 | 0.7417 | | 0.8151 | 0.18 | 32000 | 0.8262 | 0.7419 | | 0.81 | 0.2 | 34000 | 0.8149 | 0.7383 | | 0.8073 | 0.21 | 36000 | 0.8225 | 0.7441 | | 0.816 | 0.22 | 38000 | 0.8037 | 0.744 | | 0.8217 | 0.23 | 40000 | 0.8409 | 0.743 | | 0.82 | 0.24 | 42000 | 0.8286 | 0.7385 | | 0.8101 | 0.25 | 44000 | 0.8282 | 0.7413 | | 0.8254 | 0.27 | 46000 | 0.8170 | 0.7414 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
fabriceyhc/bert-base-uncased-ag_news
fabriceyhc
2021-09-21T00:54:07Z
541
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:ag_news", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - sibyl datasets: - ag_news metrics: - accuracy model-index: - name: bert-base-uncased-ag_news results: - task: name: Text Classification type: text-classification dataset: name: ag_news type: ag_news args: default metrics: - name: Accuracy type: accuracy value: 0.9375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-ag_news This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.3284 - Accuracy: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 7425 - training_steps: 74250 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5773 | 0.13 | 2000 | 0.3627 | 0.8875 | | 0.3101 | 0.27 | 4000 | 0.2938 | 0.9208 | | 0.3076 | 0.4 | 6000 | 0.3114 | 0.9092 | | 0.3114 | 0.54 | 8000 | 0.4545 | 0.9008 | | 0.3154 | 0.67 | 10000 | 0.3875 | 0.9083 | | 0.3095 | 0.81 | 12000 | 0.3390 | 0.9142 | | 0.2948 | 0.94 | 14000 | 0.3341 | 0.9133 | | 0.2557 | 1.08 | 16000 | 0.4573 | 0.9092 | | 0.258 | 1.21 | 18000 | 0.3356 | 0.9217 | | 0.2455 | 1.35 | 20000 | 0.3348 | 0.9283 | | 0.2361 | 1.48 | 22000 | 0.3218 | 0.93 | | 0.254 | 1.62 | 24000 | 0.3814 | 0.9033 | | 0.2528 | 1.75 | 26000 | 0.3628 | 0.9158 | | 0.2282 | 1.89 | 28000 | 0.3302 | 0.9308 | | 0.224 | 2.02 | 30000 | 0.3967 | 0.9225 | | 0.174 | 2.15 | 32000 | 0.3669 | 0.9333 | | 0.1848 | 2.29 | 34000 | 0.3435 | 0.9283 | | 0.19 | 2.42 | 36000 | 0.3552 | 0.93 | | 0.1865 | 2.56 | 38000 | 0.3996 | 0.9258 | | 0.1877 | 2.69 | 40000 | 0.3749 | 0.9258 | | 0.1951 | 2.83 | 42000 | 0.3963 | 0.9258 | | 0.1702 | 2.96 | 44000 | 0.3655 | 0.9317 | | 0.1488 | 3.1 | 46000 | 0.3942 | 0.9292 | | 0.1231 | 3.23 | 48000 | 0.3998 | 0.9267 | | 0.1319 | 3.37 | 50000 | 0.4292 | 0.9242 | | 0.1334 | 3.5 | 52000 | 0.4904 | 0.9192 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
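A minimal inference sketch, assuming the four AG News classes (World, Sports, Business, Sci/Tech); if the saved config keeps the default names, the pipeline returns `LABEL_0`..`LABEL_3` instead.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-ag_news")

# Illustrative headline-style input.
print(classifier("Wall Street rallies as tech stocks post strong quarterly earnings."))
```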
deval/bert-base-NER-finetuned-ner
deval
2021-09-20T16:15:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:x_glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - x_glue metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-NER-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: x_glue type: x_glue args: ner metrics: - name: Precision type: precision value: 0.2273838630806846 - name: Recall type: recall value: 0.11185727172496743 - name: F1 type: f1 value: 0.14994961370507223 - name: Accuracy type: accuracy value: 0.8485324947589099 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-NER-finetuned-ner This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the x_glue dataset. It achieves the following results on the evaluation set: - Loss: 1.4380 - Precision: 0.2274 - Recall: 0.1119 - F1: 0.1499 - Accuracy: 0.8485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0822 | 1.0 | 878 | 1.1648 | 0.2068 | 0.1101 | 0.1437 | 0.8471 | | 0.0102 | 2.0 | 1756 | 1.2697 | 0.2073 | 0.1110 | 0.1445 | 0.8447 | | 0.0049 | 3.0 | 2634 | 1.3945 | 0.2006 | 0.1073 | 0.1399 | 0.8368 | | 0.0025 | 4.0 | 3512 | 1.3994 | 0.2243 | 0.1126 | 0.1499 | 0.8501 | | 0.0011 | 5.0 | 4390 | 1.4380 | 0.2274 | 0.1119 | 0.1499 | 0.8485 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
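A token-classification sketch is shown below. `aggregation_strategy="simple"` merges word pieces into entity spans; the exact tag set after fine-tuning on the x_glue NER split is not listed in the card, so the entity labels in the output are whatever the saved config defines.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="deval/bert-base-NER-finetuned-ner",
    aggregation_strategy="simple",
)

# Illustrative sentence; note the low precision/recall reported above.
print(ner("Angela Merkel met Emmanuel Macron in Berlin on Tuesday."))
```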
dominiqueblok/roberta-base-finetuned-ner
dominiqueblok
2021-09-20T16:02:48Z
187
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-base-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9529566113766282 - name: Recall type: recall value: 0.9604268983755194 - name: F1 type: f1 value: 0.9566771720212616 - name: Accuracy type: accuracy value: 0.988938664048357 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0492 - Precision: 0.9530 - Recall: 0.9604 - F1: 0.9567 - Accuracy: 0.9889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2031 | 1.0 | 878 | 0.0560 | 0.9381 | 0.9445 | 0.9413 | 0.9858 | | 0.0446 | 2.0 | 1756 | 0.0480 | 0.9510 | 0.9578 | 0.9544 | 0.9887 | | 0.0263 | 3.0 | 2634 | 0.0492 | 0.9530 | 0.9604 | 0.9567 | 0.9889 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
matthias-wright/stylegan2
matthias-wright
2021-09-20T13:28:40Z
0
1
null
[ "arxiv:1912.04958", "region:us" ]
null
2022-03-02T23:29:05Z
# Analyzing and Improving the Image Quality of StyleGAN <b>Paper:</b> <a href="https://arxiv.org/abs/1912.04958">https://arxiv.org/abs/1912.04958</a> # About These are the pretrained weights for [this](https://github.com/matthias-wright/flaxmodels/tree/main/flaxmodels/stylegan2) StyleGAN2 implementation in Jax/Flax. The weights are taken from [this](https://github.com/NVlabs/stylegan2) and [this](https://github.com/NVlabs/stylegan2-ada) repository. # Documentation [Here](https://github.com/matthias-wright/flaxmodels/blob/main/docs/Documentation.md#1-checkpoints) is a documentation that explains the preprocessing steps as well as the format of the pretrained weights.
huggingartists/machine-gun-kelly
huggingartists
2021-09-20T12:50:31Z
4
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/machine-gun-kelly", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/machine-gun-kelly tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/bee1868cba78bf4b170886b3368c4ae8.640x640x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Machine Gun Kelly</div> <a href="https://genius.com/artists/machine-gun-kelly"> <div style="text-align: center; font-size: 14px;">@machine-gun-kelly</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Machine Gun Kelly. Dataset is available [here](https://huggingface.co/datasets/huggingartists/machine-gun-kelly). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/machine-gun-kelly") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/33f2ce6m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Machine Gun Kelly's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2bbn6fvb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2bbn6fvb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/machine-gun-kelly') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/machine-gun-kelly") model = AutoModelWithLMHead.from_pretrained("huggingartists/machine-gun-kelly") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
gniemiec/t5-small-finetuned-xsum
gniemiec
2021-09-20T11:36:55Z
35
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 23.0533 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.7967 - Rouge1: 23.0533 - Rouge2: 3.912 - Rougel: 17.8534 - Rougelsum: 17.8581 - Gen Len: 18.6878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.0574 | 1.0 | 1276 | 2.7967 | 23.0533 | 3.912 | 17.8534 | 17.8581 | 18.6878 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/fnet-base-finetuned-stsb
gchhablani
2021-09-20T09:09:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - spearmanr model-index: - name: fnet-base-finetuned-stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8219397497728022 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-stsb This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.7894 - Pearson: 0.8256 - Spearmanr: 0.8219 - Combined Score: 0.8238 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path google/fnet-base \\n --task_name stsb \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 3 \\n --output_dir fnet-base-finetuned-stsb \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Combined Score | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:--------------:|:---------------:|:-------:|:---------:| | 1.5473 | 1.0 | 360 | 0.8120 | 0.7751 | 0.8115 | 0.8125 | | 0.6954 | 2.0 | 720 | 0.8145 | 0.8717 | 0.8160 | 0.8130 | | 0.4828 | 3.0 | 1080 | 0.8238 | 0.7894 | 0.8256 | 0.8219 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/bert-base-cased-finetuned-sst2
gchhablani
2021-09-20T09:09:06Z
10,484
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy model-index: - name: bert-base-cased-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9231651376146789 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-sst2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3649 - Accuracy: 0.9232 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path bert-base-cased \\n --task_name sst2 \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 3 \\n --output_dir bert-base-cased-finetuned-sst2 \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 0.233 | 1.0 | 4210 | 0.9174 | 0.2841 | | 0.1261 | 2.0 | 8420 | 0.9278 | 0.3310 | | 0.0768 | 3.0 | 12630 | 0.9232 | 0.3649 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
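An inference sketch for SST-2 sentiment classification follows; the label names depend on the saved config and may appear as `LABEL_0`/`LABEL_1` rather than negative/positive, which is an assumption here.

```python
from transformers import pipeline

sst2 = pipeline("text-classification", model="gchhablani/bert-base-cased-finetuned-sst2")

# One clearly positive and one clearly negative review snippet, for illustration.
print(sst2("A gripping, beautifully shot film."))
print(sst2("The plot was a complete mess and the acting was flat."))
```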
gchhablani/fnet-base-finetuned-qqp
gchhablani
2021-09-20T09:08:34Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy - f1 model-index: - name: fnet-base-finetuned-qqp results: - task: name: Text Classification type: text-classification dataset: name: GLUE QQP type: glue args: qqp metrics: - name: Accuracy type: accuracy value: 0.8847390551570616 - name: F1 type: f1 value: 0.8466197090382463 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-qqp This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.3686 - Accuracy: 0.8847 - F1: 0.8466 - Combined Score: 0.8657 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path google/fnet-base \\n --task_name qqp \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 3 \\n --output_dir fnet-base-finetuned-qqp \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.3484 | 1.0 | 22741 | 0.3014 | 0.8676 | 0.8297 | 0.8487 | | 0.2387 | 2.0 | 45482 | 0.3011 | 0.8801 | 0.8429 | 0.8615 | | 0.1739 | 3.0 | 68223 | 0.3686 | 0.8847 | 0.8466 | 0.8657 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/fnet-base-finetuned-mnli
gchhablani
2021-09-20T09:08:10Z
14
1
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy model-index: - name: fnet-base-finetuned-mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.7674938974776241 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-mnli This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6443 - Accuracy: 0.7675 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path google/fnet-base \\n --task_name mnli \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 3 \\n --output_dir fnet-base-finetuned-mnli \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7143 | 1.0 | 24544 | 0.6169 | 0.7504 | | 0.5407 | 2.0 | 49088 | 0.6218 | 0.7627 | | 0.4178 | 3.0 | 73632 | 0.6564 | 0.7658 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/bert-base-cased-finetuned-mnli
gchhablani
2021-09-20T09:07:21Z
12
2
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy model-index: - name: bert-base-cased-finetuned-mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.8410292921074044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-mnli This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5721 - Accuracy: 0.8410 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path bert-base-cased \\n --task_name mnli \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 3 \\n --output_dir bert-base-cased-finetuned-mnli \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5323 | 1.0 | 24544 | 0.4431 | 0.8302 | | 0.3447 | 2.0 | 49088 | 0.4725 | 0.8353 | | 0.2267 | 3.0 | 73632 | 0.5887 | 0.8368 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/bert-base-cased-finetuned-wnli
gchhablani
2021-09-20T09:07:04Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy model-index: - name: bert-base-cased-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.4647887323943662 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wnli This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6996 - Accuracy: 0.4648 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py --model_name_or_path bert-base-cased --task_name wnli --do_train --do_eval --max_seq_length 512 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 5 --output_dir bert-base-cased-finetuned-wnli --push_to_hub --hub_strategy all_checkpoints --logging_strategy epoch --save_strategy epoch --evaluation_strategy epoch ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7299 | 1.0 | 40 | 0.6923 | 0.5634 | | 0.6982 | 2.0 | 80 | 0.7027 | 0.3803 | | 0.6972 | 3.0 | 120 | 0.7005 | 0.4507 | | 0.6992 | 4.0 | 160 | 0.6977 | 0.5352 | | 0.699 | 5.0 | 200 | 0.6996 | 0.4648 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/fnet-base-finetuned-mrpc
gchhablani
2021-09-20T09:06:55Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer - fnet-bert-base-comparison datasets: - glue metrics: - accuracy - f1 model-index: - name: fnet-base-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.7720588235294118 - name: F1 type: f1 value: 0.8502415458937198 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-mrpc This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.9653 - Accuracy: 0.7721 - F1: 0.8502 - Combined Score: 0.8112 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py --model_name_or_path google/fnet-base --task_name mrpc --do_train --do_eval --max_seq_length 512 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 5 --output_dir fnet-base-finetuned-mrpc --push_to_hub --hub_strategy all_checkpoints --logging_strategy epoch --save_strategy epoch --evaluation_strategy epoch ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.544 | 1.0 | 230 | 0.5272 | 0.7328 | 0.8300 | 0.7814 | | 0.4034 | 2.0 | 460 | 0.6211 | 0.7255 | 0.8298 | 0.7776 | | 0.2602 | 3.0 | 690 | 0.9110 | 0.7230 | 0.8306 | 0.7768 | | 0.1688 | 4.0 | 920 | 0.8640 | 0.7696 | 0.8489 | 0.8092 | | 0.0913 | 5.0 | 1150 | 0.9653 | 0.7721 | 0.8502 | 0.8112 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
JorgeSarry/est5base-simplify
JorgeSarry
2021-09-20T08:42:39Z
6
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- language: es --- This is a smaller version of the google/mt5-base model that keeps only the Spanish (and some English) embeddings, fine-tuned on 60k Spanish WikiEdits for sentence simplification. Prefix the input with "simplify:" to use it, as shown in the example below.
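A minimal usage sketch for this model; the Spanish example sentence and the generation settings (`max_length`, `num_beams`) are illustrative assumptions, not taken from the model card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "JorgeSarry/est5base-simplify"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Prepend the task prefix the model was trained with
text = "simplify: La adquisición de la vivienda se formalizó mediante un contrato de compraventa."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```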
huggingartists/i-dont-know-how-but-they-found-me
huggingartists
2021-09-20T07:59:50Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/i-dont-know-how-but-they-found-me", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/i-dont-know-how-but-they-found-me tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4683327bb3a8906b18e9af8207c36dc9.645x645x1.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">I DONT KNOW HOW BUT THEY FOUND ME</div> <a href="https://genius.com/artists/i-dont-know-how-but-they-found-me"> <div style="text-align: center; font-size: 14px;">@i-dont-know-how-but-they-found-me</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from I DONT KNOW HOW BUT THEY FOUND ME. Dataset is available [here](https://huggingface.co/datasets/huggingartists/i-dont-know-how-but-they-found-me). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/i-dont-know-how-but-they-found-me") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1j7uofwh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on I DONT KNOW HOW BUT THEY FOUND ME's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1abhthz2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1abhthz2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/i-dont-know-how-but-they-found-me') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/i-dont-know-how-but-they-found-me") model = AutoModelWithLMHead.from_pretrained("huggingartists/i-dont-know-how-but-they-found-me") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/upsahl
huggingartists
2021-09-20T07:35:35Z
3
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/upsahl", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/upsahl tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/e0fa9b5bdd037ab75031dd7372d05cd6.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">UPSAHL</div> <a href="https://genius.com/artists/upsahl"> <div style="text-align: center; font-size: 14px;">@upsahl</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from UPSAHL. Dataset is available [here](https://huggingface.co/datasets/huggingartists/upsahl). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/upsahl") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2o3af3ts/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on UPSAHL's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2lr9eqkt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2lr9eqkt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/upsahl') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/upsahl") model = AutoModelWithLMHead.from_pretrained("huggingartists/upsahl") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
slauw87/bart_summarisation
slauw87
2021-09-20T05:27:36Z
5,860
59
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "sagemaker", "summarization", "en", "dataset:samsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: en tags: - sagemaker - bart - summarization license: apache-2.0 datasets: - samsum model-index: - name: bart-large-cnn-samsum results: - task: name: Abstractive Text Summarization type: abstractive-text-summarization dataset: name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization" type: samsum metrics: - name: Validation ROUGE-1 type: rouge-1 value: 43.2111 - name: Validation ROUGE-2 type: rouge-2 value: 22.3519 - name: Validation ROUGE-L type: rouge-l value: 33.315 - name: Test ROUGE-1 type: rouge-1 value: 41.8283 - name: Test ROUGE-2 type: rouge-2 value: 20.9857 - name: Test ROUGE-L type: rouge-l value: 32.3602 widget: - text: | Sugi: I am tired of everything in my life. Tommy: What? How happy you life is! I do envy you. Sugi: You don't know that I have been over-protected by my mother these years. I am really about to leave the family and spread my wings. Tommy: Maybe you are right. --- ## `bart-large-cnn-samsum` This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container. For more information look at: - [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html) - [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker) - [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) - [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) - [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) ## Hyperparameters { "dataset_name": "samsum", "do_eval": true, "do_predict": true, "do_train": true, "fp16": true, "learning_rate": 5e-05, "model_name_or_path": "facebook/bart-large-cnn", "num_train_epochs": 3, "output_dir": "/opt/ml/model", "per_device_eval_batch_size": 4, "per_device_train_batch_size": 4, "predict_with_generate": true, "seed": 7 } ## Usage
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="slauw87/bart_summarisation")
conversation = '''Sugi: I am tired of everything in my life.
Tommy: What? How happy you life is! I do envy you.
Sugi: You don't know that I have been over-protected by my mother these years. I am really about to leave the family and spread my wings.
Tommy: Maybe you are right.
'''
print(summarizer(conversation))
```
## Results | key | value | | --- | ----- | | eval_rouge1 | 43.2111 | | eval_rouge2 | 22.3519 | | eval_rougeL | 33.3153 | | eval_rougeLsum | 40.0527 | | predict_rouge1 | 41.8283 | | predict_rouge2 | 20.9857 | | predict_rougeL | 32.3602 | | predict_rougeLsum | 38.7316 |
eugenesiow/drln
eugenesiow
2021-09-20T01:00:50Z
759
4
transformers
[ "transformers", "DRLN", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:1906.12021", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Densely Residual Laplacian Super-Resolution (DRLN) DRLN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Densely Residual Laplacian Super-resolution](https://arxiv.org/abs/1906.12021) by Anwar et al. (2020) and first released in [this repository](https://github.com/saeed-anwar/DRLN). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/drln_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. 
### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import DrlnModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = DrlnModel.from_pretrained('eugenesiow/drln', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. 
The training code is provided below: ```python from super_image import Trainer, TrainingArguments, DrlnModel, DrlnConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = DrlnConfig( scale=4, # train a model to upscale 4x ) model = DrlnModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |drln | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**38.22/0.9614** | |Set5 |3x |30.39/0.8678 |**35.31/0.9423** | |Set5 |4x |28.42/0.8101 |**32.55/0.899** | |Set14 |2x |30.22/0.8683 |**34.01/0.9211** | |Set14 |3x |27.53/0.7737 |**31.21/0.8619** | |Set14 |4x |25.99/0.7023 |**28.96/0.7901** | |BSD100 |2x |29.55/0.8425 |**33.93/0.9269** | |BSD100 |3x |27.20/0.7382 |**29.77/0.8223** | |BSD100 |4x |25.96/0.6672 |**28.65/0.7692** | |Urban100 |2x |26.66/0.8408 |**32.82/0.934** | |Urban100 |3x | |**29.79/0.8825** | |Urban100 |4x |23.14/0.6573 |**26.56/0.7998** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/drln_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @misc{anwar2019densely, title={Densely Residual Laplacian Super-Resolution}, author={Saeed Anwar and Nick Barnes}, year={2019}, eprint={1906.12021}, archivePrefix={arXiv}, primaryClass={eess.IV} } ```
cambridgeltl/mirror-roberta-base-sentence
cambridgeltl
2021-09-19T22:48:01Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "arxiv:2104.08027", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: en tags: - sentence-embeddings - sentence-similarity --- ### cambridgeltl/mirror-roberta-base-sentence An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf). The model is trained with unlabelled raw sentences, using [roberta-base](https://huggingface.co/roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input. Note that the model does not replicate the exact numbers in the paper, since the reported numbers in the paper are the average of three runs. ### Citation ```bibtex @inproceedings{ liu2021fast, title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders}, author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel}, booktitle={EMNLP 2021}, year={2021} } ```
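A minimal sketch of extracting the `[CLS]` (pre-pooler) representation with the Transformers library; the example sentences and the cosine-similarity comparison are illustrative additions, not taken from the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "cambridgeltl/mirror-roberta-base-sentence"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["A cat sat on the mat.", "A dog played in the garden."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# [CLS] is the first token of last_hidden_state, taken before the pooler
embeddings = outputs.last_hidden_state[:, 0, :]

# cosine similarity between the two sentence embeddings
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```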
huggingartists/egor-kreed
huggingartists
2021-09-19T20:00:22Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/egor-kreed", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/egor-kreed tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/f52808edb2078f52ddab162623f0c6e3.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ЕГОР КРИД (EGOR KREED)</div> <a href="https://genius.com/artists/egor-kreed"> <div style="text-align: center; font-size: 14px;">@egor-kreed</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from ЕГОР КРИД (EGOR KREED). Dataset is available [here](https://huggingface.co/datasets/huggingartists/egor-kreed). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/egor-kreed") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3l7nf6hj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on ЕГОР КРИД (EGOR KREED)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1mtfkshl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1mtfkshl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/egor-kreed') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/egor-kreed") model = AutoModelWithLMHead.from_pretrained("huggingartists/egor-kreed") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingartists/red-hot-chili-peppers
huggingartists
2021-09-19T18:27:13Z
4
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/red-hot-chili-peppers", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/red-hot-chili-peppers tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/2879181f9522394ad29c16478421aa77.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Red Hot Chili Peppers</div> <a href="https://genius.com/artists/red-hot-chili-peppers"> <div style="text-align: center; font-size: 14px;">@red-hot-chili-peppers</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Red Hot Chili Peppers. Dataset is available [here](https://huggingface.co/datasets/huggingartists/red-hot-chili-peppers). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/red-hot-chili-peppers") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2spp06qm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Red Hot Chili Peppers's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/opiwx19q) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/opiwx19q/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/red-hot-chili-peppers') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/red-hot-chili-peppers") model = AutoModelWithLMHead.from_pretrained("huggingartists/red-hot-chili-peppers") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
FreeSpinsCoinMaster/dsdqfdqsfsf
FreeSpinsCoinMaster
2021-09-18T19:39:17Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/ro-bux_nc-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-onlyfans-hack-2021_oq-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-v-bucks-g1_zo-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-tiktok-fans-generator_sg-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/spins.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/pubg.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/google.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/7frtg.pdf
huggingtweets/spdustin
huggingtweets
2021-09-18T17:45:07Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/spdustin/1631987071347/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1322384879355596800/TI3cvQUL_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">➖Dustin Miller➖</div> <div style="text-align: center; font-size: 14px;">@spdustin</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ➖Dustin Miller➖. | Data | ➖Dustin Miller➖ | | --- | --- | | Tweets downloaded | 3248 | | Retweets | 389 | | Short tweets | 185 | | Tweets kept | 2674 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35io6xkx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spdustin's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tasqdxp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tasqdxp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/spdustin') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ai_hexcrawl-gptmicrofic
huggingtweets
2021-09-18T03:18:36Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ai_hexcrawl-gptmicrofic/1631934945678/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391882949650440200/lmEKl2ZQ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1261895681561804800/r6vOZGoH_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">AI Hexcrawl & GPT2-Microfic</div> <div style="text-align: center; font-size: 14px;">@ai_hexcrawl-gptmicrofic</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from AI Hexcrawl & GPT2-Microfic. | Data | AI Hexcrawl | GPT2-Microfic | | --- | --- | --- | | Tweets downloaded | 737 | 1127 | | Retweets | 26 | 9 | | Short tweets | 1 | 9 | | Tweets kept | 710 | 1109 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cmbpada/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_hexcrawl-gptmicrofic's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5g9tts1o) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5g9tts1o/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ai_hexcrawl-gptmicrofic') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ai_hexcrawl-dril_gpt2-drilbot_neo
huggingtweets
2021-09-18T02:30:19Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/ai_hexcrawl-dril_gpt2-drilbot_neo/1631932214962/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1386749605216407555/QIJeyWfE_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391882949650440200/lmEKl2ZQ_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wintbot_neo & wint but Al & AI Hexcrawl</div> <div style="text-align: center; font-size: 14px;">@ai_hexcrawl-dril_gpt2-drilbot_neo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wintbot_neo & wint but Al & AI Hexcrawl. | Data | wintbot_neo | wint but Al | AI Hexcrawl | | --- | --- | --- | --- | | Tweets downloaded | 3207 | 3198 | 737 | | Retweets | 268 | 41 | 26 | | Short tweets | 272 | 49 | 1 | | Tweets kept | 2667 | 3108 | 710 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2g9pfbo8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ai_hexcrawl-dril_gpt2-drilbot_neo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/226pt34g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/226pt34g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ai_hexcrawl-dril_gpt2-drilbot_neo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
indolem/indobertweet-base-uncased
indolem
2021-09-18T01:24:17Z
118,037
11
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "Twitter", "id", "arxiv:2109.04607", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - id tags: - Twitter license: apache-2.0 datasets: - Twitter 2021 widget: - text: "guweehh udh ga' paham lg sm [MASK]" --- # IndoBERTweet 🐦 ## 1. Paper Fajri Koto, Jey Han Lau, and Timothy Baldwin. [_IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization_](https://arxiv.org/pdf/2109.04607.pdf). In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (**EMNLP 2021**), Dominican Republic (virtual). ## 2. About [IndoBERTweet](https://github.com/indolem/IndoBERTweet) is the first large-scale pretrained model for Indonesian Twitter that is trained by extending a monolingually trained Indonesian BERT model with additive domain-specific vocabulary. In this paper, we show that initializing domain-specific vocabulary with average-pooling of BERT subword embeddings is more efficient than pretraining from scratch, and more effective than initializing based on word2vec projections. ## 3. Pretraining Data We crawl Indonesian tweets over a 1-year period using the official Twitter API, from December 2019 to December 2020, with 60 keywords covering 4 main topics: economy, health, education, and government. We obtain in total of **409M word tokens**, two times larger than the training data used to pretrain [IndoBERT](https://aclanthology.org/2020.coling-main.66.pdf). Due to Twitter policy, this pretraining data will not be released to public. ## 4. How to use Load model and tokenizer (tested with transformers==3.5.1) ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("indolem/indobertweet-base-uncased") model = AutoModel.from_pretrained("indolem/indobertweet-base-uncased") ``` **Preprocessing Steps:** * lower-case all words * converting user mentions and URLs into @USER and HTTPURL, respectively * translating emoticons into text using the [emoji package](https://pypi.org/project/emoji/). ## 5. 
Results over 7 Indonesian Twitter Datasets <table> <col> <colgroup span="2"></colgroup> <colgroup span="2"></colgroup> <tr> <th rowspan="2">Models</td> <th colspan="2" scope="colgroup">Sentiment</th> <th colspan="1" scope="colgroup">Emotion</th> <th colspan="2" scope="colgroup">Hate Speech</th> <th colspan="2" scope="colgroup">NER</th> <th rowspan="2" scope="colgroup">Average</th> </tr> <tr> <th scope="col">IndoLEM</th> <th scope="col">SmSA</th> <th scope="col">EmoT</th> <th scope="col">HS1</th> <th scope="col">HS2</th> <th scope="col">Formal</th> <th scope="col">Informal</th> </tr> <tr> <td scope="row">mBERT</td> <td>76.6</td> <td>84.7</td> <td>67.5</td> <td>85.1</td> <td>75.1</td> <td>85.2</td> <td>83.2</td> <td>79.6</td> </tr> <tr> <td scope="row">malayBERT</td> <td>82.0</td> <td>84.1</td> <td>74.2</td> <td>85.0</td> <td>81.9</td> <td>81.9</td> <td>81.3</td> <td>81.5</td> </tr> <tr> <td scope="row">IndoBERT (Willie, et al., 2020)</td> <td>84.1</td> <td>88.7</td> <td>73.3</td> <td>86.8</td> <td>80.4</td> <td>86.3</td> <td>84.3</td> <td>83.4</td> </tr> <tr> <td scope="row">IndoBERT (Koto, et al., 2020)</td> <td>84.1</td> <td>87.9</td> <td>71.0</td> <td>86.4</td> <td>79.3</td> <td>88.0</td> <td><b>86.9</b></td> <td>83.4</td> </tr> <tr> <td scope="row">IndoBERTweet (1M steps from scratch)</td> <td>86.2</td> <td>90.4</td> <td>76.0</td> <td><b>88.8</b></td> <td><b>87.5</b></td> <td><b>88.1</b></td> <td>85.4</td> <td>86.1</td> </tr> <tr> <td scope="row">IndoBERT + Voc adaptation + 200k steps</td> <td><b>86.6</b></td> <td><b>92.7</b></td> <td><b>79.0</b></td> <td>88.4</td> <td>84.0</td> <td>87.7</td> <td><b>86.9</b></td> <td><b>86.5</b></td> </tr> </table> ## Citation If you use our work, please cite: ```bibtex @inproceedings{koto2021indobertweet, title={IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization}, author={Fajri Koto and Jey Han Lau and Timothy Baldwin}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)}, year={2021} } ```
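As a companion to the preprocessing steps listed in section 4 of the IndoBERTweet card above, here is a minimal sketch of that normalisation; the regular expressions are assumptions about how mentions and URLs are matched, not the authors' exact code:

```python
import re
import emoji  # pip install emoji


def preprocess_tweet(text: str) -> str:
    # lower-case all words
    text = text.lower()
    # convert user mentions and URLs into @USER and HTTPURL
    text = re.sub(r"@\w+", "@USER", text)
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    # translate emoticons into text using the emoji package
    text = emoji.demojize(text)
    return text


print(preprocess_tweet("Halo @someuser 😂 cek https://example.com"))
```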