Dataset schema (column, type, observed range):

| Column | Type | Range / distinct values |
|--------|------|-------------------------|
| modelId | string | length 4–112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | sequence | — |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | length 2–38 |
| config | null | — |
| id | string | length 4–112 |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0–38.5k |
| readme | string | length 0–186k |
sileod/roberta-base-mnli
86d5eb9545d2276806ce7290e670134a65e95e84
2022-05-31T10:08:10.000Z
[ "pytorch", "roberta", "text-classification", "dataset:multi_nli", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
sileod
null
sileod/roberta-base-mnli
516
1
transformers
2,300
--- license: mit tags: - generated_from_trainer datasets: - multi_nli metrics: - accuracy model-index: - name: roberta-base-mnli results: - task: name: Text Classification type: text-classification dataset: name: multi_nli type: multi_nli args: default metrics: - name: Accuracy type: accuracy value: 0.8719307182883341 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-mnli This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the multi_nli dataset. It achieves the following results on the evaluation set: - Loss: 0.4661 - Accuracy: 0.8719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4172 | 1.0 | 24544 | 0.4175 | 0.8508 | | 0.3324 | 2.0 | 49088 | 0.4146 | 0.8609 | | 0.2191 | 3.0 | 73632 | 0.4661 | 0.8719 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
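The card above describes the fine-tuned MNLI checkpoint but includes no usage snippet. A minimal, hedged sketch using the Transformers text-classification pipeline (the premise/hypothesis dict input and the label meanings are assumptions to verify against `model.config.id2label`):

```python
from transformers import pipeline

# Sketch only: score a premise/hypothesis pair with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="sileod/roberta-base-mnli")

result = classifier({
    "text": "A soccer game with multiple males playing.",  # premise
    "text_pair": "Some men are playing a sport.",          # hypothesis
})
print(result)  # e.g. {'label': ..., 'score': ...}; check model.config.id2label for label meanings
```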
nvidia/mit-b2
44acc700d01cdfdac6f5c236e69da847985eaac3
2022-07-29T13:15:51.000Z
[ "pytorch", "tf", "segformer", "image-classification", "dataset:imagenet_1k", "arxiv:2105.15203", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
nvidia
null
nvidia/mit-b2
515
null
transformers
2,301
--- license: apache-2.0 tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b2-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b2") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b2") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
prajjwal1/bert-mini-mnli
2793a188a2d6f995f9e6a5f73d9dd8b7a3a3aaa6
2021-10-05T17:57:20.000Z
[ "pytorch", "jax", "bert", "text-classification", "arxiv:1908.08962", "arxiv:2110.01518", "transformers" ]
text-classification
false
prajjwal1
null
prajjwal1/bert-mini-mnli
515
null
transformers
2,302
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI. If you use the model, please consider citing the paper: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The original implementation and more info can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli). ``` MNLI: 68.04% MNLI-mm: 69.17% ``` These models were trained for 4 epochs. [@prajjwal_1](https://twitter.com/prajjwal_1)
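The card lists MNLI scores but no loading code; a minimal sketch (not from the card — the label order should be checked against `model.config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: classify a premise/hypothesis pair with the MNLI-finetuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-mini-mnli")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-mini-mnli")

inputs = tokenizer("The cat sat on the mat.", "A cat is resting.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```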
Biasface/DDDC
4481ffe566e96900e4b4e4df6ebc815524295bbf
2021-11-30T17:30:53.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Biasface
null
Biasface/DDDC
513
null
transformers
2,303
--- tags: - conversational --- # hi
studio-ousia/mluke-base
0f3c9dc42873eaf0e807bd2736bc4cfbe73de3b2
2022-03-11T02:58:43.000Z
[ "pytorch", "luke", "fill-mask", "multilingual", "transformers", "named entity recognition", "relation classification", "question answering", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
studio-ousia
null
studio-ousia/mluke-base
513
3
transformers
2,304
--- language: multilingual thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png tags: - luke - named entity recognition - relation classification - question answering license: apache-2.0 --- ## mLUKE **mLUKE** (multilingual LUKE) is a multilingual extension of LUKE. Please check the [official repository](https://github.com/studio-ousia/luke) for more details and updates. This is the mLUKE base model with 12 hidden layers and a hidden size of 768. The total number of parameters in this model is 585M (278M for the word embeddings and encoder, 307M for the entity embeddings). The model was initialized with the weights of XLM-RoBERTa (base) and trained on the December 2020 version of Wikipedia in 24 languages. ### Citation If you find mLUKE useful for your work, please cite the following paper: ```latex @inproceedings{ri2021mluke, title={mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models}, author={Ryokan Ri and Ikuya Yamada and Yoshimasa Tsuruoka}, booktitle={arXiv}, year={2021} } ```
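The card gives model details but no loading snippet; a minimal loading sketch, assuming the checkpoint resolves through the Auto classes (mLUKE also ships entity embeddings, which this sketch does not use):

```python
from transformers import AutoTokenizer, AutoModel

# Sketch: load mLUKE and get token-level representations for a sentence.
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/mluke-base")
model = AutoModel.from_pretrained("studio-ousia/mluke-base")

inputs = tokenizer("Tokyo is the capital of Japan.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```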
DeepPavlov/distilrubert-small-cased-conversational
e348066b4a7279b97138038299bddc6580a9169a
2022-06-28T17:19:09.000Z
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
false
DeepPavlov
null
DeepPavlov/distilrubert-small-cased-conversational
513
null
transformers
2,305
--- language: - ru --- # distilrubert-small-cased-conversational Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of the Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational). Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used * KL loss (between teacher and student output logits) * MLM loss (between token labels and student output logits) * Cosine embedding loss (between averaged six consecutive hidden states from the teacher's encoder and one hidden state of the student) * MSE loss (between averaged six consecutive attention maps from the teacher's encoder and one attention map of the student) The model was trained for about 80 hours on 8 NVIDIA Tesla P100-SXM2.0 16Gb. To evaluate improvements in inference speed, we ran the teacher and student models on random sequences with seq_len=512, batch_size=16 (for throughput) and batch_size=1 (for latency). All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2.0 16Gb. | Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. | |-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------| | Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 | | Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 | To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in the [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models). Results can also be found in Tables 1 and 2 of the [paper](https://arxiv.org/abs/2205.02340), along with performance benchmarks and training details. # Citation If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper: ``` @misc{https://doi.org/10.48550/arxiv.2205.02340, doi = {10.48550/ARXIV.2205.02340}, url = {https://arxiv.org/abs/2205.02340}, author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail}, keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` \[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\) \[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference, Saint-Petersbourg, 2017. \[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108. \[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
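The card above reports distillation losses and benchmarks but no usage code; a minimal feature-extraction sketch, assuming the checkpoint loads as a standard DistilBERT encoder:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch: encode a Russian sentence with the distilled conversational encoder.
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/distilrubert-small-cased-conversational")
model = AutoModel.from_pretrained("DeepPavlov/distilrubert-small-cased-conversational")

inputs = tokenizer("Привет! Как дела?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (batch, sequence_length, hidden_size)
```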
tner/xlm-roberta-large-uncased-wnut2017
d2f13491ebb59b477fa61dc0224d88daf851513f
2021-02-13T00:12:33.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/xlm-roberta-large-uncased-wnut2017
512
null
transformers
2,306
# XLM-RoBERTa for NER XLM-RoBERTa fine-tuned for NER. Check more details at the [TNER repository](https://github.com/asahi417/tner). ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017") ```
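The snippet in the card only loads the tokenizer and model; a hedged follow-up showing inference through the token-classification pipeline (the aggregation strategy and the repo alias below are assumptions — the card itself loads from the older `asahi417/` namespace):

```python
from transformers import pipeline

# Sketch: run NER end to end with the pipeline under the repo id this row is listed under.
ner = pipeline(
    "token-classification",
    model="tner/xlm-roberta-large-uncased-wnut2017",
    aggregation_strategy="simple",  # assumption: group sub-tokens into entity spans
)
print(ner("Jacob Collier is a Grammy awarded artist from London."))
```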
huggingface/distilbert-base-uncased-finetuned-mnli
0fadb1fe60cd119b3af82e2bf9cb98a59336d7bc
2021-02-25T20:27:07.000Z
[ "pytorch", "tf", "distilbert", "text-classification", "transformers" ]
text-classification
false
huggingface
null
huggingface/distilbert-base-uncased-finetuned-mnli
512
null
transformers
2,307
Entry not found
SIKU-BERT/sikuroberta
bb25260d5c321924fe4fb353c09191c0aaf5c5c6
2021-09-22T00:22:36.000Z
[ "pytorch", "bert", "fill-mask", "zh", "transformers", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
SIKU-BERT
null
SIKU-BERT/sikuroberta
511
2
transformers
2,308
--- language: - "zh" thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "roberta" - "pytorch" inference: false license: "apache-2.0" --- # SikuBERT ## Model description ![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png) Digital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese. ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikuroberta") model = AutoModel.from_pretrained("SIKU-BERT/sikuroberta") ``` ## About Us We are from Nanjing Agricultural University. > Created with by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
Helsinki-NLP/opus-mt-ru-fr
55c73236818495c7a6dd5a98e3529de3481bc3ae
2021-09-10T14:02:31.000Z
[ "pytorch", "jax", "marian", "text2text-generation", "ru", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ru-fr
510
null
transformers
2,309
--- tags: - translation license: apache-2.0 --- ### opus-mt-ru-fr * source languages: ru * target languages: fr * OPUS readme: [ru-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.zip) * test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.test.txt) * test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newstest2012.ru.fr | 18.3 | 0.497 | | newstest2013.ru.fr | 21.6 | 0.516 | | Tatoeba.ru.fr | 51.5 | 0.670 |
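The card lists benchmarks but no usage example; a minimal translation sketch, assuming the standard MarianMT loading path for OPUS-MT checkpoints:

```python
from transformers import MarianMTModel, MarianTokenizer

# Sketch: translate a Russian sentence to French with the OPUS-MT checkpoint.
model_name = "Helsinki-NLP/opus-mt-ru-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Я люблю читать книги."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```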
KoichiYasuoka/chinese-roberta-base-upos
2fcc4e89732370e30451b65e5a7227c78811f0d4
2022-02-11T06:28:59.000Z
[ "pytorch", "bert", "token-classification", "zh", "dataset:universal_dependencies", "transformers", "chinese", "pos", "wikipedia", "dependency-parsing", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
KoichiYasuoka
null
KoichiYasuoka/chinese-roberta-base-upos
510
2
transformers
2,310
--- language: - "zh" tags: - "chinese" - "token-classification" - "pos" - "wikipedia" - "dependency-parsing" datasets: - "universal_dependencies" license: "apache-2.0" pipeline_tag: "token-classification" --- # chinese-roberta-base-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-roberta-base-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
anton-l/wav2vec2-base-superb-sv
0a1a74d00d5e44dbd7344b65c9847a1eb625c73b
2021-12-14T12:49:10.000Z
[ "pytorch", "wav2vec2", "audio-xvector", "transformers" ]
null
false
anton-l
null
anton-l/wav2vec2-base-superb-sv
510
null
transformers
2,311
Entry not found
Davlan/bert-base-multilingual-cased-finetuned-yoruba
000f80b4509f73bca9a33f9db0573d6f67396a12
2022-06-27T11:50:30.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "yo", "transformers", "autotrain_compatible" ]
fill-mask
false
Davlan
null
Davlan/bert-base-multilingual-cased-finetuned-yoruba
509
null
transformers
2,312
Hugging Face's logo --- language: yo datasets: --- # bert-base-multilingual-cased-finetuned-yoruba ## Model description **bert-base-multilingual-cased-finetuned-yoruba** is a **Yoruba BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Yorùbá language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Yorùbá corpus. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba') >>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun") [{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Mary Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.1738305538892746, 'token': 12176, 'token_str': 'Mary'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.16382873058319092, 'token': 13704, 'token_str': 'Queen'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.13272495567798615, 'token': 14382, 'token_str': 'ti'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ King Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.12823280692100525, 'token': 11515, 'token_str': 'King'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Lady Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.07841219753026962, 'token': 14005, 'token_str': 'Lady'}] ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends. ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | yo_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 78.97 | 82.58 [BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | 75.13 | 79.11 ### BibTeX entry and citation info By David Adelani ``` ```
facebook/wav2vec2-xls-r-1b
6d8fad78d7d9c252adfdf48da029590b21f47414
2021-11-18T16:32:35.000Z
[ "pytorch", "wav2vec2", "pretraining", "multilingual", "dataset:common_voice", "dataset:multilingual_librispeech", "arxiv:2111.09296", "transformers", "speech", "xls_r", "xls_r_pretrained", "license:apache-2.0" ]
null
false
facebook
null
facebook/wav2vec2-xls-r-1b
509
10
transformers
2,313
--- language: multilingual datasets: - common_voice - multilingual_librispeech tags: - speech - xls_r - xls_r_pretrained license: apache-2.0 --- # Wav2Vec2-XLS-R-1B [Facebook's Wav2Vec2 XLS-R](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) counting **1 billion** parameters. ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/xls_r.png) XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages. When using the model make sure that your speech input is sampled at 16kHz. **Note**: This model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation, or Classification. Check out [**this blog**](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about ASR. [XLS-R Paper](https://arxiv.org/abs/2111.09296) **Abstract** This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on 436K hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 20%-33% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this google colab](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb) for more information on how to fine-tune the model. You can find other pretrained XLS-R models with different numbers of parameters: * [300M parameters version](https://huggingface.co/facebook/wav2vec2-xls-r-300m) * [1B version version](https://huggingface.co/facebook/wav2vec2-xls-r-1b) * [2B version version](https://huggingface.co/facebook/wav2vec2-xls-r-2b)
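The card points to a fine-tuning notebook rather than inline code. As a hedged sketch only: the pretrained checkpoint has no task head, so under the assumption that it loads into the base Wav2Vec2 encoder, it can produce raw 16 kHz speech representations before fine-tuning (expect warnings about unused pretraining weights):

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

# Sketch: extract contextual speech representations; real audio must be 16 kHz mono.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-xls-r-1b")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-xls-r-1b")

waveform = torch.zeros(16000).numpy()  # one second of silence as a placeholder input
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (batch, frames, hidden_size)
```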
oliverguhr/spelling-correction-english-base
a30d76e2e7de0b0b350304c8e17cef99da8eb8e7
2022-06-13T12:09:01.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "en", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
oliverguhr
null
oliverguhr/spelling-correction-english-base
509
2
transformers
2,314
--- language: - en license: mit widget: - text: "lets do a comparsion" example_title: "1" - text: "Their going to be here so0n" example_title: "2" - text: "ze shop is cloed due to covid 19" example_title: "3" metrics: - cer --- This is an experimental model that should fix your typos and punctuation. If you like to run your own experiments or train for a different language, have a look at [the code](https://github.com/oliverguhr/spelling). ## Model description This is a proof of concept spelling correction model for English. ## Intended uses & limitations This project is work in progress, be aware that the model can produce artefacts. You can test the model using the pipeline-interface: ```python from transformers import pipeline fix_spelling = pipeline("text2text-generation",model="oliverguhr/spelling-correction-english-base") print(fix_spelling("lets do a comparsion",max_length=2048)) ```
SEBIS/code_trans_t5_base_code_documentation_generation_python
f42aaecddfc35f12575e9c887ee79cf3d6cdb97d
2021-06-23T04:43:22.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers", "summarization" ]
summarization
false
SEBIS
null
SEBIS/code_trans_t5_base_code_documentation_generation_python
508
null
transformers
2,315
--- tags: - summarization widget: - text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" --- # CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus python dataset. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/python/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
indobenchmark/indobert-large-p2
4b280c3bfcc1ed2d6b4589be5c876076b7d73568
2021-05-19T20:28:22.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "id", "dataset:Indo4B", "arxiv:2009.05387", "transformers", "indobert", "indobenchmark", "indonlu", "license:mit" ]
feature-extraction
false
indobenchmark
null
indobenchmark/indobert-large-p2
508
null
transformers
2,316
--- language: id tags: - indobert - indobenchmark - indonlu license: mit inference: false datasets: - Indo4B --- # IndoBERT Large Model (phase2 - uncased) [IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective. ## All Pre-trained Models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) | ## How to use ### Load model and tokenizer ```python from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-large-p2") model = AutoModel.from_pretrained("indobenchmark/indobert-large-p2") ``` ### Extract contextual representation ```python x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1) print(x, model(x)[0].sum()) ``` ## Authors <b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti. ## Citation If you use our work, please cite: ```bibtex @inproceedings{wilie2020indonlu, title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding}, author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti}, booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing}, year={2020} } ```
kamalkraj/bioelectra-base-discriminator-pubmed-pmc-lt
d807405696fdace62f42841dc06289d2354e1158
2021-06-10T14:22:08.000Z
[ "pytorch", "electra", "pretraining", "transformers" ]
null
false
kamalkraj
null
kamalkraj/bioelectra-base-discriminator-pubmed-pmc-lt
508
2
transformers
2,317
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset. For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/). ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed") sentence = "The quick brown fox jumps over the lazy dog" fake_sentence = "The quick brown fox fake over the lazy dog" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()] ```
facebook/regnet-y-040
40577f588ce4b8b3a306e59b93b117047e0a6625
2022-06-30T18:56:14.000Z
[ "pytorch", "tf", "regnet", "image-classification", "dataset:imagenet-1k", "arxiv:2003.13678", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
facebook
null
facebook/regnet-y-040
508
null
transformers
2,318
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # RegNet RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls). Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, RegNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040") >>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) 'tabby, tabby cat' ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
cross-encoder/quora-roberta-base
195493c8767e7155c449e9ff7e64890d116d432d
2021-08-05T08:41:36.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/quora-roberta-base
507
1
transformers
2,319
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are duplicates. Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/quora-roberta-base') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, by just using the Transformers ``AutoModel`` class, as sketched below.
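A sketch of the plain-Transformers path mentioned at the end of the card (the single-logit head and the sigmoid are assumptions to verify against `model.config.num_labels`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: score one question pair for duplication without sentence_transformers.
tokenizer = AutoTokenizer.from_pretrained("cross-encoder/quora-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/quora-roberta-base")

features = tokenizer(
    ["How do I learn Python?"],
    ["What is the best way to learn Python?"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**features).logits
print(torch.sigmoid(logits).squeeze().item())  # closer to 1 = more likely duplicates
```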
valurank/distilroberta-news-small
dad826d1ce6732850428d4673ff50835c8f7f59b
2022-06-08T20:45:50.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:valurank/news-small", "transformers", "license:other" ]
text-classification
false
valurank
null
valurank/distilroberta-news-small
507
null
transformers
2,320
--- license: other language: en datasets: - valurank/news-small --- # DistilROBERTA fine-tuned for news classification This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify news articles into 3 categories (bad, medium, good). ## Training data The dataset used to fine-tune the model is [news-small](https://huggingface.co/datasets/valurank/news-small), the 300 article news dataset manually annotated by Alex. ## Inputs Similar to its base model, this model accepts inputs with a maximum length of 512 tokens.
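The card explains what the classifier does but includes no code; a minimal, hedged pipeline sketch (the label names come from the model config and are not spelled out in the card):

```python
from transformers import pipeline

# Sketch: classify a news article with the fine-tuned DistilRoBERTa head.
classifier = pipeline("text-classification", model="valurank/distilroberta-news-small")
print(classifier("The city council approved the new budget after a lengthy debate."))
```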
Milos/slovak-gpt-j-1.4B
1ca9a664fba18d050377579e43b92897efca62d4
2022-02-17T14:29:47.000Z
[ "pytorch", "gptj", "text-generation", "sk", "arxiv:2104.09864", "transformers", "Slovak GPT-J", "causal-lm", "license:gpl-3.0" ]
text-generation
false
Milos
null
Milos/slovak-gpt-j-1.4B
506
null
transformers
2,321
--- language: - sk tags: - Slovak GPT-J - pytorch - causal-lm license: gpl-3.0 --- # Slovak GPT-J-1.4B Slovak GPT-J-1.4B with the whopping `1,415,283,792` parameters is the latest and the largest model released in Slovak GPT-J series. Smaller variants, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M), are still available. ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 1.4B trainable parameters. <figure> | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 1,415,283,792 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 2048 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for a bit more than 26.5 billion tokens over 48,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.657`. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Tradičné jedlo na Orave sú" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov' ``` ### Capabilities, Limitations, and Biases Based on the parameter count and our internal testing this should be by far the most capable model out of all models released in Slovak GPT-J series. For sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines in the examples that follow. These were generated with both `top_p` and `temperature` set to `0.9`. A spin on "Mor ho!" by Samo Chalpuka ``` PROMPT Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu. Duní Dunaj a luna za lunou sa valí OUTPUT Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu. Duní Dunaj a luna za lunou sa valí: nad ním svieti pevný hrad na vysokom bralí. Pod tým hradom je zem slovenská, na nej je Slovensko naše. Na tom vrchu leží slovenský národ, pod tým bralom leží krajina slovenská. A pod tým hradom leží i jeho sláva: na tom vrchu, na tom brale leží sláva slovenského národa.``` ``` Jokes are a bit hit or miss. Very often they contain vulgarisms and/or are highly politically incorrect. This one is rather exception than the rule. ``` PROMPT Nasledujú najlepšie vtipy. Prečo OUTPUT Nasledujú najlepšie vtipy. Prečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka. ``` The infamous unicorn prompt from GPT-2 paper. ``` PROMPT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. OUTPUT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. V Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. "Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov," povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč. Podľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. "Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách." Na Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|> ``` Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela pravdivá.' 
``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :) If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. ### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-1.4B, author = {Kondela, Milos}, title = {{Slovak GPT-J-1.4B}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-1.4B}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
SIKU-BERT/sikubert
fc656de2d6bde33919102dd3abe31c843f42226a
2021-09-13T13:34:40.000Z
[ "pytorch", "bert", "fill-mask", "zh", "transformers", "chinese", "classical chinese", "literary chinese", "ancient chinese", "roberta", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
SIKU-BERT
null
SIKU-BERT/sikubert
506
2
transformers
2,322
--- language: - "zh" thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "roberta" - "pytorch" inference: false license: "apache-2.0" --- # SikuBERT ## Model description ![SikuBERT](https://raw.githubusercontent.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing/main/appendix/sikubert.png) Digital humanities research needs the support of large-scale corpus and high-performance ancient Chinese natural language processing tools. The pre-training language model has greatly improved the accuracy of text mining in English and modern Chinese texts. At present, there is an urgent need for a pre-training model specifically for the automatic processing of ancient texts. We used the verified high-quality “Siku Quanshu” full-text corpus as the training set, based on the BERT deep language model architecture, we constructed the SikuBERT and SikuRoBERTa pre-training language models for intelligent processing tasks of ancient Chinese. ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert") model = AutoModel.from_pretrained("SIKU-BERT/sikubert") ``` ## About Us We are from Nanjing Agricultural University. > Created with by SIKU-BERT [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
google/owlvit-base-patch32
4641e344cbbe8e25e0f2ab4e7e53372091ef9cfd
2022-07-21T10:49:01.000Z
[ "pytorch", "owlvit", "transformers" ]
null
false
google
null
google/owlvit-base-patch32
506
null
transformers
2,323
Entry not found
boychaboy/SNLI_distilbert-base-cased
fabefe1f7390d5aecf5d152e13da5998eee2e84d
2021-05-10T17:08:47.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
boychaboy
null
boychaboy/SNLI_distilbert-base-cased
505
null
transformers
2,324
Entry not found
microsoft/unispeech-sat-base-plus-sv
a492b4bf41b1bd2fa6e6d07c6eae573b3f711b66
2021-12-17T13:56:17.000Z
[ "pytorch", "unispeech-sat", "audio-xvector", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.05752", "transformers", "speech" ]
null
false
microsoft
null
microsoft/unispeech-sat-base-plus-sv
505
null
transformers
2,325
--- language: - en tags: - speech --- # UniSpeech-SAT-Base for Speaker Verification [Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/) The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz. The model was pre-trained on: - 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875) - 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909) - 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390) [Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu **Abstract** *Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..* The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT. 
# Fine-tuning details The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss: [X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf) # Usage ## Speaker Verification ```python from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector from datasets import load_dataset import torch dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-plus-sv') model = UniSpeechSatForXVector.from_pretrained('microsoft/unispeech-sat-base-plus-sv') # audio files are decoded on the fly; dataset[:2]["audio"] is a list of dicts, so collect the raw arrays first audio = [x["array"] for x in dataset[:2]["audio"]] inputs = feature_extractor(audio, padding=True, return_tensors="pt") embeddings = model(**inputs).embeddings embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu() # the resulting embeddings can be used for cosine similarity-based retrieval cosine_sim = torch.nn.CosineSimilarity(dim=-1) similarity = cosine_sim(embeddings[0], embeddings[1]) threshold = 0.89 # the optimal threshold is dataset-dependent if similarity < threshold: print("Speakers are not the same!") ``` # License The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE) ![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/UniSpeechSAT.png)
nkoh01/MSRoberta
3ff20e811ea95572470d3538cad29e816f05d7f4
2021-05-20T18:51:20.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
nkoh01
null
nkoh01/MSRoberta
505
null
transformers
2,326
# MSRoBERTa Fine-tuned RoBERTa MLM model for the [`Microsoft Sentence Completion Challenge`](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR_SCCD.pdf). This model is case-sensitive, following the `roberta-base` model. # Model description (taken from: [here](https://huggingface.co/roberta-base)) RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline, AutoModelForMaskedLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("nkoh01/MSRoberta") model = AutoModelForMaskedLM.from_pretrained("nkoh01/MSRoberta") unmasker = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) unmasker("Hello, it is a <mask> to meet you.") [{'score': 0.9508683085441589, 'sequence': 'hello, it is a pleasure to meet you.', 'token': 10483, 'token_str': ' pleasure'}, {'score': 0.015089659951627254, 'sequence': 'hello, it is a privilege to meet you.', 'token': 9951, 'token_str': ' privilege'}, {'score': 0.013942377641797066, 'sequence': 'hello, it is a joy to meet you.', 'token': 5823, 'token_str': ' joy'}, {'score': 0.006964420434087515, 'sequence': 'hello, it is a delight to meet you.', 'token': 13213, 'token_str': ' delight'}, {'score': 0.0024567877408117056, 'sequence': 'hello, it is a honour to meet you.', 'token': 6671, 'token_str': ' honour'}] ``` ## Installations Make sure you run the `!pip install transformers` command to install the transformers library before running the commands above. ## Bias and limitations Under construction.
fmikaelian/camembert-base-fquad
341bf4683d9388a0a4022ce4062283255dc9246c
2020-12-11T21:40:08.000Z
[ "pytorch", "camembert", "question-answering", "fr", "transformers", "autotrain_compatible" ]
question-answering
false
fmikaelian
null
fmikaelian/camembert-base-fquad
504
1
transformers
2,327
--- language: fr --- # camembert-base-fquad ## Description A baseline model for question-answering in french ([CamemBERT](https://camembert-model.fr/) model fine-tuned on [FQuAD](https://fquad.illuin.tech/)) ## Training hyperparameters ```shell python3 ./examples/question-answering/run_squad.py \ --model_type camembert \ --model_name_or_path camembert-base \ --do_train \ --do_eval \ --do_lower_case \ --train_file train.json \ --predict_file valid.json \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir output \ --per_gpu_eval_batch_size=3 \ --per_gpu_train_batch_size=3 \ --save_steps 10000 ``` ## Evaluation results ```shell {"f1": 77.24515316052342, "exact_match": 52.82308657465496} ``` ## Usage ```python from transformers import pipeline nlp = pipeline('question-answering', model='fmikaelian/camembert-base-fquad', tokenizer='fmikaelian/camembert-base-fquad') nlp({ 'question': "Qui est Claude Monet?", 'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme." }) ```
lanwuwei/GigaBERT-v3-Arabic-and-English
ee5c781756946364d989e0102b91b4a15390f6ac
2021-05-19T00:17:42.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "en", "ar", "dataset:gigaword", "dataset:oscar", "dataset:wikipedia", "transformers" ]
feature-extraction
false
lanwuwei
null
lanwuwei/GigaBERT-v3-Arabic-and-English
504
null
transformers
2,328
---
language:
- en
- ar
datasets:
- gigaword
- oscar
- wikipedia
---

## GigaBERT-v3
GigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained on a large-scale corpus (Gigaword+Oscar+Wikipedia) with ~10B tokens, showing state-of-the-art zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:

	@inproceedings{lan2020gigabert,
	  author     = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
	  title      = {An Empirical Study of Pre-trained Transformers for Arabic Information Extraction},
	  booktitle  = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
	  year       = {2020}
	}

## Usage
```python
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English", do_lower_case=True)
model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English")
```
More code examples can be found [here](https://github.com/lanwuwei/GigaBERT).
orai-nlp/ElhBERTeu
8d4de0a5d8c49f260010d5ea239afe77de31cfe2
2022-07-06T10:21:53.000Z
[ "pytorch", "bert", "feature-extraction", "eu", "transformers", "basque", "euskara", "license:cc-by-4.0" ]
feature-extraction
false
orai-nlp
null
orai-nlp/ElhBERTeu
502
0
transformers
2,329
--- license: cc-by-4.0 language: eu tags: - bert - basque - euskara --- # ElhBERTeu This is a BERT model for Basque introduced in [BasqueGLUE: A Natural Language Understanding Benchmark for Basque](). To train ElhBERTeu, we collected different corpora sources from several domains: updated (2021) national and local news sources, Basque Wikipedia, as well as novel news sources and texts from other domains, such as science (both academic and divulgative), literature or subtitles. More details about the corpora used and their sizes are shown in the following table. Texts from news sources were oversampled (duplicated) as done during the training of BERTeus. In total 575M tokens were used for pre-training ElhBERTeu. |Domain | Size | |-----------|----------| |News | 2 x 224M | |Wikipedia | 40M | |Science | 58M | |Literature | 24M | |Others | 7M | |Total | 575M | ElhBERTeu is a base, cased monolingual BERT model for Basque, with a vocab size of 50K, which has 124M parameters in total. ElhBERTeu was trained following the design decisions for [BERTeus](https://huggingface.co/ixa-ehu/berteus-base-cased). The tokenizer and the hyper-parameter settings remained the same, with the only difference being that the full pre-training of the model (1M steps) was performed with a sequence length of 512 on a v3-8 TPU. The model has been evaluated on the recently created [BasqueGLUE](https://github.com/Elhuyar/BasqueGLUE) NLU benchmark: | Model | AVG | NERC | F_intent | F_slot | BHTC | BEC | Vaxx | QNLI | WiC | coref | |-----------|:---------:|:---------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:| | | | F1 | F1 | F1 | F1 | F1 | MF1 | acc | acc | acc | | BERTeus | 73.23 | 81.92 | **82.52** | 74.34 |**78.26**| 69.43 | 59.30 |**74.26**| 70.71 |**68.31**| | ElhBERTeu | **73.71** | **82.30** | 82.24 |**75.64**| 78.05 |**69.89**|**63.81**| 73.84 |**71.71**| 65.93 | If you use this model, please cite the following paper: - G. Urbizu, I. San Vicente, X. Saralegi, R. Agerri, A. Soroa. BasqueGLUE: A Natural Language Understanding Benchmark for Basque. In proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022). June 2022. Marseille, France ``` @InProceedings{urbizu2022basqueglue, author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor}, title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {1603--1612}, abstract = {Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. 
We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.}, url = {https://aclanthology.org/2022.lrec-1.172} } ``` License: CC BY 4.0
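For quick use as a feature extractor, a minimal loading sketch (assuming the standard `transformers` auto classes, which the `bert`/`feature-extraction` tags suggest) looks like this; the example sentence is only an illustration and can be replaced with any Basque text:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("orai-nlp/ElhBERTeu")
model = AutoModel.from_pretrained("orai-nlp/ElhBERTeu")

# encode a Basque sentence and extract contextual token representations
inputs = tokenizer("Kaixo, mundua!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768) for this base-sized model
```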
dhtocks/Named-Entity-Recognition
c9eb2cb284b0b69709132d19eeac3816ceb89c5b
2022-01-15T11:22:33.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
dhtocks
null
dhtocks/Named-Entity-Recognition
500
null
transformers
2,330
Entry not found
anas-awadalla/splinter-large-finetuned-squad
36015d000da8055edcfbbf0a14c6f5d31a2e837c
2022-05-15T10:51:43.000Z
[ "pytorch", "splinter", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/splinter-large-finetuned-squad
500
null
transformers
2,331
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: splinter-large-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # splinter-large-finetuned-squad This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
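### Usage (sketch)

The card does not include an inference example. As a minimal, unofficial sketch, and assuming the checkpoint exposes the usual extractive-QA interface (`start_logits`/`end_logits`) through the `AutoModelForQuestionAnswering` class, the question and context in the snippet below are illustrative placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("anas-awadalla/splinter-large-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("anas-awadalla/splinter-large-finetuned-squad")

question = "Where does Sarah live?"          # illustrative example
context = "My name is Sarah and I live in London."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end token positions and decode the answer span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```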
STAM/agricore
b6dfd05bfdcb097a78e563599517f8441452b404
2022-06-01T14:24:16.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
STAM
null
STAM/agricore
500
null
transformers
2,332
--- license: mit ---
TofuBoy/DialoGPT-medium-boon
cd59807e12d63621addb6c915273fe8621ba6145
2022-01-23T05:46:38.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
TofuBoy
null
TofuBoy/DialoGPT-medium-boon
499
null
transformers
2,333
--- tags: - conversational --- # Boon Bot DialoGPT Model
Recognai/zeroshot_selectra_medium
6c3ff31c3c1acb96375d7913f90a19707af33b9a
2022-03-27T09:30:04.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:xnli", "transformers", "zero-shot-classification", "nli", "license:apache-2.0" ]
zero-shot-classification
false
Recognai
null
Recognai/zeroshot_selectra_medium
498
3
transformers
2,334
--- language: es tags: - zero-shot-classification - nli - pytorch datasets: - xnli pipeline_tag: zero-shot-classification license: apache-2.0 widget: - text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo" candidate_labels: "cultura, sociedad, economia, salud, deportes" --- # Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA *Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html). In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier. ## Usage ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="Recognai/zeroshot_selectra_medium") classifier( "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo", candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"], hypothesis_template="Este ejemplo es {}." ) """Output {'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo', 'labels': ['sociedad', 'cultura', 'economia', 'salud', 'deportes'], 'scores': [0.6450043320655823, 0.16710571944713593, 0.08507631719112396, 0.0759836807847023, 0.026829993352293968]} """ ``` The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.** ## Demo and tutorial If you want to see this model in action, we have created a basic tutorial using [Rubrix](https://www.rubrix.ml/), a free and open-source tool to *explore, annotate, and monitor data for NLP*. The tutorial shows you how to evaluate this classifier for news categorization in Spanish, and how it could be used to build a training set for training a supervised classifier (which might be useful if you want obtain more precise results or improve the model over time). You can [find the tutorial here](https://rubrix.readthedocs.io/en/master/tutorials/zeroshot_data_annotation.html). See the video below showing the predictions within the annotation process (see that the predictions are almost correct for every example). 
<video width="100%" controls><source src="https://github.com/recognai/rubrix-materials/raw/main/tutorials/videos/zeroshot_selectra_news_data_annotation.mp4" type="video/mp4"></video> ## Metrics | Model | Params | XNLI (acc) | \*MLSUM (acc) | | --- | --- | --- | --- | | [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 | | zs SELECTRA medium | 41M | **0.807** | **0.589** | | [zs SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) | **22M** | 0.795 | 0.446 | \*evaluated with zero-shot learning (ZSL) - **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion. - **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb) ## Training Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details. ## Authors - David Fidalgo ([GitHub](https://github.com/dcfidalgo)) - Daniel Vila ([GitHub](https://github.com/dvsrepo)) - Francisco Aranda ([GitHub](https://github.com/frascuchon)) - Javier Lopez ([GitHub](https://github.com/javispp))
castorini/doc2query-t5-large-msmarco
e607227b4d07161391f3a61a7ccd9efcf875ea14
2021-11-24T19:16:08.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
castorini
null
castorini/doc2query-t5-large-msmarco
497
null
transformers
2,335
For more information, check [doc2query.ai](http://doc2query.ai)
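As a rough, unofficial sketch (assuming the usual doc2query setup, in which the T5 checkpoint takes a passage as input and generates a plausible search query for it), the model can be exercised with the standard seq2seq API; the passage below is just an illustrative example:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("castorini/doc2query-t5-large-msmarco")
model = AutoModelForSeq2SeqLM.from_pretrained("castorini/doc2query-t5-large-msmarco")

passage = ("The presence of communication amid scientific minds was equally important "
           "to the success of the Manhattan Project as scientific intellect was.")
input_ids = tokenizer(passage, return_tensors="pt").input_ids

# sample a few synthetic queries for the passage (sampling settings are illustrative)
outputs = model.generate(input_ids, max_length=64, do_sample=True, top_k=10, num_return_sequences=3)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```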
j-hartmann/purchase-intention-english-roberta-large
e26a7d11ced410a78f1fe9a710e61ca14a2a0014
2022-02-06T12:22:55.000Z
[ "pytorch", "roberta", "text-classification", "en", "transformers", "sentiment", "twitter" ]
text-classification
false
j-hartmann
null
j-hartmann/purchase-intention-english-roberta-large
497
1
transformers
2,336
--- language: "en" tags: - roberta - sentiment - twitter widget: - text: "This looks tasty. Where can I buy it??" - text: "Now I want this, too." - text: "You look great today!" - text: "I just love spring and sunshine!" --- This RoBERTa-based model can classify *expressed purchase intentions* in English language text in 2 classes: - purchase intention 🤩 - no purchase intention 😐 The model was fine-tuned on 2,000 manually annotated social media posts. The hold-out accuracy is 95% (vs. a balanced 50% random-chance baseline). For details on the training approach see Web Appendix F in Hartmann et al. (2021). # Application ```python from transformers import pipeline classifier = pipeline("text-classification", model="j-hartmann/purchase-intention-english-roberta-large", return_all_scores=True) classifier("I want this!") ``` ```python Output: [[{'label': 'no', 'score': 0.0014553926885128021}, {'label': 'yes', 'score': 0.9985445737838745}]] ``` # Reference Please cite [this paper](https://journals.sagepub.com/doi/full/10.1177/00222437211037258) when you use our model. Feel free to reach out to [[email protected]](mailto:[email protected]) with any questions or feedback you may have. ``` @article{hartmann2021, title={The Power of Brand Selfies}, author={Hartmann, Jochen and Heitmann, Mark and Schamp, Christina and Netzer, Oded}, journal={Journal of Marketing Research} year={2021} } ```
naver/efficient-splade-V-large-query
eb23fdf72c344e26d37d63a86cf536b3a6e11118
2022-07-08T13:12:08.000Z
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:ms_marco", "transformers", "splade", "query-expansion", "document-expansion", "bag-of-words", "passage-retrieval", "knowledge-distillation", "document encoder", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
fill-mask
false
naver
null
naver/efficient-splade-V-large-query
497
null
transformers
2,337
--- license: cc-by-nc-sa-4.0 language: "en" tags: - splade - query-expansion - document-expansion - bag-of-words - passage-retrieval - knowledge-distillation - document encoder datasets: - ms_marco --- ## Efficient SPLADE Efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the **query** one, please also download the **doc** one (https://huggingface.co/naver/efficient-splade-V-large-doc). For additional details, please visit: * paper: https://dl.acm.org/doi/10.1145/3477495.3531833 * code: https://github.com/naver/splade | | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (Inference) ms | --- | --- | --- | --- | --- | | `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3 | `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7 ## Citation If you use our checkpoint, please cite our work (need to update): ``` @inproceedings{10.1145/3477495.3531833, author = {Lassance, Carlos and Clinchant, St\'{e}phane}, title = {An Efficiency Study for SPLADE Models}, year = {2022}, isbn = {9781450387323}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3477495.3531833}, doi = {10.1145/3477495.3531833}, abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.}, booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval}, pages = {2220–2226}, numpages = {7}, keywords = {splade, latency, information retrieval, sparse representations}, location = {Madrid, Spain}, series = {SIGIR '22} } ```
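The card does not show how to turn the MLM logits into a sparse query representation. A minimal sketch, assuming the usual SPLADE aggregation (log-saturated ReLU of the logits, max-pooled over non-padded token positions; see the linked repository for the reference implementation), could look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("naver/efficient-splade-V-large-query")
model = AutoModelForMaskedLM.from_pretrained("naver/efficient-splade-V-large-query")

query = "how to tie a windsor knot"  # illustrative query
inputs = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# SPLADE-style aggregation: log(1 + ReLU(logits)), max over token positions
mask = inputs["attention_mask"].unsqueeze(-1)
sparse_rep = torch.max(torch.log1p(torch.relu(logits)) * mask, dim=1).values.squeeze(0)

# inspect the highest-weighted vocabulary terms of the expanded query
top = torch.topk(sparse_rep, k=10)
for score, idx in zip(top.values, top.indices):
    print(tokenizer.convert_ids_to_tokens([idx.item()])[0], round(score.item(), 2))
```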
canwenxu/BERT-of-Theseus-MNLI
ee82a9e7c3fec19661f93a2291295ea62e8acee1
2021-05-19T13:58:30.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "dataset:multi_nli", "arxiv:2002.02925", "arxiv:2005.00628", "transformers" ]
feature-extraction
false
canwenxu
null
canwenxu/BERT-of-Theseus-MNLI
496
null
transformers
2,338
--- thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png datasets: - multi_nli --- # BERT-of-Theseus See our paper ["BERT-of-Theseus: Compressing BERT by Progressive Module Replacing"](http://arxiv.org/abs/2002.02925). BERT-of-Theseus is a new compressed BERT by progressively replacing the components of the original BERT. ![BERT of Theseus](https://github.com/JetRunner/BERT-of-Theseus/blob/master/bert-of-theseus.png?raw=true) ## Load Pretrained Model on MNLI We provide a 6-layer pretrained model on MNLI as a general-purpose model, which can transfer to other sentence classification tasks, outperforming DistillBERT (with the same 6-layer structure) on six tasks of GLUE (dev set). | Method | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | |-----------------|------|------|------|------|------|-------|-------| | BERT-base | 83.5 | 89.5 | 91.2 | 89.8 | 71.1 | 91.5 | 88.9 | | DistillBERT | 79.0 | 87.5 | 85.3 | 84.9 | 59.9 | 90.7 | 81.2 | | BERT-of-Theseus | 82.1 | 87.5 | 88.8 | 88.8 | 70.1 | 91.8 | 87.8 | Please Note: this checkpoint is for [Intermediate-Task Transfer Learning](https://arxiv.org/abs/2005.00628) so it does not include the classification head for MNLI! Please fine-tune it before use (like DistilBERT).
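## Usage sketch

Since the checkpoint ships without a classification head, a fine-tuning setup would start roughly as follows (a minimal sketch; `num_labels=2` is only an illustrative choice and should match your downstream task):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("canwenxu/BERT-of-Theseus-MNLI")
# the encoder weights come from the 6-layer compressed model,
# while the classification head is freshly initialized and must be fine-tuned
model = AutoModelForSequenceClassification.from_pretrained("canwenxu/BERT-of-Theseus-MNLI", num_labels=2)

inputs = tokenizer("A quick sanity-check sentence.", return_tensors="pt")
print(model(**inputs).logits.shape)  # (1, 2) before any fine-tuning
```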
readerbench/RoBERT-base
42fa3f7ca1731b66401081554a36ef072279402a
2021-05-20T04:05:43.000Z
[ "pytorch", "tf", "jax", "bert", "ro", "transformers" ]
null
false
readerbench
null
readerbench/RoBERT-base
496
null
transformers
2,339
Model card for RoBERT-base --- language: - ro --- # RoBERT-base ## Pretrained BERT model for Romanian Pretrained model on Romanian language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: RoBERT-small, **RoBERT-base** and RoBERT-large, all versions uncased. | Model | Weights | L | H | A | MLM accuracy | NSP accuracy | |----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:| | RoBERT-small | 19M | 12 | 256 | 8 | 0.5363 | 0.9687 | | *RoBERT-base* | *114M* | *12* | *768* | *12* | *0.6511* | *0.9802* | | RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 | All models are available: * [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small) * [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) * [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large) #### How to use ```python # tensorflow from transformers import AutoModel, AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base") model = TFAutoModel.from_pretrained("readerbench/RoBERT-base") inputs = tokenizer("exemplu de propoziție", return_tensors="tf") outputs = model(inputs) # pytorch from transformers import AutoModel, AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base") model = AutoModel.from_pretrained("readerbench/RoBERT-base") inputs = tokenizer("exemplu de propoziție", return_tensors="pt") outputs = model(**inputs) ``` ## Training data The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process. | Corpus | Words | Sentences | Size (GB)| |-----------|:---------:|:---------:|:--------:| | Oscar | 1.78B | 87M | 10.8 | | RoTex | 240M | 14M | 1.5 | | RoWiki | 50M | 2M | 0.3 | | **Total** | **2.07B** | **103M** | **12.6** | ## Downstream performance ### Sentiment analysis We report Macro-averaged F1 score (in %) | Model | Dev | Test | |------------------|:--------:|:--------:| | multilingual-BERT| 68.96 | 69.57 | | XLM-R-base | 71.26 | 71.71 | | BERT-base-ro | 70.49 | 71.02 | | RoBERT-small | 66.32 | 66.37 | | *RoBERT-base* | *70.89* | *71.61* | | RoBERT-large | **72.48**| **72.11**| ### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %). | Model | Dialect Classification | MD to RO | RO to MD | |-------------------|:----------------------:|:--------:|:--------:| | 2-CNN + SVM | 93.40 | 65.09 | 75.21 | | Char+Word SVM | 96.20 | 69.08 | 81.93 | | BiGRU | 93.30 | **70.10**| 80.30 | | multilingual-BERT | 95.34 | 68.76 | 78.24 | | XLM-R-base | 96.28 | 69.93 | 82.28 | | BERT-base-ro | 96.20 | 69.93 | 78.79 | | RoBERT-small | 95.67 | 69.01 | 80.40 | | *RoBERT-base* | *97.39* | *68.30* | *81.09* | | RoBERT-large | **97.78** | 69.91 | **83.65**| ### Diacritics Restoration Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %. 
| Model | word level | char level | |-----------------------------|:----------:|:----------:| | BiLSTM | 99.42 | - | | CharCNN | 98.40 | 99.65 | | CharCNN + multilingual-BERT | 99.72 | 99.94 | | CharCNN + XLM-R-base | 99.76 | **99.95** | | CharCNN + BERT-base-ro | **99.79** | **99.95** | | CharCNN + RoBERT-small | 99.73 | 99.94 | | *CharCNN + RoBERT-base* | *99.78* | **99.95** | | CharCNN + RoBERT-large | 99.76 | **99.95** | ### BibTeX entry and citation info ```bibtex @inproceedings{masala2020robert, title={RoBERT--A Romanian BERT Model}, author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6626--6637}, year={2020} } ```
GroNLP/gpt2-small-dutch
a4d770e17c7b3b2aa3ff29c6e52c7c8284974fb9
2021-05-21T09:55:47.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "nl", "arxiv:2012.05628", "transformers", "adaption", "recycled", "gpt2-small" ]
text-generation
false
GroNLP
null
GroNLP/gpt2-small-dutch
495
null
transformers
2,340
--- language: nl tags: - adaption - recycled - gpt2-small pipeline_tag: text-generation --- # GPT-2 recycled for Dutch (small) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch") model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
textattack/roberta-base-MRPC
c8e94968c57c5d825bf0476261d3fb0602c1e0ac
2021-05-20T22:07:47.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers" ]
text-classification
false
textattack
null
textattack/roberta-base-MRPC
495
null
transformers
2,341
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9117647058823529, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
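A minimal inference sketch for scoring a sentence pair (MRPC is a paraphrase-detection task); the sentences are illustrative, and the meaning of the two output classes should be checked against `model.config.id2label`, since the card does not state the label order:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/roberta-base-MRPC")
model = AutoModelForSequenceClassification.from_pretrained("textattack/roberta-base-MRPC")

sent1 = "The company said the plan would cut jobs."
sent2 = "The firm announced that the proposal would eliminate positions."
inputs = tokenizer(sent1, sent2, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print(model.config.id2label)  # check which index corresponds to "paraphrase"
print(probs)
```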
thanathorn/mt5-cpe-kmutt-thai-sentence-sum
cc479312c558c62618d794a961d994be2a12d0fc
2022-05-13T18:20:03.000Z
[ "pytorch", "mt5", "text2text-generation", "th", "transformers", "summarization", "mT5", "autotrain_compatible" ]
summarization
false
thanathorn
null
thanathorn/mt5-cpe-kmutt-thai-sentence-sum
495
1
transformers
2,342
--- tags: - summarization - mT5 language: - th widget: - text: "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด" --- # mt5-cpe-kmutt-thai-sentence-sum This repository contains the finetuned mT5-base model for Thai sentence summarization. The architecture of the model is based on mT5 model and fine-tuned on text-summarization pairs in Thai. Also, this project is a Senior Project of Computer Engineering Student at King Mongkut’s University of Technology Thonburi. ## Usage on SimpleTransformer (Tested on version 0.63.4) ```python from simpletransformers.t5 import T5Model, T5Args from torch import cuda model = T5Model("t5", "thanathorn/mt5-cpe-kmutt-thai-sentence-sum", use_cuda=cuda.is_available()) sentence = "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด" prediction = model.predict([sentence]) print(prediction[0]) ``` (See the example on <a href="https://colab.research.google.com/drive/1XiNkZLgy1USwHYFVf_nEzOSWbHGSnYdg?usp=sharing">Google Colab</a>) ### Score <ul> <li>ROUGE-1: 61.7805</li> <li>ROUGE-2: 45.9689</li> <li>ROUGE-L: 59.3542</li> </ul> ### Intended uses & limitations <ul> <li>You can use this model for Thai sentence text summarization.</li> <li>Not intended to use with paragraph text.</li> </ul>
Harshveer/autonlp-formality_scoring_2-32597818
0683aa8fe9feb6b9824e38a256f6258aaaf79f34
2021-11-14T06:46:39.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:Harshveer/autonlp-data-formality_scoring_2", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
Harshveer
null
Harshveer/autonlp-formality_scoring_2-32597818
494
null
transformers
2,343
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Harshveer/autonlp-data-formality_scoring_2 co2_eq_emissions: 8.655894631203154 --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 32597818 - CO2 Emissions (in grams): 8.655894631203154 ## Validation Metrics - Loss: 0.5410276651382446 - MSE: 0.5410276651382446 - MAE: 0.5694561004638672 - R2: 0.6830431129198475 - RMSE: 0.735545814037323 - Explained Variance: 0.6834385395050049 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Harshveer/autonlp-formality_scoring_2-32597818 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
bayartsogt/albert-mongolian
33be497e1f7f561b0b1d58880d523be723830771
2021-03-17T19:01:07.000Z
[ "pytorch", "tf", "albert", "fill-mask", "mn", "arxiv:1904.00962", "transformers", "autotrain_compatible" ]
fill-mask
false
bayartsogt
null
bayartsogt/albert-mongolian
494
2
transformers
2,344
---
language: mn
---

# ALBERT-Mongolian
[pretraining repo link](https://github.com/bayartsogt-ya/albert-mongolian)

## Model description
Here we provide a pretrained ALBERT model and a trained SentencePiece model for Mongolian text. The training data consists of the Mongolian Wikipedia corpus from Wikipedia Downloads and the Mongolian News corpus.

## Evaluation Result:
```
loss = 1.7478163
masked_lm_accuracy = 0.6838185
masked_lm_loss = 1.6687671
sentence_order_accuracy = 0.998125
sentence_order_loss = 0.007942731
```

## Fine-tuning Result on Eduge Dataset:
```
                 precision    recall  f1-score   support

   байгал орчин      0.85      0.83      0.84       999
      боловсрол      0.80      0.80      0.80       873
          спорт      0.98      0.98      0.98      2736
      технологи      0.88      0.93      0.91      1102
        улс төр      0.92      0.85      0.89      2647
     урлаг соёл      0.93      0.94      0.94      1457
          хууль      0.89      0.87      0.88      1651
    эдийн засаг      0.83      0.88      0.86      2509
     эрүүл мэнд      0.89      0.92      0.90      1159

       accuracy                          0.90     15133
      macro avg      0.89      0.89      0.89     15133
   weighted avg      0.90      0.90      0.90     15133
```

## Reference
1. [ALBERT - official repo](https://github.com/google-research/albert)
2. [WikiExtractor](https://github.com/attardi/wikiextractor)
3. [Mongolian BERT](https://github.com/tugstugi/mongolian-bert)
4. [ALBERT - Japanese](https://github.com/alinear-corp/albert-japanese)
5. [Mongolian Text Classification](https://github.com/sharavsambuu/mongolian-text-classification)
6. [You's paper](https://arxiv.org/abs/1904.00962)

## Citation
```
@misc{albert-mongolian,
  author = {Bayartsogt Yadamsuren},
  title = {ALBERT Pretrained Model on Mongolian Datasets},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/bayartsogt-ya/albert-mongolian/}}
}
```

## For More Information
Please contact [email protected]
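The card does not include a `transformers` loading snippet. A minimal fill-mask sketch (assuming the hosted tokenizer files load with the standard pipeline; the example sentence is only an illustration and can be replaced with any Mongolian text containing the mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bayartsogt/albert-mongolian")

# illustrative Mongolian sentence; the mask token is taken from the tokenizer itself
sentence = f"Монгол улсын нийслэл {fill_mask.tokenizer.mask_token} хот юм."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], prediction["score"])
```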
bigjoedata/rockbot355M
c43da88f2a0221ca19bdc99d81cbcc05d65474eb
2021-05-21T14:17:25.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
bigjoedata
null
bigjoedata/rockbot355M
494
null
transformers
2,345
# 🎸 🥁 Rockbot 🎤 🎧 A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock). **Instructions:** Type in a fake song title, pick an artist, click "Generate". Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable. Oh, and generation is resource intense and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot. Just have fun. [Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed [Github](https://github.com/bigjoedata/rockbot) [GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot) [DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic. 🎹 🪘 🎷 🎺 🪗 🪕 🎻 ## Background With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M token model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate) ### Full Tech Stack [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.). [Python](https://www.python.org/). [Streamlit](https://www.streamlit.io/). [GPT-2](https://openai.com/blog/better-language-models/). [AITextGen](https://github.com/minimaxir/aitextgen). [Pandas](https://pandas.pydata.org/). [LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/). [Google Colab](https://colab.research.google.com/) (GPU based Training). [Knime](https://www.knime.com/) (data cleaning). ## How to Use The Model Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation. ### Training Parameters Used ai.train("lyrics.txt", line_by_line=False, from_cache=False, num_steps=10000, generate_every=2000, save_every=2000, save_gdrive=False, learning_rate=1e-3, batch_size=3, eos_token="<|endoftext|>", #fp16=True ) ### To Use Generate With Prompt (Use Title Case): Song Name BY Artist Name
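A minimal generation sketch using that prompt format with the plain `transformers` pipeline follows; the song title, artist, and sampling settings are illustrative, not the values used in the hosted demo:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="bigjoedata/rockbot355M")

# prompt format from above, in Title Case: "Song Name BY Artist Name"
prompt = "Midnight Gasoline BY The Rolling Stones"
lyrics = generator(prompt, max_length=200, do_sample=True, temperature=0.9, top_p=0.95)
print(lyrics[0]["generated_text"])
```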
superb/wav2vec2-base-superb-sid
73365f1ed139a3d88fb8a72b98ecac3a38a1fa0e
2021-11-04T16:03:40.000Z
[ "pytorch", "wav2vec2", "audio-classification", "en", "dataset:superb", "arxiv:2105.01051", "transformers", "speech", "audio", "license:apache-2.0" ]
audio-classification
false
superb
null
superb/wav2vec2-base-superb-sid
494
null
transformers
2,346
--- language: en datasets: - superb tags: - speech - audio - wav2vec2 - audio-classification widget: - example_title: VoxCeleb Speaker id10003 src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav - example_title: VoxCeleb Speaker id10004 src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav license: apache-2.0 --- # Wav2Vec2-Base for Speaker Identification ## Model description This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1). The base model is [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing. The widely used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification). ## Usage examples You can use the model via the Audio Classification pipeline: ```python from datasets import load_dataset from transformers import pipeline dataset = load_dataset("anton-l/superb_demo", "si", split="test") classifier = pipeline("audio-classification", model="superb/wav2vec2-base-superb-sid") labels = classifier(dataset[0]["file"], top_k=5) ``` Or use the model directly: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "si", split="test") dataset = dataset.map(map_to_array) model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-sid") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-sid") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits predicted_ids = torch.argmax(logits, dim=-1) labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**test**| `0.7518` | `0.7518` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
crystallyzing/DialoGPT-small-nishikiyama
e2268eaff68c5ac9dc1e475d7b3362f22c5f67ff
2022-06-21T00:05:00.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
crystallyzing
null
crystallyzing/DialoGPT-small-nishikiyama
494
null
transformers
2,347
--- tags: - conversational --- # Nishiki Chatbot Model
Norod78/hebrew-gpt_neo-tiny
61d3dddbbf95e3096e6a4249dc5d7fe396de529a
2022-07-04T07:27:46.000Z
[ "pytorch", "jax", "gpt_neo", "text-generation", "he", "transformers", "license:mit" ]
text-generation
false
Norod78
null
Norod78/hebrew-gpt_neo-tiny
493
null
transformers
2,348
--- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "עוד בימי קדם" - text: "קוראים לי דורון ואני מעוניין ל" - text: "קוראים לי איציק ואני חושב ש" - text: "החתול שלך מאוד חמוד ו" license: mit --- # hebrew-gpt_neo-tiny Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each was trained on a TPUv3-8 which was made avilable to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ) 2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ## Training Config Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) <BR> ## Usage ### Google Colab Notebook Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.2 transformers==4.6.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids != None: max_len += len(encoded_prompt[0]) if max_len > 1024: max_len = 1024 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
asafaya/bert-large-arabic
980a2eb3a4b8b3eb156b82ae30cc9768ef3794de
2021-05-19T00:07:46.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "dataset:oscar", "dataset:wikipedia", "transformers", "autotrain_compatible" ]
fill-mask
false
asafaya
null
asafaya/bert-large-arabic
492
null
transformers
2,349
---
language: ar
datasets:
- oscar
- wikipedia
---

# Arabic BERT Large Model

Pretrained BERT Large language model for Arabic

_If you use this model in your work, please cite this paper:_

```
@inproceedings{safaya-etal-2020-kuisail,
    title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
    author = "Safaya, Ali and Abdullatif, Moutasem and Yuret, Deniz",
    booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
    month = dec,
    year = "2020",
    address = "Barcelona (online)",
    publisher = "International Committee for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
    pages = "2054--2059",
}
```

## Pretraining Corpus

The `arabic-bert-large` model was pretrained on ~8.2 billion words:

- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)

and other Arabic resources which sum up to ~95GB of text.

__Notes on training data:__

- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lowered as a preprocessing step, since Arabic characters do not have upper or lower case, there is no cased or uncased version of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectical Arabic too.

## Pretraining details

- This model was trained using Google BERT's github [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M training steps with a batch size of 128, instead of 1M steps with a batch size of 256.

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library. You can then use it directly by initializing it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-large-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-large-arabic")
```

## Results

For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)

## Acknowledgement

Thanks to Google for providing a free TPU for the training process and to Huggingface for hosting this model on their servers 😊
facebook/hubert-xlarge-ls960-ft
8b565fd5c194610f72ff01f4fecf7ccde17f9638
2022-05-24T10:44:12.000Z
[ "pytorch", "tf", "hubert", "automatic-speech-recognition", "en", "dataset:libri-light", "dataset:librispeech_asr", "arxiv:2106.07447", "transformers", "speech", "audio", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
facebook
null
facebook/hubert-xlarge-ls960-ft
492
7
transformers
2,350
--- language: en datasets: - libri-light - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 model-index: - name: hubert-large-ls960-ft results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.8 --- # Hubert-Extra-Large-Finetuned [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The extra large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. The model is a fine-tuned version of [hubert-xlarge-ll60k](https://huggingface.co/facebook/hubert-xlarge-ll60k). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage The model can be used for automatic-speech-recognition as follows: ```python import torch from transformers import Wav2Vec2Processor, HubertForCTC from datasets import load_dataset processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-xlarge-ls960-ft") model = HubertForCTC.from_pretrained("facebook/hubert-xlarge-ls960-ft") ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.decode(predicted_ids[0]) # ->"A MAN SAID TO THE UNIVERSE SIR I EXIST" ```
DMetaSoul/sbert-chinese-general-v2-distill
7f91a6d64ffa5a0031587f9738dd603219abf8c3
2022-04-02T09:58:33.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers", "semantic-search", "chinese" ]
sentence-similarity
false
DMetaSoul
null
DMetaSoul/sbert-chinese-general-v2-distill
492
null
sentence-transformers
2,351
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---

# DMetaSoul/sbert-chinese-general-v2-distill

This model is a distilled version (only 4 BERT layers) of our previously released [open-source general-purpose semantic matching model](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2). It targets **general semantic matching** scenarios and, in our experiments, **generalizes better across tasks while encoding noticeably faster**.

Serving a large offline-trained model directly in production places heavy demands on compute resources and makes it hard to meet latency and throughput requirements, so we use distillation to make the model lightweight. After distilling the 12-layer BERT down to 4 layers, the parameter count shrinks to 44% of the original, latency is roughly halved, throughput doubles, and accuracy drops by about 6% (see the Evaluation section below for detailed results).

# Usage

## 1. Sentence-Transformers

To use this model through the [sentence-transformers](https://www.SBERT.net) framework, first install it:

```
pip install -U sentence-transformers
```

Then load the model and extract sentence embeddings with the following code:

```python
from sentence_transformers import SentenceTransformer

sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2-distill')
embeddings = model.encode(sentences)
print(embeddings)
```

## 2. HuggingFace Transformers

If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows:

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation

Here we mainly compare against the teacher model from which this checkpoint was distilled:

*Performance:*

| | Teacher | Student | Gap |
| ---------- | --------------------- | ------------------- | ----- |
| Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost | 23s | 12s | -47% |
| Latency | 38ms | 20ms | -47% |
| Throughput | 418 sentence/s | 791 sentence/s | 1.9x |

*Accuracy:*

| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher** | 77.19% | 72.59% | 36.79% | 76.91% | 49.62% | 16.24% | 63.15% | 56.07% |
| **Student** | 76.49% | 73.33% | 26.46% | 64.26% | 46.02% | 11.83% | 52.45% | 50.12% |
| **Gap** (abs.) | - | - | - | - | - | - | - | -5.95% |

*Measured on 10,000 test examples with a V100 GPU, batch_size=16, max_seq_len=256*

## Citing & Authors

E-mail: [email protected]
tscholak/1wnr382e
44847d47b5b59789aadc86c7f88d2574cf1f284c
2022-01-10T21:50:25.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:spider", "arxiv:2109.05093", "transformers", "text2sql", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
tscholak
null
tscholak/1wnr382e
490
null
transformers
2,352
--- language: - en thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab" tags: - text2sql widget: - "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id" license: "apache-2.0" datasets: - spider metrics: - spider --- ## tscholak/1wnr382e Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [T5-Large](https://huggingface.co/t5-large). ### Training Data The model has been fine-tuned on the 7000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves Spider's zero-shot text-to-SQL translation task, and that means that it can generalize to unseen SQL databases. ### Training Objective This model was initialized with [T5-Large](https://huggingface.co/t5-large) and fine-tuned with the text-to-text generation objective. Questions are always grounded in a database schema, and the model is trained to predict the SQL query that would be used to answer the question. The input to the model is composed of the user's natural language question, the database identifier, and a list of tables and their columns: ``` [question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... ``` The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's question: ``` [db_id] | [sql] ``` ### Performance Out of the box, this model achieves 65.3 % exact-set match accuracy and 67.2 % execution accuracy on the Spider development set. Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **69.1 %** exact-set match accuracy and **72.9 %** execution accuracy on the Spider development set. ### Usage Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model. ### References 1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) 2. [Official PICARD code](https://github.com/ElementAI/picard) ### Citation ```bibtex @inproceedings{Scholak2021:PICARD, author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau}, title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.779", pages = "9895--9901", } ```
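The serialization format described above can be exercised directly with the standard seq2seq API. The sketch below reuses the widget example from this card and runs plain beam-free greedy decoding, i.e. without PICARD constrained decoding, so outputs are not guaranteed to be valid SQL:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tscholak/1wnr382e")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/1wnr382e")

# "[question] | [db_id] | [table] : [column], ..." as described above
question = (
    "How many singers do we have? | concert_singer | "
    "stadium : stadium_id, location, name, capacity, highest, lowest, average | "
    "singer : singer_id, name, country, song_name, song_release_year, age, is_male | "
    "concert : concert_id, concert_name, theme, stadium_id, year | "
    "singer_in_concert : concert_id, singer_id"
)
input_ids = tokenizer(question, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# the model is trained to output "[db_id] | [sql]"
```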
codeparrot/codeparrot-small-multi
7753edbe82562bf23c6ff15ad46ce6f0f2307139
2022-07-15T10:56:13.000Z
[ "pytorch", "gpt2", "text-generation", "code", "dataset:codeparrot/github-code-clean", "dataset:openai_humaneval", "transformers", "generation", "license:apache-2.0" ]
text-generation
false
codeparrot
null
codeparrot/codeparrot-small-multi
490
null
transformers
2,353
--- language: - code license: apache-2.0 tags: - code - gpt2 - generation datasets: - "codeparrot/github-code-clean" - "openai_humaneval" metrics: - "evaluate-metric/code_eval" --- # CodeParrot-Multi 🦜 (small) CodeParrot-Multi 🦜 is a GPT-2 model (110M parameters) trained to generate code in 9 programming languages: "Java", "JavaScript", "PHP", "Python", "C#", "C++", "GO", "Ruby" and "TypeScript". ## Usage You can load the CodeParrot-Multi model and tokenizer directly in `transformers`: ```Python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-multi") model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small-multi") inputs = tokenizer("def hello_world():", return_tensors="pt") outputs = model(**inputs) ``` or with a `pipeline`: ```Python from transformers import pipeline pipe = pipeline("text-generation", model="codeparrot/codeparrot-small-multi") outputs = pipe("def hello_world():") ``` ## Training The model was trained on the small [Github code small](https://huggingface.co/datasets/loubnabnl/github-small-near-dedup) after near deduplication, a subset of [Github code dataset](https://huggingface.co/datasets/codeparrot/github-code-clean) with the following settings: |Config|Value| |-------|-----| |Batch size| 192 | |Context size| 1024 | |Training steps| 300'000| |Gradient accumulation| 2| |Gradient checkpointing| False| |Learning rate| 5e-4 | |Weight decay | 0.1 | |Warmup steps| 2000 | |Schedule| Cosine | The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 58 billion tokens. ## Performance We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges: | Metric | Value | |-------|-----| |pass@1 | --% | |pass@10 | --% | |pass@100 | --% | The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probability that at least one out of k generations passes the tests. ## Resources - Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
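As an illustration of the metric itself, the snippet below sketches how pass@k can be computed with the `evaluate-metric/code_eval` metric listed above. The toy problem and candidate solutions are made up, and `HF_ALLOW_CODE_EVAL=1` must be set because the metric executes generated code:

```python
# Hypothetical toy example of the pass@k computation; a real evaluation would use
# HumanEval problems and candidates sampled from the model.
import os
os.environ["HF_ALLOW_CODE_EVAL"] = "1"  # code_eval runs untrusted code, so opt in explicitly

from evaluate import load

code_eval = load("code_eval")
test_cases = ["assert add(2, 3) == 5"]
candidates = [["def add(a, b):\n    return a + b", "def add(a, b):\n    return a * b"]]

pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```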
HooshvareLab/gpt2-fa
9c1fa5edb93f30ca93df0d1f1abcc44bcc73e5d1
2021-05-21T10:51:23.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "fa", "transformers", "license:apache-2.0" ]
text-generation
false
HooshvareLab
null
HooshvareLab/gpt2-fa
489
null
transformers
2,354
--- language: fa license: apache-2.0 widget: - text: "در یک اتفاق شگفت انگیز، پژوهشگران" - text: "گرفتگی بینی در کودکان و به‌خصوص نوزادان باعث می‌شود" - text: "امیدواریم نوروز امسال سالی" --- # ParsGPT2 ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ParsGPT2, author = {Hooshvare Team}, title = {ParsGPT2 the Persian version of GPT2}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/hooshvare/parsgpt}}, } ``` ## Questions? Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo.
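The card does not include a usage snippet, so here is a minimal sketch using the standard `text-generation` pipeline; the Persian prompt is taken from the widget examples above and the sampling parameters are arbitrary choices, not settings from the authors:

```python
# Minimal generation sketch (not part of the original card).
from transformers import pipeline

generator = pipeline("text-generation", model="HooshvareLab/gpt2-fa")
print(generator("در یک اتفاق شگفت انگیز، پژوهشگران", max_length=50, do_sample=True, top_p=0.95))
```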
IMSyPP/hate_speech_en
ffe54334b9df65e704492d2d660610dd848658d6
2022-05-16T06:13:38.000Z
[ "pytorch", "bert", "text-classification", "en", "transformers", "license:mit" ]
text-classification
false
IMSyPP
null
IMSyPP/hate_speech_en
489
1
transformers
2,355
--- widget: - text: "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University." language: - en license: mit --- # Hate Speech Classifier for Social Media Content in English Language A monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model. ## Tokenizer During training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference. ## Model output The model classifies each input into one of four distinct classes: * 0 - acceptable * 1 - inappropriate * 2 - offensive * 3 - violent
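## Usage

A minimal sketch (not part of the original card) using the standard `text-classification` pipeline; the exact label strings returned depend on the checkpoint's `id2label` mapping, which should correspond to the 0–3 scheme above:

```python
# Minimal classification sketch; the example sentence is the widget text above.
from transformers import pipeline

classifier = pipeline("text-classification", model="IMSyPP/hate_speech_en")
print(classifier("My name is Mark and I live in London. I am a postgraduate student at Queen Mary University."))
```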
dbmdz/bert-base-german-europeana-uncased
f703f5a27791d5c8e083eab510563083fb7ed18d
2021-05-19T14:55:07.000Z
[ "pytorch", "tf", "jax", "bert", "de", "transformers", "historic german", "license:mit" ]
null
false
dbmdz
null
dbmdz/bert-base-german-europeana-uncased
489
null
transformers
2,356
--- language: de license: mit tags: - "historic german" --- # 🤗 + 📚 dbmdz BERT models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources German Europeana BERT models 🎉 # German Europeana BERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ------------------------------------------ | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-german-europeana-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/vocab.txt) ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 2.3 our German Europeana BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-uncased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-uncased") ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
naver-clova-ocr/bros-large-uncased
a644113dc6c2b6dd53f99f94feb7ed4a5e3fdf71
2022-04-05T13:57:07.000Z
[ "pytorch", "bros", "arxiv:2108.04539", "transformers" ]
null
false
naver-clova-ocr
null
naver-clova-ocr/bros-large-uncased
489
1
transformers
2,357
# BROS GitHub: https://github.com/clovaai/bros ## Introduction BROS (BERT Relying On Spatiality) is a pre-trained language model focusing on text and layout for better key information extraction from documents.<br> Given the OCR results of the document image, which are text and bounding box pairs, it can perform various key information extraction tasks, such as extracting an ordered item list from receipts.<br> For more details, please refer to our paper: BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents<br> Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park<br> AAAI 2022 - Main Technical Track [[arXiv]](https://arxiv.org/abs/2108.04539) ## Pre-trained models | name | # params | Hugging Face - Models | |---------------------|---------:|-------------------------------------------------------------------------------------------------| | bros-base-uncased | < 110M | [naver-clova-ocr/bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased) | | bros-large-uncased (**this**) | < 340M | [naver-clova-ocr/bros-large-uncased](https://huggingface.co/naver-clova-ocr/bros-large-uncased) |
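## Usage sketch

The official repository above is the reference implementation. Purely as an illustration, and assuming a recent `transformers` release that ships the BROS classes (`BrosProcessor`, `BrosModel`) and a checkpoint config compatible with them, encoding OCR text with dummy bounding boxes might look like this:

```python
# Illustration only: assumes built-in BROS support in transformers and a compatible config.
# The bounding boxes are random stand-ins for real normalized box coordinates.
import torch
from transformers import BrosProcessor, BrosModel

processor = BrosProcessor.from_pretrained("naver-clova-ocr/bros-large-uncased")
model = BrosModel.from_pretrained("naver-clova-ocr/bros-large-uncased")

encoding = processor("his name is mike", add_special_tokens=False, return_tensors="pt")
encoding["bbox"] = torch.rand(1, encoding["input_ids"].shape[-1], 8)  # one 8-dim box per token

with torch.no_grad():
    outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```

In a real key information extraction pipeline, the boxes come from the OCR engine and a task-specific head is added on top, as in the repository linked above.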
oigele/Fb_improved_zeroshot
d68aaffe80f68f2a820944c59a92b2e285741725
2021-11-29T11:51:49.000Z
[ "pytorch", "bart", "text-classification", "dataset:multi_nli", "arxiv:1909.00161", "transformers", "zero-shot-classification" ]
zero-shot-classification
false
oigele
null
oigele/Fb_improved_zeroshot
488
4
transformers
2,358
--- pipeline_tag: zero-shot-classification datasets: - multi_nli widget: - text: "natural language processing" candidate_labels: "Location & Address, Employment, Organizational, Name, Service, Studies, Science" hypothesis_template: "This is {}." --- # Fb_improved_zeroshot Zero-Shot Model designed to classify academic search logs in German and English. Developed by students at ETH Zürich. This model was trained using the [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli/) checkpoint provided by Meta on Huggingface. It was then fine-tuned to suit the needs of this project. ## NLI-based Zero-Shot Text Classification This method is based on Natural Language Inference (NLI), see [Yin et al.](https://arxiv.org/abs/1909.00161). The following tutorials are taken from the model card of [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli/). #### With the zero-shot classification pipeline The model can be loaded with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="oigele/Fb_improved_zeroshot") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = "natural language processing" candidate_labels = ['Location & Address', 'Employment', 'Organizational', 'Name', 'Service', 'Studies', 'Science'] classifier(sequence_to_classify, candidate_labels) ``` If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently: ```python candidate_labels = ['Location & Address', 'Employment', 'Organizational', 'Name', 'Service', 'Studies', 'Science'] classifier(sequence_to_classify, candidate_labels, multi_class=True) ``` #### With manual PyTorch ```python # pose sequence as a NLI premise and label as a hypothesis from transformers import AutoModelForSequenceClassification, AutoTokenizer nli_model = AutoModelForSequenceClassification.from_pretrained('oigele/Fb_improved_zeroshot') tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli') sequence = "natural language processing" label = "Science" premise = sequence hypothesis = f'This is {label}.' # run through model pre-trained on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation='only_first') logits = nli_model(x)[0] # we throw away "neutral" (dim 1) and take the probability of # "entailment" (2) as the probability of the label being true entail_contradiction_logits = logits[:,[0,2]] probs = entail_contradiction_logits.softmax(dim=1) prob_label_is_true = probs[:,1] ```
imxly/sentence_roberta_wwm_ext
28b1082b623326456cdec17ee4b521e21e823434
2021-05-19T20:20:32.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
imxly
null
imxly/sentence_roberta_wwm_ext
487
null
transformers
2,359
Entry not found
KBLab/bert-base-swedish-cased-pos
eae7acf6c32812794b8edd93a944c6b1bd1e402a
2021-05-18T21:20:59.000Z
[ "pytorch", "tf", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
KBLab
null
KBLab/bert-base-swedish-cased-pos
486
2
transformers
2,360
Entry not found
lirondos/anglicisms-spanish-mbert
11e819e8161f1162b2b09d253dde4a927a9dc3e0
2022-05-16T14:03:29.000Z
[ "pytorch", "bert", "token-classification", "es", "dataset:coalas", "transformers", "anglicisms", "loanwords", "borrowing", "codeswitching", "arxiv:2203.16169", "license:cc-by-4.0", "autotrain_compatible" ]
token-classification
false
lirondos
null
lirondos/anglicisms-spanish-mbert
486
null
transformers
2,361
--- language: - es license: cc-by-4.0 tags: - anglicisms # Example: audio - loanwords # Example: automatic-speech-recognition - borrowing # Example: speech - codeswitching # Example to specify a library: allennlp - arxiv:2203.16169 datasets: - coalas # Example: common_voice. Use dataset id from https://hf.co/datasets widget: - text: "Las fake news sobre la celebrity se reprodujeron por los 'mass media' en prime time." - text: "Me gusta el cine noir y el anime." - text: "Benching, estar en el banquillo de tu 'crush' mientras otro juega de titular." - text: "Recetas de noviembre para el batch cooking." - text: "Utilizaron técnicas de machine learning, big data o blockchain." --- # anglicisms-spanish-mbert This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) on Spanish newswire. This model labels words of foreign origin (fundamentally from English) used in Spanish language, words such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*. The model is a fine-tuned version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings. The model considers two labels: * ``ENG``: For English lexical borrowings (*smartphone*, *online*, *podcast*) * ``OTHER``: For lexical borrowings from any other language (*boutique*, *anime*, *umami*) The model uses BIO encoding to account for multitoken borrowings. **⚠ This is not the best-performing model for this task.** For the best-performing model (F1=85.76) see [Flair model](https://huggingface.co/lirondos/anglicisms-spanish-flair-cs). ## Metrics (on the test set) The following table summarizes the results obtained on the test set of the [COALAS](https://github.com/lirondos/coalas/) corpus. | LABEL | Precision | Recall | F1 | |:-------|-----:|-----:|---------:| | ALL | 88.09 | 79.46 | 83.55 | | ENG | 88.44 | 82.16 | 85.19 | | OTHER | 37.5 | 6.52 | 11.11 | ## Dataset This model was trained on [COALAS](https://github.com/lirondos/coalas/), a corpus of Spanish newswire annotated with unassimilated lexical borrowings. The corpus contains 370,000 tokens and includes various written media written in European Spanish. The test set was designed to be as difficult as possible: it covers sources and dates not seen in the training set, includes a high number of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense (20 borrowings per 1,000 tokens). |Set | Tokens | ENG | OTHER | Unique | |:-------|-----:|-----:|---------:|---------:| |Training |231,126 |1,493 | 28 |380 | |Development |82,578 |306 |49 |316| |Test |58,997 |1,239 |46 |987| |**Total** |372,701 |3,038 |123 |1,683 | ## More info More information about the dataset, model experimentation and error analysis can be found in the paper: *[Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/)*. ## How to use ``` from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("lirondos/anglicisms-spanish-mbert") model = AutoModelForTokenClassification.from_pretrained("lirondos/anglicisms-spanish-mbert") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = example = "Buscamos data scientist para proyecto de machine learning." 
borrowings = nlp(example) print(borrowings) ``` ## Citation If you use this model, please cite the following reference: ``` @inproceedings{alvarez-mellado-lignos-2022-detecting, title = "Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling", author = "{\'A}lvarez-Mellado, Elena and Lignos, Constantine", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.268", pages = "3868--3888", abstract = "This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.", } ```
FangLee/DialoGPT-small-Kirito
b367d8ac8cbfabbaeb96bfd98a3f4550687daa99
2021-09-04T14:25:26.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
FangLee
null
FangLee/DialoGPT-small-Kirito
485
null
transformers
2,362
--- tags: - conversational --- @Kirito DialoGPT Small Model
filco306/gpt2-base-style-paraphraser
e320d414ae5ef9a893c4a6bc3604117f9e436c53
2021-08-28T19:27:41.000Z
[ "pytorch", "text-generation", "arxiv:2010.05700", "transformers" ]
text-generation
false
filco306
null
filco306/gpt2-base-style-paraphraser
485
2
transformers
2,363
# GPT2 base style transfer paraphraser This is the trained base-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author. ## Citation If you found this model useful, please cite the original work: ``` @inproceedings{style20, author={Kalpesh Krishna and John Wieting and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation}, } ```
Helsinki-NLP/opus-mt-th-fr
b9e7a1b2d0a2aa9c1cc4123c37dcef4b13d41c15
2021-09-11T10:48:01.000Z
[ "pytorch", "marian", "text2text-generation", "th", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-th-fr
484
null
transformers
2,364
--- tags: - translation license: apache-2.0 --- ### opus-mt-th-fr * source languages: th * target languages: fr * OPUS readme: [th-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/th-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.th.fr | 20.4 | 0.363 |
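A minimal usage sketch (not part of the original card), using the standard translation pipeline; the Thai example sentence is made up:

```python
# Minimal translation sketch for Thai -> French.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-th-fr")
print(translator("ฉันรักคุณ"))  # Thai input, French output
```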
deepset/bert-base-german-cased-hatespeech-GermEval18Coarse
9423036452a34960b227e787d8fd86063c6b87ad
2021-05-19T15:25:01.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers", "license:cc-by-4.0" ]
text-classification
false
deepset
null
deepset/bert-base-german-cased-hatespeech-GermEval18Coarse
484
6
transformers
2,365
--- license: cc-by-4.0 --- This is a German BERT v1 model (https://deepset.ai/german-bert) fine-tuned for hate speech detection on the GermEval18Coarse dataset.
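A minimal usage sketch (not part of the original card); the German example sentence is made up and the returned label strings depend on the checkpoint's `id2label` mapping:

```python
# Minimal classification sketch.
from transformers import pipeline

classifier = pipeline("text-classification", model="deepset/bert-base-german-cased-hatespeech-GermEval18Coarse")
print(classifier("Das ist ein ganz normaler Satz."))
```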
ELiRF/mbart-large-cc25-dacsa-es
c0f9e6d88fc2f865327cb63898186036944d204e
2022-07-11T17:34:09.000Z
[ "pytorch", "mbart", "text2text-generation", "es", "arxiv:2001.08210", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
ELiRF
null
ELiRF/mbart-large-cc25-dacsa-es
484
null
transformers
2,366
--- language: es tags: - summarization widget: - text: "La Universitat Politècnica de València (UPV), a través del proyecto Atenea “plataforma de mujeres, arte y tecnología” y en colaboración con las compañías tecnológicas Metric Salad y Zetalab, ha digitalizado y modelado en 3D para la 35ª edición del Festival Dansa València, que se celebra del 2 al 10 de abril, la primera pieza de danza en un metaverso específico.La pieza No es amor, dirigida por Lara Misó, forma parte de la programación de esta edición del Festival Dansa València y explora la figura geométrica del círculo desde todas sus perspectivas: espacial, corporal y compositiva. No es amor está inspirada en el trabajo de la artista japonesa Yayoi Kusama y mira de cerca las diferentes facetas de una obsesión. Así da cabida a la insistencia, la repetición, el trastorno, la hipnosis y la liberación. El proceso de digitalización, materializado por Metric Salad y ZetaLab, ha sido complejo respecto a otros ya realizados debido al enorme desafío que conlleva el modelado en 3D de cuerpos en movimiento al ritmo de la composición de la obra. El objetivo era generar una experiencia lo más realista posible y fidedigna de la original para que el resultado final fuera un proceso absolutamente inmersivo. Así, el metaverso está compuesto por figuras modeladas en 3D junto a cuatro proyecciones digitalizadas en pantallas flotantes con las que el usuario podrá interactuar según se vaya acercando, bien mediante los comandos del ordenador, bien a través de gafas de realidad virtual. El objetivo es que cuando el usuario se acerque a cada una de las proyecciones tenga la sensación de una inmersión casi completa al fundirse con el contenido audiovisual que le genere una experiencia intimista y muy real." --- # mBART (large-cc25 model), fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset for Spanish The mBART model was presented in [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. The large-cc25 version of the mBART model is pre-trained in 25 languages, including English, Spanish, Italian, and other ones. # Model description The mBART-large-cc25 model has been fine-tuned for abstractive text summarization for Spanish. # Training data The mBART-larges-cc25 model has been fine-tuned on *the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset, specifically with the Spanish articles. The Spanish subset contains 1.802.919 document-summary pairs of Spanish news articles. The DACSA dataset can be requested at the following address: https://xarrador.dsic.upv.es/resources/dacsa # Intended uses & limitations The model can be used for text summarization, especially in news articles. 
# How to use You can use the summarization model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html): ```python from transformers import pipeline summarizer = pipeline("summarization", model="ELiRF/mbart-large-cc25-dacsa-es") ARTICLE = """La Universitat Politècnica de València (UPV), a través del proyecto Atenea “plataforma de mujeres, arte y tecnología” y en colaboración con las compañías tecnológicas Metric Salad y Zetalab, ha digitalizado y modelado en 3D para la 35ª edición del Festival Dansa València, que se celebra del 2 al 10 de abril, la primera pieza de danza en un metaverso específico.La pieza No es amor, dirigida por Lara Misó, forma parte de la programación de esta edición del Festival Dansa València y explora la figura geométrica del círculo desde todas sus perspectivas: espacial, corporal y compositiva. No es amor está inspirada en el trabajo de la artista japonesa Yayoi Kusama y mira de cerca las diferentes facetas de una obsesión. Así da cabida a la insistencia, la repetición, el trastorno, la hipnosis y la liberación. El proceso de digitalización, materializado por Metric Salad y ZetaLab, ha sido complejo respecto a otros ya realizados debido al enorme desafío que conlleva el modelado en 3D de cuerpos en movimiento al ritmo de la composición de la obra. El objetivo era generar una experiencia lo más realista posible y fidedigna de la original para que el resultado final fuera un proceso absolutamente inmersivo. Así, el metaverso está compuesto por figuras modeladas en 3D junto a cuatro proyecciones digitalizadas en pantallas flotantes con las que el usuario podrá interactuar según se vaya acercando, bien mediante los comandos del ordenador, bien a través de gafas de realidad virtual. El objetivo es que cuando el usuario se acerque a cada una de las proyecciones tenga la sensación de una inmersión casi completa al fundirse con el contenido audiovisual que le genere una experiencia intimista y muy real. """ print(summarizer(ARTICLE, truncation=True)) >>>[{'summary_text': "La pieza No es amor, dirigida por Lara Misó, forma parte de la programación de esta edición del Festival Dansa València."}] ``` ### BibTeX entry ```bibtex @inproceedings{segarra-soriano-etal-2022-dacsa, title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles", author = "Segarra Soriano, Encarnaci{\'o}n and Ahuir, Vicent and Hurtado, Llu{\'\i}s-F. and Gonz{\'a}lez, Jos{\'e}", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.434", pages = "5931--5943", abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. 
It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.", } ```
ckiplab/bert-tiny-chinese-ner
a18df36c7f73ae3329877506be48a86c09599e8d
2022-05-10T03:28:12.000Z
[ "pytorch", "bert", "token-classification", "zh", "transformers", "license:gpl-3.0", "autotrain_compatible" ]
token-classification
false
ckiplab
null
ckiplab/bert-tiny-chinese-ner
483
null
transformers
2,367
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributers - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
navteca/roberta-base-squad2
6c7bec0e5e05d24070d598661767d8004c097553
2021-04-06T16:27:48.000Z
[ "pytorch", "jax", "roberta", "question-answering", "en", "dataset:squad_v2", "transformers", "license:mit", "autotrain_compatible" ]
question-answering
false
navteca
null
navteca/roberta-base-squad2
482
null
transformers
2,368
--- datasets: - squad_v2 language: en license: mit pipeline_tag: question-answering tags: - roberta - question-answering --- # Roberta base model for QA (SQuAD 2.0) This model uses [roberta-base](https://huggingface.co/roberta-base). ## Training Data The models have been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. It can be used for question answering task. ## Usage and Performance The trained model can be used like this: ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline # Load model & tokenizer roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-base-squad2') roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-base-squad2') # Get predictions nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer) result = nlp({ 'question': 'How many people live in Berlin?', 'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.' }) print(result) #{ # "answer": "3,520,031" # "end": 36, # "score": 0.96186668, # "start": 27, #} ```
Helsinki-NLP/opus-mt-en-he
6b58caddd6ee489cafb8dd45d0e76a9c9b61de4c
2021-09-09T21:35:50.000Z
[ "pytorch", "rust", "marian", "text2text-generation", "en", "he", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-he
481
1
transformers
2,369
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-he * source languages: en * target languages: he * OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.he | 40.1 | 0.609 |
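A minimal usage sketch (not part of the original card), loading the Marian classes directly; the English example sentence is made up:

```python
# Minimal translation sketch for English -> Hebrew.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```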
speechbrain/asr-crdnn-transformerlm-librispeech
6c7c0a922755a083805630e0c1bfc2258da3fe4c
2021-11-30T00:38:21.000Z
[ "en", "dataset:librispeech", "arxiv:2106.04624", "speechbrain", "automatic-speech-recognition", "CTC", "Attention", "Tranformer", "pytorch", "license:apache-2.0" ]
automatic-speech-recognition
false
speechbrain
null
speechbrain/asr-crdnn-transformerlm-librispeech
481
null
speechbrain
2,370
--- language: "en" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - Tranformer - pytorch - speechbrain license: "apache-2.0" datasets: - librispeech metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # CRDNN with CTC/Attention and RNNLM trained on LibriSpeech This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test clean WER | Test other WER | GPUs | |:-------------:|:--------------:|:--------------:|:--------:| | 05-03-21 | 2.90 | 8.51 | 1xV100 16GB | ## Pipeline description This ASR system is composed of 3 different but linked blocks: 1. Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. 2. Neural language model (Transformer LM) trained on the full 10M words dataset. 3. Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of N blocks of convolutional neural networks with normalization and pooling on the frequency domain. Then, a bidirectional LSTM with projection layers is connected to a final DNN to obtain the final acoustic representation that is given to the CTC and attention decoders. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-transformerlm-librispeech", savedir="pretrained_models/asr-crdnn-transformerlm-librispeech") asr_model.transcribe_file("speechbrain/asr-crdnn-transformerlm-librispeech/example.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ## Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model. ### Training The model was trained with SpeechBrain (Commit hash: 'eca313cc'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/LibriSpeech/ASR/seq2seq python train.py hparams/train_BPE_5000.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1kSwdBT8kDhnmTLzrOPDL77LX_Eq-3Tzl?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
# **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
IIC/dpr-spanish-passage_encoder-squades-base
fa963e0a2626fa6ea5553894d5685cd262cc6382
2022-04-02T15:08:22.000Z
[ "pytorch", "bert", "fill-mask", "es", "dataset:squad_es", "arxiv:2004.04906", "transformers", "sentence similarity", "passage retrieval", "model-index", "autotrain_compatible" ]
fill-mask
false
IIC
null
IIC/dpr-spanish-passage_encoder-squades-base
481
3
transformers
2,371
--- language: - es tags: - sentence similarity # Example: audio - passage retrieval # Example: automatic-speech-recognition datasets: - squad_es metrics: - eval_loss: 0.08608942725107592 - eval_accuracy: 0.9925325215819639 - eval_f1: 0.8805402320715237 - average_rank: 0.27430093209054596 model-index: - name: dpr-spanish-passage_encoder-squades-base results: - task: type: text similarity # Required. Example: automatic-speech-recognition name: text similarity # Optional. Example: Speech Recognition dataset: type: squad_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: squad_es # Required. Example: Common Voice zh-CN args: es # Optional. Example: zh-CN metrics: - type: loss value: 0.08608942725107592 name: eval_loss - type: accuracy value: 0.99 name: accuracy - type: f1 value: 0.88 name: f1 - type: avgrank value: 0.2743 name: avgrank --- [Dense Passage Retrieval](https://arxiv.org/abs/2004.04906) is a set of tools for performing state of the art open-domain question answering. It was initially developed by Facebook and there is an [official repository](https://github.com/facebookresearch/DPR). DPR is intended to retrieve the relevant documents to answer a given question, and is composed of 2 models, one for encoding passages and other for encoding questions. This concrete model is the one used for encoding passages. Regarding its use, this model should be used to vectorize the documents in the database of a question answering system in Spanish. Then, when a new question enters, [the question encoder should be used](https://huggingface.co/avacaondata/dpr-spanish-question_encoder-squades-base) to encode it, and then we compare that encoding with the encodings of the database to find the most similar documents, which then should be used for either extracting the answer or generating it. For training the model, we used the spanish version of SQUAD, [SQUAD-ES](https://huggingface.co/datasets/squad_es), with which we created positive and negative examples for the model. Example of use: ```python from transformers import DPRContextEncoder, DPRContextEncoderTokenizer model_str = "avacaondata/dpr-spanish-passage_encoder-squades-base" tokenizer = DPRContextEncoderTokenizer.from_pretrained(model_str) model = DPRContextEncoder.from_pretrained(model_str) input_ids = tokenizer("Usain Bolt ganó varias medallas de oro en las Olimpiadas del año 2012", return_tensors="pt")["input_ids"] embeddings = model(input_ids).pooler_output ``` The full metrics of this model on the evaluation split of SQUADES are: ``` evalloss: 0.08608942725107592 acc: 0.9925325215819639 f1: 0.8805402320715237 acc_and_f1: 0.9365363768267438 average_rank: 0.27430093209054596 ``` And the classification report: ``` precision recall f1-score support hard_negative 0.9961 0.9961 0.9961 325878 positive 0.8805 0.8805 0.8805 10514 accuracy 0.9925 336392 macro avg 0.9383 0.9383 0.9383 336392 weighted avg 0.9925 0.9925 0.9925 336392 ``` ### Contributions Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model.
luyaojie/uie-base-en
966f8b1fc4c74e94ab552081605913ad5133cc41
2022-04-15T13:09:21.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
text2text-generation
false
luyaojie
null
luyaojie/uie-base-en
481
null
transformers
2,372
--- license: cc-by-nc-sa-4.0 ---
zuu/grammar-error-correcter
e6b6507ef6e9308d0e344845c2e7486eaaecca5d
2022-06-02T18:10:59.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
zuu
null
zuu/grammar-error-correcter
481
0
transformers
2,373
```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM GED_TOKENIZER = AutoTokenizer.from_pretrained("zuu/grammar-error-correcter") GED_MODEL = AutoModelForSeq2SeqLM.from_pretrained("zuu/grammar-error-correcter") # Incorrect text incorrect_text = 'young children should avoid exposure to contageous disease' # Tokenize text tokens= GED_TOKENIZER( [incorrect_text], padding=True, return_tensors='pt' ) corrections = GED_MODEL.generate(**tokens) corrections = GED_TOKENIZER.batch_decode( corrections, skip_special_tokens=True ) ```
cambridgeltl/simctg_lccc_dialogue
45b51e1c98f8dc6f0b65a2ade9bdff6d9a128b79
2022-06-25T19:21:55.000Z
[ "pytorch", "gpt2", "text-generation", "arxiv:2008.03946", "arxiv:2202.06417", "transformers" ]
text-generation
false
cambridgeltl
null
cambridgeltl/simctg_lccc_dialogue
480
null
transformers
2,374
This model provides a Chinese GPT-2 language model trained with SimCTG on the LCCC benchmark [(Wang et al., 2020)](https://arxiv.org/pdf/2008.03946v2.pdf) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417). We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation. ## 1. Installation of SimCTG: ```yaml pip install simctg --upgrade ``` ## 2. Initialize SimCTG Model: ```python import torch # load SimCTG language model from simctg.simctggpt import SimCTGGPT model_name = r'cambridgeltl/simctg_lccc_dialogue' model = SimCTGGPT(model_name) model.eval() tokenizer = model.tokenizer eos_token = '[SEP]' eos_token_id = tokenizer.convert_tokens_to_ids([eos_token])[0] ``` ## 3. Prepare the Text Prefix: ```python context_list = ['刺猬很可爱!以前别人送了只没养,味儿太大!', '是很可爱但是非常臭', '是啊,没办法养', '那个怎么养哦不会扎手吗'] prefix_text = eos_token.join(context_list).strip(eos_token) + eos_token print ('Prefix is: {}'.format(prefix_text)) tokens = tokenizer.tokenize(prefix_text) input_ids = tokenizer.convert_tokens_to_ids(tokens) input_ids = torch.LongTensor(input_ids).view(1,-1) ``` ## 4. Generate Text with Contrastive Search: ```python beam_width, alpha, decoding_len = 5, 0.6, 64 output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width, alpha=alpha, decoding_len=decoding_len, end_of_sequence_token_id=eos_token_id, early_stop=True) print("Output:\n" + 100 * '-') print(''.join(tokenizer.decode(output))) ''' Prefix is: 刺猬很可爱!以前别人送了只没养,味儿太大![SEP]是很可爱但是非常臭[SEP]是啊,没办法养[SEP]那个怎么养哦不会扎手吗[SEP] Output: ---------------------------------------------------------------------------------------------------- 刺猬很可爱!以前别人送了只没养,味儿太大![SEP]是很可爱但是非常臭[SEP]是啊,没办法养[SEP]那个怎么养哦不会扎手吗[SEP]我觉得还好,就是有点臭 ''' ``` For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG). ## 5. Citation: If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks! ```bibtex @article{su2022contrastive, title={A Contrastive Framework for Neural Text Generation}, author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel}, journal={arXiv preprint arXiv:2202.06417}, year={2022} } ```
j-hartmann/emotion-english-roberta-large
ab319b8cfc7ca91478e74bce639ed8b8e0927d0c
2021-08-29T11:48:09.000Z
[ "pytorch", "roberta", "text-classification", "en", "transformers", "sentiment", "emotion", "twitter", "reddit" ]
text-classification
false
j-hartmann
null
j-hartmann/emotion-english-roberta-large
480
1
transformers
2,375
--- language: "en" tags: - roberta - sentiment - emotion - twitter - reddit widget: - text: "Oh wow. I didn't know that." - text: "This movie always makes me cry.." - text: "Oh Happy Day" --- ## Description ℹ With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets and predicts Ekman's 6 basic emotions, plus a neutral class: 1) anger 🤬 2) disgust 🤢 3) fear 😨 4) joy 😀 5) neutral 😐 6) sadness 😭 7) surprise 😲 The model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large). For further details on this emotion model, please refer to the model card of its [DistilRoBERTa](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) version.
Helsinki-NLP/opus-mt-tc-big-fr-en
df6dfc5e22be93169ad457196ad8472ad749f886
2022-06-01T13:01:21.000Z
[ "pytorch", "marian", "text2text-generation", "en", "fr", "transformers", "translation", "opus-mt-tc", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-tc-big-fr-en
480
1
transformers
2,376
--- language: - en - fr tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-fr-en results: - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: flores101-devtest type: flores_101 args: fra eng devtest metrics: - name: BLEU type: bleu value: 46.0 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: multi30k_test_2016_flickr type: multi30k-2016_flickr args: fra-eng metrics: - name: BLEU type: bleu value: 49.7 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: multi30k_test_2017_flickr type: multi30k-2017_flickr args: fra-eng metrics: - name: BLEU type: bleu value: 52.0 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: multi30k_test_2017_mscoco type: multi30k-2017_mscoco args: fra-eng metrics: - name: BLEU type: bleu value: 50.6 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: multi30k_test_2018_flickr type: multi30k-2018_flickr args: fra-eng metrics: - name: BLEU type: bleu value: 44.9 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: news-test2008 type: news-test2008 args: fra-eng metrics: - name: BLEU type: bleu value: 26.5 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newsdiscussdev2015 type: newsdiscussdev2015 args: fra-eng metrics: - name: BLEU type: bleu value: 34.4 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newsdiscusstest2015 type: newsdiscusstest2015 args: fra-eng metrics: - name: BLEU type: bleu value: 40.2 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fra-eng metrics: - name: BLEU type: bleu value: 59.8 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: tico19-test type: tico19-test args: fra-eng metrics: - name: BLEU type: bleu value: 41.3 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newstest2009 type: wmt-2009-news args: fra-eng metrics: - name: BLEU type: bleu value: 30.4 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newstest2010 type: wmt-2010-news args: fra-eng metrics: - name: BLEU type: bleu value: 33.4 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newstest2011 type: wmt-2011-news args: fra-eng metrics: - name: BLEU type: bleu value: 33.8 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newstest2012 type: wmt-2012-news args: fra-eng metrics: - name: BLEU type: bleu value: 33.6 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newstest2013 type: wmt-2013-news args: fra-eng metrics: - name: BLEU type: bleu value: 34.8 - task: name: Translation fra-eng type: translation args: fra-eng dataset: name: newstest2014 type: wmt-2014-news args: fra-eng metrics: - name: BLEU type: bleu value: 39.4 --- # opus-mt-tc-big-fr-en Neural machine translation model for translating from French (fr) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. 
The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-09 * source language(s): fra * target language(s): eng * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip) * more information released models: [OPUS-MT fra-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-eng/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "J'ai adoré l'Angleterre.", "C'était la seule chose à faire." ] model_name = "pytorch-models/opus-mt-tc-big-fr-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # I loved England. # It was the only thing to do. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-en") print(pipe("J'ai adoré l'Angleterre.")) # expected output: I loved England. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-eng | tatoeba-test-v2021-08-07 | 0.73772 | 59.8 | 12681 | 101754 | | fra-eng | flores101-devtest | 0.69350 | 46.0 | 1012 | 24721 | | fra-eng | multi30k_test_2016_flickr | 0.68005 | 49.7 | 1000 | 12955 | | fra-eng | multi30k_test_2017_flickr | 0.70596 | 52.0 | 1000 | 11374 | | fra-eng | multi30k_test_2017_mscoco | 0.69356 | 50.6 | 461 | 5231 | | fra-eng | multi30k_test_2018_flickr | 0.65751 | 44.9 | 1071 | 14689 | | fra-eng | newsdiscussdev2015 | 0.59008 | 34.4 | 1500 | 27759 | | fra-eng | newsdiscusstest2015 | 0.62603 | 40.2 | 1500 | 26982 | | fra-eng | newssyscomb2009 | 0.57488 | 31.1 | 502 | 11818 | | fra-eng | news-test2008 | 0.54316 | 26.5 | 2051 | 49380 | | fra-eng | newstest2009 | 0.56959 | 30.4 | 2525 | 65399 | | fra-eng | newstest2010 | 0.59561 | 33.4 | 2489 | 61711 | | fra-eng | newstest2011 | 0.60271 | 33.8 | 3003 | 74681 | | fra-eng | newstest2012 | 0.59507 | 33.6 | 3003 | 72812 | | fra-eng | newstest2013 | 0.59691 | 34.8 | 3000 | 64505 | | fra-eng | newstest2014 | 0.64533 | 39.4 | 3003 | 70708 | | fra-eng | tico19-test | 0.63326 | 41.3 | 2100 | 56323 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 19:02:28 EEST 2022 * port machine: LM0-400-22516.local
ZipperXYZ/DialoGPT-medium-TheWorldMachineExpressive2
4e7b2dda5588080784ac6f7482060026296d5cea
2022-06-22T01:36:28.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ZipperXYZ
null
ZipperXYZ/DialoGPT-medium-TheWorldMachineExpressive2
480
null
transformers
2,377
--- tags: - conversational --- # The world machine DialoGPT model
facebook/hubert-xlarge-ll60k
b0cef767123fe004883915a053f538f1737a1e47
2021-10-20T10:20:44.000Z
[ "pytorch", "tf", "hubert", "feature-extraction", "en", "dataset:libri-light", "arxiv:2106.07447", "transformers", "speech", "license:apache-2.0" ]
feature-extraction
false
facebook
null
facebook/hubert-xlarge-ll60k
479
3
transformers
2,378
--- language: en datasets: - libri-light tags: - speech license: apache-2.0 --- # Hubert-Extra-Large [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The extra large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... The model was pretrained on [Libri-Light](https://github.com/facebookresearch/libri-light). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
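Since this checkpoint has no fine-tuned head, a typical use is extracting hidden states as features for a downstream task. A rough sketch, assuming a feature-extractor config is available for the checkpoint (otherwise a `Wav2Vec2FeatureExtractor` can be instantiated directly) and using silent dummy audio in place of a real 16 kHz mono recording:

```python
# Feature-extraction sketch with dummy audio; replace the zeros with real 16 kHz samples.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, HubertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/hubert-xlarge-ll60k")
model = HubertModel.from_pretrained("facebook/hubert-xlarge-ll60k")

speech = np.zeros(16000, dtype=np.float32)  # stand-in for one second of audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```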
mrm8488/roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization
3df1c9e04581ca196e80b9ce1e4c22db6431bec7
2021-04-06T09:22:39.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "en", "dataset:cnn_dailymail", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
mrm8488
null
mrm8488/roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization
479
null
transformers
2,379
--- language: en license: apache-2.0 datasets: - cnn_dailymail tags: - summarization --- Shared [RoBERTa2RoBERTa (med-small)](https://huggingface.co/nyu-mll/roberta-med-small-1M-1) Summarization with 🤗EncoderDecoder Framework This model is a warm-started *RoBERTaShared* (med-small) model fine-tuned on the *cnn_dailymail* summarization dataset. The model achieves a **16.90** ROUGE-2 score on *cnn_dailymail*'s test dataset.
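The card reports the ROUGE score but no inference snippet. A minimal summarization sketch is given below; the truncation length, beam settings and placeholder article are illustrative assumptions.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

ckpt = "mrm8488/roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)

article = "Your long news article goes here ..."
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")

summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```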
facebook/xglm-7.5B
b4f0ef7d74603a0e63a05695cd38d08260961e3a
2022-02-14T22:54:52.000Z
[ "pytorch", "xglm", "text-generation", "arxiv:2112.10668", "transformers", "license:mit" ]
text-generation
false
facebook
null
facebook/xglm-7.5B
478
5
transformers
2,380
--- license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png inference: false --- # XGLM-7.5B XGLM-7.5B is a multilingual autoregressive language model (with 7.5 billion parameters) trained on a balanced corpus of a diverse set of languages totaling 500 billion sub-tokens. It was introduced in the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin\*, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li\* (\*Equal Contribution). The original implementation was released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/xglm). ## Training Data Statistics The training data statistics of XGLM-7.5B is shown in the table below. | ISO-639-1| family | name | # tokens | ratio | ratio w/ lowRes upsampling | |:--------|:-----------------|:------------------------|-------------:|------------:|-------------:| | en | Indo-European | English | 803526736124 | 0.489906 | 0.3259 | | ru | Indo-European | Russian | 147791898098 | 0.0901079 | 0.0602 | | zh | Sino-Tibetan | Chinese | 132770494630 | 0.0809494 | 0.0483 | | de | Indo-European | German | 89223707856 | 0.0543992 | 0.0363 | | es | Indo-European | Spanish | 87303083105 | 0.0532282 | 0.0353 | | fr | Indo-European | French | 77419639775 | 0.0472023 | 0.0313 | | ja | Japonic | Japanese | 66054364513 | 0.040273 | 0.0269 | | it | Indo-European | Italian | 41930465338 | 0.0255648 | 0.0171 | | pt | Indo-European | Portuguese | 36586032444 | 0.0223063 | 0.0297 | | el | Indo-European | Greek (modern) | 28762166159 | 0.0175361 | 0.0233 | | ko | Koreanic | Korean | 20002244535 | 0.0121953 | 0.0811 | | fi | Uralic | Finnish | 16804309722 | 0.0102455 | 0.0681 | | id | Austronesian | Indonesian | 15423541953 | 0.00940365 | 0.0125 | | tr | Turkic | Turkish | 12413166065 | 0.00756824 | 0.0101 | | ar | Afro-Asiatic | Arabic | 12248607345 | 0.00746791 | 0.0099 | | vi | Austroasiatic | Vietnamese | 11199121869 | 0.00682804 | 0.0091 | | th | Tai–Kadai | Thai | 10842172807 | 0.00661041 | 0.044 | | bg | Indo-European | Bulgarian | 9703797869 | 0.00591635 | 0.0393 | | ca | Indo-European | Catalan | 7075834775 | 0.0043141 | 0.0287 | | hi | Indo-European | Hindi | 3448390110 | 0.00210246 | 0.014 | | et | Uralic | Estonian | 3286873851 | 0.00200399 | 0.0133 | | bn | Indo-European | Bengali, Bangla | 1627447450 | 0.000992245 | 0.0066 | | ta | Dravidian | Tamil | 1476973397 | 0.000900502 | 0.006 | | ur | Indo-European | Urdu | 1351891969 | 0.000824241 | 0.0055 | | sw | Niger–Congo | Swahili | 907516139 | 0.000553307 | 0.0037 | | te | Dravidian | Telugu | 689316485 | 0.000420272 | 0.0028 | | eu | Language isolate | Basque | 105304423 | 6.42035e-05 | 0.0043 | | my | Sino-Tibetan | Burmese | 101358331 | 6.17976e-05 | 0.003 | | ht | Creole | Haitian, Haitian Creole | 86584697 | 5.27902e-05 | 0.0035 | | qu | Quechuan | Quechua | 3236108 | 1.97304e-06 | 0.0001 | ## Model card For intended usage of the model, please refer to the [model card](https://github.com/pytorch/fairseq/blob/main/examples/xglm/model_card.md) released by the XGLM-7.5B development team. 
## Example (COPA)

The following snippet shows how to evaluate our models (GPT-3 style, zero-shot) on the Choice of Plausible Alternatives (COPA) task, using examples in English, Chinese and Haitian Creole.

```python
import torch
import torch.nn.functional as F

from transformers import XGLMTokenizer, XGLMForCausalLM

tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-7.5B")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-7.5B")

data_samples = {
    'en': [
        {
            "premise": "I wanted to conserve energy.",
            "choice1": "I swept the floor in the unoccupied room.",
            "choice2": "I shut off the light in the unoccupied room.",
            "question": "effect",
            "label": "1"
        },
        {
            "premise": "The flame on the candle went out.",
            "choice1": "I blew on the wick.",
            "choice2": "I put a match to the wick.",
            "question": "cause",
            "label": "0"
        }
    ],
    'zh': [
        {
            "premise": "我想节约能源。",
            "choice1": "我在空着的房间里扫了地板。",
            "choice2": "我把空房间里的灯关了。",
            "question": "effect",
            "label": "1"
        },
        {
            "premise": "蜡烛上的火焰熄灭了。",
            "choice1": "我吹灭了灯芯。",
            "choice2": "我把一根火柴放在灯芯上。",
            "question": "cause",
            "label": "0"
        }
    ],
    'ht': [
        {
            "premise": "M te vle konsève enèji.",
            "choice1": "Mwen te fin baleye chanm lib la.",
            "choice2": "Mwen te femen limyè nan chanm lib la.",
            "question": "effect",
            "label": "1"
        },
        {
            "premise": "Flam bouji a te etenn.",
            "choice1": "Mwen te soufle bouji a.",
            "choice2": "Mwen te limen mèch bouji a.",
            "question": "cause",
            "label": "0"
        }
    ]
}

def get_logprobs(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids, output_ids = inputs["input_ids"], inputs["input_ids"][:, 1:]
    outputs = model(**inputs, labels=input_ids)
    logits = outputs.logits
    logprobs = torch.gather(F.log_softmax(logits, dim=2), 2, output_ids.unsqueeze(2))
    return logprobs

# Zero-shot evaluation for the Choice of Plausible Alternatives (COPA) task.
# A return value of 0 indicates that the first alternative is more plausible,
# while 1 indicates that the second alternative is more plausible.
def COPA_eval(prompt, alternative1, alternative2):
    lprob1 = get_logprobs(prompt + "\n" + alternative1).sum()
    lprob2 = get_logprobs(prompt + "\n" + alternative2).sum()
    return 0 if lprob1 > lprob2 else 1

for lang in data_samples:
    for idx, example in enumerate(data_samples[lang]):
        predict = COPA_eval(example["premise"], example["choice1"], example["choice2"])
        print(f'{lang}-{idx}', predict, example['label'])

# en-0 1 1
# en-1 0 0
# zh-0 1 1
# zh-1 0 0
# ht-0 1 1
# ht-1 0 0
```
sadakmed/distiluse-base-multilingual-cased-v2
d4e9bba5ac7e7bb5a86e3b97e8150e8fc1fbd931
2021-09-22T09:37:21.000Z
[ "pytorch", "distilbert", "feature-extraction", "multilingual", "sentence-transformers", "DistilBert", "Universal Sentence Encoder", "sentence-embeddings", "sentence-similarity", "license:apache-2.0" ]
feature-extraction
false
sadakmed
null
sadakmed/distiluse-base-multilingual-cased-v2
478
null
sentence-transformers
2,381
---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---

While the v1 model supports 15 languages, this version supports 50+ languages. However, performance on the 15 languages covered by v1 is reported to be slightly lower. Note that sentence-transformers adds two extra layers (Pooling, Linear) that cannot be saved in any predefined Hugging Face Transformers model class.
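A minimal embedding sketch, assuming the repository loads directly through the sentence-transformers API (the 512-dimensional output is taken from the upstream DistilUSE models):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sadakmed/distiluse-base-multilingual-cased-v2")
sentences = ["This is an example sentence", "Dies ist ein Beispielsatz"]

embeddings = model.encode(sentences)
print(embeddings.shape)  # expected (2, 512)
```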
ethanyt/guwen-quote
a5a28406ac0e3ab13727a3295c15f84f425ac9e8
2021-06-17T08:22:56.000Z
[ "pytorch", "roberta", "token-classification", "zh", "transformers", "chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "quotation detection", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
ethanyt
null
ethanyt/guwen-quote
477
null
transformers
2,382
--- language: - "zh" thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "pytorch" - "quotation detection" license: "apache-2.0" pipeline_tag: "token-classification" widget: - text: "子曰学而时习之不亦说乎有朋自远方来不亦乐乎人不知而不愠不亦君子乎有子曰其为人也孝弟而好犯上者鲜矣不好犯上而好作乱者未之有也君子务本本立而道生孝弟也者其为仁之本与子曰巧言令色鲜矣仁曾子曰吾日三省吾身为人谋而不忠乎与朋友交而不信乎传不习乎子曰道千乘之国敬事而信节用而爱人使民以时" --- # Guwen Quote A Classical Chinese Quotation Detector. Note: There are some problems with decoding using the default sequence classification model. Use the CRF model to achieve the best results. CRF related code please refer to [Guwen Models](https://github.com/ethan-yt/guwen-models). See also: <a href="https://github.com/ethan-yt/guwen-models"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/cclue/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/guwenbert/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a>
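For a quick qualitative check without the CRF decoder (keeping the decoding caveat above in mind), a plain token-classification pipeline sketch might look like this; the example text is the opening of the widget sentence above.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwen-quote")
model = AutoModelForTokenClassification.from_pretrained("ethanyt/guwen-quote")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
for token in nlp("子曰学而时习之不亦说乎"):
    print(token["word"], token["entity"], round(token["score"], 3))
```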
google/pegasus-arxiv
8d68b512ac8f83bd6ecfb651a793a35e71fdc402
2020-10-22T16:33:20.000Z
[ "pytorch", "pegasus", "text2text-generation", "en", "arxiv:1912.08777", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
google
null
google/pegasus-arxiv
477
1
transformers
2,383
---
language: en
tags:
- summarization
---

### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)

Original TF 1 code [here](https://github.com/google-research/pegasus)

Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019

Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)

Task: Summarization

The following is copied from the authors' README.

# Mixed & Stochastic Checkpoints

We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.

| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|

The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k steps (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using a 20% uniform noise on importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.

(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information.
- we update the BigPatent dataset to preserve casing; some format cleanings were also changed, please refer to the change in TFDS.

Citation
```
@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
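The card above lists scores only; a minimal summarization sketch (generation settings left at their defaults, input text purely illustrative) could look like this:

```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-arxiv"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

paper = "We study the problem of abstractive summarization of scientific articles ..."
batch = tokenizer(paper, truncation=True, padding="longest", return_tensors="pt")

summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```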
gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner
aeb8f1a4908c7f21676dd7c1572e303a685056e1
2022-05-25T08:55:03.000Z
[ "pytorch", "distilbert", "token-classification", "en", "de", "nl", "es", "multilingual", "dataset:conll2003", "transformers", "model-index", "autotrain_compatible" ]
token-classification
false
gunghio
null
gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner
477
null
transformers
2,384
--- metrics: - precision: 0.936 - recall: 0.9458 - f1: 0.9409 - accuracy: 0.9902 datasets: - conll2003 language: - en - de - nl - es - multilingual model-index: - name: gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner results: - task: type: ner name: Named Entity Recognition dataset: type: conll2003 name: ConLL 2003 metrics: - type: f1-score value: 0.9409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner This model was trained from scratch on an conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0388 - Precision: 0.9360 - Recall: 0.9458 - F1: 0.9409 - Accuracy: 0.9902 ## Model description It is based on distilbert-base-multilingual-cased ## Intended uses & limitations More information needed ## Training and evaluation data Training dataset: [conll2003](https://huggingface.co/datasets/conll2003) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1653 | 1.0 | 878 | 0.0465 | 0.9267 | 0.9300 | 0.9283 | 0.9883 | | 0.0322 | 2.0 | 1756 | 0.0404 | 0.9360 | 0.9431 | 0.9396 | 0.9897 | | 0.0185 | 3.0 | 2634 | 0.0388 | 0.9360 | 0.9458 | 0.9409 | 0.9902 | ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.6.2 - Tokenizers 0.10.2
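The card does not include an inference example; a minimal NER pipeline sketch is shown below. Note that `aggregation_strategy` assumes a reasonably recent transformers release (older versions used `grouped_entities=True`), and the input sentence is purely illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)
print(ner("George Washington visited Berlin with Angela Merkel."))
```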
nsi319/legal-pegasus
54ef2872d33bbff28eb09544bdecbf6699f5b0b8
2021-03-11T08:50:52.000Z
[ "pytorch", "pegasus", "text2text-generation", "en", "transformers", "summarization", "license:mit", "autotrain_compatible" ]
summarization
false
nsi319
null
nsi319/legal-pegasus
477
null
transformers
2,385
--- language: en tags: summarization metrics: - rouge - precision inference: false license: mit --- ## PEGASUS for legal document summarization **legal-pegasus** is a finetuned version of ([**google/pegasus-cnn_dailymail**](https://huggingface.co/google/pegasus-cnn_dailymail)) for the **legal domain**, trained to perform **abstractive summarization** task. The maximum length of input sequence is 1024 tokens. ## Training data This model was trained on [**sec-litigation-releases**](https://www.sec.gov/litigation/litreleases.htm) dataset consisting more than 2700 litigation releases and complaints. ## How to use ```Python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-pegasus") model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-pegasus") text = """On March 5, 2021, the Securities and Exchange Commission charged AT&T, Inc. with repeatedly violating Regulation FD, and three of its Investor Relations executives with aiding and abetting AT&T's violations, by selectively disclosing material nonpublic information to research analysts. According to the SEC's complaint, AT&T learned in March 2016 that a steeper-than-expected decline in its first quarter smartphone sales would cause AT&T's revenue to fall short of analysts' estimates for the quarter. The complaint alleges that to avoid falling short of the consensus revenue estimate for the third consecutive quarter, AT&T Investor Relations executives Christopher Womack, Michael Black, and Kent Evans made private, one-on-one phone calls to analysts at approximately 20 separate firms. On these calls, the AT&T executives allegedly disclosed AT&T's internal smartphone sales data and the impact of that data on internal revenue metrics, despite the fact that internal documents specifically informed Investor Relations personnel that AT&T's revenue and sales of smartphones were types of information generally considered "material" to AT&T investors, and therefore prohibited from selective disclosure under Regulation FD. The complaint further alleges that as a result of what they were told on these calls, the analysts substantially reduced their revenue forecasts, leading to the overall consensus revenue estimate falling to just below the level that AT&T ultimately reported to the public on April 26, 2016. The SEC's complaint, filed in federal district court in Manhattan, charges AT&T with violations of the disclosure provisions of Section 13(a) of the Securities Exchange Act of 1934 and Regulation FD thereunder, and charges Womack, Evans and Black with aiding and abetting these violations. The complaint seeks permanent injunctive relief and civil monetary penalties against each defendant. The SEC's investigation was conducted by George N. Stepaniuk, Thomas Peirce, and David Zetlin-Jones of the SEC's New York Regional Office. The SEC's litigation will be conducted by Alexander M. Vasilescu, Victor Suthammanont, and Mr. Zetlin-Jones. The case is being supervised by Sanjay Wadhwa.""" input_tokenized = tokenizer.encode(text, return_tensors='pt',max_length=1024,truncation=True) summary_ids = model.generate(input_tokenized, num_beams=9, no_repeat_ngram_size=3, length_penalty=2.0, min_length=150, max_length=250, early_stopping=True) summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0] ### Summary Output # The Securities and Exchange Commission today charged AT&T, Inc. 
and three of its Investor Relations executives with aiding and abetting the company's violations of the antifraud provisions of Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. According to the SEC's complaint, the company learned in March 2016 that a steeper-than-expected decline in its first quarter smartphone sales would cause its revenue to fall short of analysts' estimates for the quarter. The complaint alleges that to avoid falling short of the consensus revenue estimate for the third consecutive quarter, the executives made private, one-on-one phone calls to analysts at approximately 20 separate firms. On these calls, the SEC alleges that Christopher Womack, Michael Black, and Kent Evans allegedly disclosed internal smartphone sales data and the impact of that data on internal revenue metrics. The SEC further alleges that as a result of what they were told, the analysts substantially reduced their revenue forecasts, leading to the overall consensus Revenue Estimate falling to just below the level that AT&t ultimately reported to the public on April 26, 2016. The SEC is seeking permanent injunctive relief and civil monetary penalties against each defendant. ``` ## Evaluation results | Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision | |:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:| | legal-pegasus | **57.39** | **62.97** | **26.85** | **28.42** | **30.91** | **33.22** | | pegasus-cnn_dailymail | 43.16 | 45.68 | 13.75 | 14.56 | 18.82 | 20.07 |
hfl/chinese-electra-180g-large-discriminator
d017e219578df8e4885484edbc8969dbdea9cbe0
2021-03-03T01:29:12.000Z
[ "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
null
false
hfl
null
hfl/chinese-electra-180g-large-discriminator
476
3
transformers
2,386
---
language:
- zh
license: "apache-2.0"
---

# This model is trained on 180G of data; we recommend using it rather than the original version.

## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
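The card links to the upstream repositories but gives no inference snippet. A minimal sketch of querying the discriminator head is given below; the Chinese sentence is purely illustrative, and positive logits indicate tokens the discriminator believes were replaced.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "hfl/chinese-electra-180g-large-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("我今天非常高兴。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one replaced-token score per input token
print(torch.sigmoid(logits).round())
```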
Visual-Attention-Network/van-base
569d1d8e1323ad5baefa8c00b11d82de0e42cfad
2022-03-31T12:45:44.000Z
[ "pytorch", "van", "image-classification", "dataset:imagenet-1k", "arxiv:2202.09741", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
Visual-Attention-Network
null
Visual-Attention-Network/van-base
476
null
transformers
2,387
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Van Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification). Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/van_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python >>> from transformers import AutoFeatureExtractor, VanForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base") >>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base") >>> inputs = feature_extractor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
DTAI-KULeuven/robbert-v2-dutch-sentiment
bb4e1466d94f15534e792fc6870040e024000432
2022-06-29T13:11:28.000Z
[ "pytorch", "roberta", "text-classification", "nl", "dataset:dbrd", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "license:mit", "model-index" ]
text-classification
false
DTAI-KULeuven
null
DTAI-KULeuven/robbert-v2-dutch-sentiment
476
null
transformers
2,388
--- language: nl license: mit datasets: - dbrd model-index: - name: robbert-v2-dutch-sentiment results: - task: type: text-classification name: Text Classification dataset: name: dbrd type: sentiment-analysis split: test metrics: - name: Accuracy type: accuracy value: 0.93325 widget: - text: "Ik erken dat dit een boek is, daarmee is alles gezegd." - text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT --- <p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%"> </p> # RobBERT finetuned for sentiment analysis on DBRD This is a finetuned model based on [RobBERT (v2)](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](https://hebban.nl). Hence our example sentences about books. We did some limited experiments to test if this also works for other domains, but this was not exactly amazing. We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff: | Model | Identifier | Layers | #Params. | Accuracy | |----------------|------------------------------------------------------------------------|--------|-----------|-----------| | RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* | | RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 | *The results of RobBERT are of a different run than the one reported in the paper. # Training data and setup We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019). Originally, these reviews got a five-star rating, but this has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️). We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy. The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps. The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file. # Limitations and biases - The domain of the reviews is limited to book reviews. - Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292). - This is _not_ the same model as we discussed in our paper, due to some conversion issues between the original training two years ago and now, it was easier to retrain this model. The accuracy is slightly lower, but the model was trained on the beginning of the reviews instead of the end of the reviews. ## Credits and citation This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/). 
If you would like to cite our paper or models, you can use the following BibTeX: ``` @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
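For completeness, a minimal inference sketch (not part of the original card): the example review is the positive widget sentence above, and the returned label names depend on the checkpoint's config.

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="DTAI-KULeuven/robbert-v2-dutch-sentiment")
print(classifier("Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"))
```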
facebook/wmt21-dense-24-wide-en-x
ee254716c52331df63a08ac929da96c59e68b057
2022-05-26T22:23:33.000Z
[ "pytorch", "m2m_100", "text2text-generation", "multilingual", "ha", "is", "ja", "cs", "ru", "zh", "de", "en", "arxiv:2108.03265", "transformers", "translation", "wmt21", "license:mit", "autotrain_compatible" ]
translation
false
facebook
null
facebook/wmt21-dense-24-wide-en-x
475
9
transformers
2,389
--- language: - multilingual - ha - is - ja - cs - ru - zh - de - en license: mit tags: - translation - wmt21 --- # WMT 21 En-X WMT 21 En-X is a 4.7B multilingual encoder-decoder (seq-to-seq) model trained for one-to-many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2108.03265) and first released in [this](https://github.com/pytorch/fairseq/tree/main/examples/wmt21) repository. The model can directly translate English text into 7 other languages: Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de). To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method. *Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece` run `pip install sentencepiece` Since the model was trained with domain tags, you should prepend them to the input as well. * "wmtdata newsdomain": Use for sentences in the news domain * "wmtdata otherdomain": Use for sentences in all other domain ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt21-dense-24-wide-en-x") tokenizer = AutoTokenizer.from_pretrained("facebook/wmt21-dense-24-wide-en-x") inputs = tokenizer("wmtdata newsdomain One model for many languages.", return_tensors="pt") # translate English to German generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("de")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Ein Modell für viele Sprachen." # translate English to Icelandic generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("is")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Ein fyrirmynd fyrir mörg tungumál." ``` See the [model hub](https://huggingface.co/models?filter=wmt21) to look for more fine-tuned versions. ## Languages covered English (en), Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de) ## BibTeX entry and citation info ``` @inproceedings{tran2021facebook title={Facebook AI’s WMT21 News Translation Task Submission}, author={Chau Tran and Shruti Bhosale and James Cross and Philipp Koehn and Sergey Edunov and Angela Fan}, booktitle={Proc. of WMT}, year={2021}, } ```
alistair7/bbt-diagpt2-model
2539b4c94eccb5f0ee1d9d86b191f492c70d4fa8
2021-06-06T21:49:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
alistair7
null
alistair7/bbt-diagpt2-model
474
null
transformers
2,390
--- tags: - conversational --- # A conversational model based on the character of Sheldon Cooper from Big Bang Theory.
impyadav/GPT2-FineTuned-Hinglish-Song-Generation
7c5694e0b1ec8dab4f17a857b3778911af56609a
2022-01-03T11:33:54.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
impyadav
null
impyadav/GPT2-FineTuned-Hinglish-Song-Generation
474
1
transformers
2,391
GPT-2 model fine-tuned on a custom dataset of old Hindi songs (Hinglish) for the text-generation task (AI Lyricist).

Languages:
- Hindi
- Hinglish
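A minimal generation sketch, assuming the standard GPT-2 text-generation pipeline applies to this checkpoint; the prompt and sampling settings are illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="impyadav/GPT2-FineTuned-Hinglish-Song-Generation")
print(generator("tere bina", max_length=60, do_sample=True, num_return_sequences=1))
```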
JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k
e37a839cfaa3ce1e0c04d93a0e242d8ec8a694ed
2021-09-23T15:49:18.000Z
[ "pytorch", "dataset:Libri1Mix", "dataset:enh_single", "asteroid", "audio", "DPRNNTasNet", "audio-to-audio", "license:cc-by-sa-4.0" ]
audio-to-audio
false
JorisCos
null
JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k
471
null
asteroid
2,392
--- tags: - asteroid - audio - DPRNNTasNet - audio-to-audio datasets: - Libri1Mix - enh_single license: cc-by-sa-4.0 --- ## Asteroid model `JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k` Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset. Training config: ```yml data: n_src: 1 sample_rate: 16000 segment: 1 task: enh_single train_dir: data/wav16k/min/train-360 valid_dir: data/wav16k/min/dev filterbank: kernel_size: 2 n_filters: 64 stride: 1 masknet: bidirectional: true bn_chan: 128 chunk_size: 250 dropout: 0 hid_size: 128 hop_size: 125 in_chan: 64 mask_act: sigmoid n_repeats: 6 n_src: 1 out_chan: 64 optim: lr: 0.001 optimizer: adam weight_decay: 1.0e-05 training: batch_size: 2 early_stop: true epochs: 200 gradient_clipping: 5 half_lr: true num_workers: 4 ``` Results: On Libri1Mix min test set : ```yml si_sdr: 14.7228101708889 si_sdr_imp: 11.2730288650292 sdr: 15.35661405197161 sdr_imp: 11.853951252758595 sir: Infinity sir_imp: NaN sar: 15.35661405197161 sar_imp: 11.853951252758595 stoi: 0.9300461826351578 stoi_imp: 0.13412635909461715 ``` License notice: This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only). "DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
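A minimal enhancement sketch, assuming asteroid is installed and the checkpoint loads through `BaseModel.from_pretrained`; the file names are hypothetical and the input must be 16 kHz mono audio.

```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k")

mixture, sr = sf.read("noisy_speech_16k.wav", dtype="float32")  # hypothetical 16 kHz mono recording
with torch.no_grad():
    est_sources = model(torch.from_numpy(mixture).unsqueeze(0))  # (batch, n_src, time)
sf.write("enhanced.wav", est_sources[0, 0].numpy(), sr)
```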
TransQuest/monotransquest-da-en_zh-wiki
fefd083a71d9be578d7d98191b880d4578898619
2021-06-03T19:04:32.000Z
[ "pytorch", "xlm-roberta", "text-classification", "en-zh", "transformers", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0" ]
text-classification
false
TransQuest
null
TransQuest/monotransquest-da-en_zh-wiki
471
null
transformers
2,393
--- language: en-zh tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_zh-wiki", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
neuralspace-reverie/indic-transformers-bn-distilbert
4662cb6d6dd900f8dff05896cd1494a8ed0e1ecf
2020-12-11T21:57:07.000Z
[ "pytorch", "tf", "distilbert", "fill-mask", "bn", "transformers", "MaskedLM", "Bengali", "DistilBERT", "Question-Answering", "Token Classification", "Text Classification", "autotrain_compatible" ]
fill-mask
false
neuralspace-reverie
null
neuralspace-reverie/indic-transformers-bn-distilbert
471
null
transformers
2,394
--- language: - bn tags: - MaskedLM - Bengali - DistilBERT - Question-Answering - Token Classification - Text Classification --- # Indic-Transformers Bengali DistilBERT ## Model description This is a DistilBERT language model pre-trained on ~6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/). This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training. ## Intended uses & limitations #### How to use ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-distilbert') model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-distilbert') text = "আপনি কেমন আছেন?" input_ids = tokenizer(text, return_tensors='pt')['input_ids'] out = model(input_ids)[0] print(out.shape) # out = [1, 5, 768] ``` #### Limitations and bias The original language model has been trained using `PyTorch` and hence the use of `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
HooshvareLab/bert-fa-zwnj-base-ner
17d4928f28c36fd74864c221a27134da8b6bf9bc
2021-05-18T21:04:35.000Z
[ "pytorch", "tf", "jax", "bert", "token-classification", "fa", "transformers", "autotrain_compatible" ]
token-classification
false
HooshvareLab
null
HooshvareLab/bert-fa-zwnj-base-ner
470
3
transformers
2,395
--- language: fa --- # BertNER This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covered ten types of entities: - Date (DAT) - Event (EVE) - Facility (FAC) - Location (LOC) - Money (MON) - Organization (ORG) - Percent (PCT) - Person (PER) - Product (PRO) - Time (TIM) ## Dataset Information | | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM | |:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:| | Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 | | Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 | | Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 | ## Evaluation The following tables summarize the scores obtained by model overall and per each class. **Overall** | Model | accuracy | precision | recall | f1 | |:----------:|:--------:|:---------:|:--------:|:--------:| | Bert | 0.995086 | 0.953454 | 0.961113 | 0.957268 | **Per entities** | | number | precision | recall | f1 | |:---: |:------: |:---------: |:--------: |:--------: | | DAT | 407 | 0.860636 | 0.864865 | 0.862745 | | EVE | 256 | 0.969582 | 0.996094 | 0.982659 | | FAC | 248 | 0.976190 | 0.991935 | 0.984000 | | LOC | 2884 | 0.970232 | 0.971914 | 0.971072 | | MON | 98 | 0.905263 | 0.877551 | 0.891192 | | ORG | 3216 | 0.939125 | 0.954602 | 0.946800 | | PCT | 94 | 1.000000 | 0.968085 | 0.983784 | | PER | 2645 | 0.965244 | 0.965974 | 0.965608 | | PRO | 318 | 0.981481 | 1.000000 | 0.990654 | | TIM | 43 | 0.692308 | 0.837209 | 0.757895 | ## How To Use You use this model with Transformers pipeline for NER. ### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "HooshvareLab/bert-fa-zwnj-base-ner" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo.
KoboldAI/GPT-Neo-2.7B-Janeway
56b0950204eafb4673c78595669cf8b04e413ab4
2022-03-20T12:57:50.000Z
[ "pytorch", "gpt_neo", "text-generation", "en", "transformers", "license:mit" ]
text-generation
false
KoboldAI
null
KoboldAI/GPT-Neo-2.7B-Janeway
469
2
transformers
2,396
--- language: en license: mit --- # GPT-Neo 2.7B - Janeway ## Model Description GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
nvidia/segformer-b3-finetuned-cityscapes-1024-1024
74ff1cf1357f4bfa962660c491282dfc3e7c72c2
2022-07-20T09:53:50.000Z
[ "pytorch", "tf", "segformer", "dataset:cityscapes", "arxiv:2105.15203", "transformers", "vision", "image-segmentation", "license:apache-2.0" ]
image-segmentation
false
nvidia
null
nvidia/segformer-b3-finetuned-cityscapes-1024-1024
469
null
transformers
2,397
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://www.researchgate.net/profile/Anurag-Arnab/publication/315881952/figure/fig5/AS:667673876779033@1536197265755/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.jpg
  example_title: Road
---

# SegFormer (b3-sized) model fine-tuned on CityScapes

SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to segment an image of the COCO 2017 dataset into the Cityscapes classes:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
cardiffnlp/bertweet-base-emotion
89c1f1de95e4ae3979c82155d9a8f00be45c1668
2021-05-20T14:45:11.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/bertweet-base-emotion
468
null
transformers
2,398
ricardo-filho/bert-portuguese-cased-nli-assin-assin-2
17efd936dc233255fe5c95474813a51e9c3be9f8
2021-08-04T13:24:42.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
ricardo-filho
null
ricardo-filho/bert-portuguese-cased-nli-assin-assin-2
468
3
sentence-transformers
2,399
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 701 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 1, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 71, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->