Dataset columns (name, type, observed range):

| Column | Type | Range / values |
|:--|:--|:--|
| `modelId` | string | length 4–112 |
| `sha` | string | length 40 |
| `lastModified` | string | length 24 |
| `tags` | sequence | - |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2–38 |
| `config` | null | - |
| `id` | string | length 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | length 0–186k |
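The table above summarizes the column schema; the short sketch below shows one way to inspect a dump with these columns using pandas. The file name `models.parquet` is a placeholder assumption, not part of the original dataset.

```python
import pandas as pd

# Load the dump and confirm the columns match the schema above.
# "models.parquet" is a placeholder file name.
df = pd.read_parquet("models.parquet")
print(df.dtypes)
print(df[["modelId", "pipeline_tag", "downloads", "likes", "library_name"]].head())
```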
Daivakai/DialoGPT-small-saitama
6d462fb980c369c42910e4ad704c8b67306eb4e9
2021-11-13T19:17:48.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Daivakai
null
Daivakai/DialoGPT-small-saitama
299
null
transformers
3,000
--- tags: - conversational --- # Saitama DialoGPT model
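The card above is only a title, so here is a minimal, hedged chat sketch for a DialoGPT-style checkpoint such as this one; the prompt and generation settings are illustrative assumptions, not the author's documented usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Daivakai/DialoGPT-small-saitama")
model = AutoModelForCausalLM.from_pretrained("Daivakai/DialoGPT-small-saitama")

# Encode one user turn, append the end-of-sequence token, and generate a reply.
prompt = "Hello, how are you?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```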
MadhanKumar/HarryPotter-Bot
5444e3bfd1b793083b689acdb05a3214a750af8d
2021-08-29T14:41:37.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
MadhanKumar
null
MadhanKumar/HarryPotter-Bot
299
null
transformers
3,001
--- tags: - conversational --- # Harry Potter Bot Model
cahya/roberta-base-indonesian-522M
88447f4cf0e27ca82cb25b7d841f9add236b08f3
2021-05-20T14:41:00.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "id", "dataset:Indonesian Wikipedia", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
cahya
null
cahya/roberta-base-indonesian-522M
299
2
transformers
3,002
--- language: "id" license: "mit" datasets: - Indonesian Wikipedia widget: - text: "Ibu ku sedang bekerja <mask> supermarket." --- # Indonesian RoBERTa base model (uncased) ## Model description It is RoBERTa-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/roberta-base-indonesian-522M') >>> unmasker("Ibu ku sedang bekerja <mask> supermarket") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel model_name='cahya/roberta-base-indonesian-522M' tokenizer = RobertaTokenizer.from_pretrained(model_name) model = RobertaModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import RobertaTokenizer, TFRobertaModel model_name='cahya/roberta-base-indonesian-522M' tokenizer = RobertaTokenizer.from_pretrained(model_name) model = TFRobertaModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```<s> Sentence A </s> Sentence B </s>```
chan030609/DialoGPT-medium-JAB
92527cd82932ffa1f677f0b25da8f46533e39441
2022-02-11T20:14:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
chan030609
null
chan030609/DialoGPT-medium-JAB
299
null
transformers
3,003
--- tags: - conversational --- # DialoGPT Medium JAB
flax-sentence-embeddings/all_datasets_v3_roberta-large
8046a280a8edaf23f09931ae54b2b1ca1d35bc33
2021-07-23T15:45:17.000Z
[ "pytorch", "roberta", "fill-mask", "en", "arxiv:2104.08727", "arxiv:1810.09305", "arxiv:2102.07033", "arxiv:1904.06472", "sentence-transformers", "feature-extraction", "sentence-similarity" ]
sentence-similarity
false
flax-sentence-embeddings
null
flax-sentence-embeddings/all_datasets_v3_roberta-large
299
8
sentence-transformers
3,004
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en --- # Model description The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_roberta-large') text = "Replace me by any text you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs. ### Hyperparameters We trained our model on a TPU v3-8. We train the model for 540k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository. ### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | SearchQA | - | 582,261 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [MS 
MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | total | | 1,097,953,922 |
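The fine-tuning section of the card above describes the in-batch contrastive objective (cosine similarities scored against the true pair with a cross-entropy loss) only in prose; below is a minimal PyTorch sketch of that objective. The function name, the `scale` factor and the random embeddings are illustrative assumptions, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch.
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    scores = anchor @ positive.T * scale  # shape (batch, batch)
    # The true partner of sentence i sits in column i, so the label is simply i.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Random embeddings standing in for the model's sentence vectors.
loss = in_batch_contrastive_loss(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss.item())
```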
flax-sentence-embeddings/stackoverflow_mpnet-base
50529c31da1e14dc9f311ff898cc9c35abe58513
2021-07-26T01:36:33.000Z
[ "pytorch", "mpnet", "fill-mask", "sentence-transformers", "feature-extraction", "sentence-similarity" ]
sentence-similarity
false
flax-sentence-embeddings
null
flax-sentence-embeddings/stackoverflow_mpnet-base
299
null
sentence-transformers
3,005
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # stackoverflow_mpnet-base This is a microsoft/mpnet-base model trained on 18,562,443 (title, body) pairs from StackOverflow. SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) model and trained it using a Siamese network setup and a contrastive learning objective. 18,562,443 (title, body) pairs from StackOverflow were used as training data. For this model, mean pooling of hidden states was used to obtain sentence embeddings. See data_config.json and train_script.py in this repository for how the model was trained and which datasets have been used. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for semantic search, clustering or sentence similarity tasks. ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('flax-sentence-embeddings/stackoverflow_mpnet-base') text = "Replace me by any question / answer you'd like." text_embedding = model.encode(text) # array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106, # -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...], # dtype=float32) ``` # Training procedure ## Pre-training We use the pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs. ### Hyperparameters We trained our model on a TPU v3-8. We train the model for 80k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository. ### Training data We used 18,562,443 (title, body) pairs from StackOverflow as training data. | Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | StackOverflow title body pairs | - | 18,562,443 |
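The card above states that mean pooling of hidden states is used to form sentence embeddings but only shows the SentenceTransformer wrapper; the sketch below reproduces that pooling with the plain transformers API. The pooling helper and the example question are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("flax-sentence-embeddings/stackoverflow_mpnet-base")
model = AutoModel.from_pretrained("flax-sentence-embeddings/stackoverflow_mpnet-base")

def mean_pool(last_hidden_state, attention_mask):
    # Average the token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

encoded = tokenizer(["How do I reverse a list in Python?"], padding=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
embedding = mean_pool(output.last_hidden_state, encoded["attention_mask"])  # (1, hidden_dim)
print(embedding.shape)
```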
hfl/chinese-legal-electra-small-generator
daafca6671dc00a23a4647a86b483f8e98c4ec49
2021-10-30T23:49:00.000Z
[ "pytorch", "tf", "electra", "pretraining", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
null
false
hfl
null
hfl/chinese-legal-electra-small-generator
298
3
transformers
3,006
--- language: - zh license: "apache-2.0" --- # This model is specifically designed for the legal domain. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resources or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
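The card above has no usage snippet. Since this checkpoint is the masked-language-model generator of the ELECTRA pair, a minimal sketch of loading it through the standard fill-mask pipeline follows; that it loads this way, and the example sentence, are assumptions rather than instructions from the authors.

```python
from transformers import pipeline

# The generator half of ELECTRA is a masked language model, so the standard
# fill-mask pipeline should apply; the legal-domain sentence is illustrative.
fill_mask = pipeline("fill-mask", model="hfl/chinese-legal-electra-small-generator")
print(fill_mask("本院认为，被告人的行为已构成[MASK]罪。"))
```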
peamjo/DialoGPT-small-morty
f4a0569e0e62895cf58a35ffae5d95aafab59381
2021-09-05T17:45:09.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
peamjo
null
peamjo/DialoGPT-small-morty
298
null
transformers
3,007
--- tags: - conversational --- # Morty DialoGPT Model
sonoisa/clip-vit-b-32-japanese-v1
3a955d5fef102d4fd8e317df0c003f2a3c083514
2022-04-19T14:18:58.000Z
[ "pytorch", "bert", "feature-extraction", "ja", "transformers", "clip", "sentence-similarity", "license:cc-by-sa-4.0" ]
feature-extraction
false
sonoisa
null
sonoisa/clip-vit-b-32-japanese-v1
298
5
transformers
3,008
--- language: ja license: cc-by-sa-4.0 tags: - clip - feature-extraction - sentence-similarity --- # Japanese [CLIP](https://github.com/openai/CLIP) model This is a [CLIP](https://github.com/openai/CLIP) text/image encoder model for Japanese. It was created by adapting the text encoder of the English CLIP model to Japanese using a form of distillation. For how the model was built, its accuracy, how to use it, and sample code, please refer to the following articles (in Japanese). - Articles: - Overview: [Pre-trained models recommended for anyone doing multimodal processing in 2022 (Japanese model included)](https://qiita.com/sonoisa/items/00e8e2861147842f0237) - Usage guide: [Japanese CLIP: image-text similarity, image and text embeddings, and similar-image search](https://qiita.com/sonoisa/items/d6db2f130fa9a4ce0c2c) - (in preparation) Applied guide: multimodal search over Irasutoya images (zero-shot) - (in preparation) Applied guide: multimodal search over Irasutoya images (fine-tuning) - (in preparation) Applied guide: multimodal classification using both images and text - Sample code repository: https://github.com/sonoisa/clip-japanese - Demo: - [Multimodal search over Irasutoya images (zero-shot)](https://huggingface.co/spaces/sonoisa/Irasuto_search_CLIP_zero-shot)
BotterHax/DialoGPT-small-harrypotter
da1e538a11f1d1845155f645e90ea7917865a431
2022-02-19T18:23:15.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
BotterHax
null
BotterHax/DialoGPT-small-harrypotter
297
null
transformers
3,009
--- tags: - conversational --- # DialoGPT Model for Penny
Paradocx/Dialogpt-mid-hpai
0276836bb89ca6de39fa263290ba20b089badfe6
2021-09-11T03:23:11.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Paradocx
null
Paradocx/Dialogpt-mid-hpai
297
null
transformers
3,010
--- tags: - conversational --- # Harry Potter AI bot
ZAFuzzy/DialoGPT-medium-Fatty
4d1f03b270c34a13cf56ac63a44011c7f340656c
2021-09-24T08:20:59.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ZAFuzzy
null
ZAFuzzy/DialoGPT-medium-Fatty
297
null
transformers
3,011
--- tags: - conversational --- # Fatty DialoGPT Model
allenai/tailor
2d8a2501356b23e13731b4bcb16d8d82216e0a00
2021-07-16T17:43:52.000Z
[ "pytorch", "t5", "text2text-generation", "en", "arxiv:2107.07150", "transformers", "controlled generation", "perturbation", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/tailor
297
1
transformers
3,012
--- language: "en" tags: - controlled generation - perturbation widget: - text: "[VERB+passive+past: break | PATIENT+partial: cup] <extra_id_0> <extra_id_1> <extra_id_2> ." - max_length: --- # Tailor ## Model description This is a ported version of [Tailor](https://homes.cs.washington.edu/~wtshuang/static/papers/2021-arxiv-tailor.pdf), the general-purpose counterfactual generator. For more code release, please refer to [this github page](https://github.com/allenai/tailor). #### How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM model_path = "allenai/tailor" generator = pipeline("text2text-generation", model=AutoModelForSeq2SeqLM.from_pretrained(model_path), tokenizer=AutoTokenizer.from_pretrained(model_path), framework="pt", device=0) prompt_text = "[VERB+active+past: comfort | AGENT+complete: the doctor | PATIENT+partial: athlete | LOCATIVE+partial: in] <extra_id_0> , <extra_id_1> <extra_id_2> <extra_id_3> ." generator(prompt_text, max_length=200) ``` ### BibTeX entry and citation info ```bibtex @misc{ross2021tailor, title={Tailor: Generating and Perturbing Text with Semantic Controls}, author={Alexis Ross and Tongshuang Wu and Hao Peng and Matthew E. Peters and Matt Gardner}, year={2021}, eprint={2107.07150}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2107.07150}, } ```
nvidia/mit-b4
c6b324bf64d4f93b872c365f13df9c54761edb16
2022-07-29T13:15:54.000Z
[ "pytorch", "tf", "segformer", "image-classification", "dataset:imagenet_1k", "arxiv:2105.15203", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
nvidia
null
nvidia/mit-b4
297
null
transformers
3,013
--- license: apache-2.0 tags: - vision datasets: - imagenet_1k widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b4-sized) encoder pre-trained-only SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes. ## Intended uses & limitations You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import SegformerFeatureExtractor, SegformerForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b4") model = SegformerForImageClassification.from_pretrained("nvidia/mit-b4") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
laxya007/gpt2_bd2
92cf1b3a812d597077a02d3b734294bff30d0840
2022-03-19T13:11:52.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
laxya007
null
laxya007/gpt2_bd2
297
null
transformers
3,014
Entry not found
captainswiftfox/rickandmorty
c201ead94df184617858b191b8b8147e5791ed75
2022-05-09T14:35:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
captainswiftfox
null
captainswiftfox/rickandmorty
297
null
transformers
3,015
--- tags: - conversational --- # RickS AI bot
pablocosta/bert-tweet-br-base
fe3f758c6875d74f9887ba207d43331084d3d8f0
2022-07-25T16:10:28.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
pablocosta
null
pablocosta/bert-tweet-br-base
297
1
transformers
3,016
We are still testing this model. The risks are all yours.
TofuBoy/DialoGPT-medium-Yubin2
8af6356cc374c90a2b38059c4386730b9d632c20
2022-01-21T10:11:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
TofuBoy
null
TofuBoy/DialoGPT-medium-Yubin2
296
null
transformers
3,017
--- tags: - conversational --- # DialoGPT Model
addy88/t5-qa-genrate-explain-context
bd2d8a462e739fe41e6a1e152b9ea75787859713
2022-01-02T06:31:18.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
addy88
null
addy88/t5-qa-genrate-explain-context
296
null
transformers
3,018
Entry not found
sberbank-ai/rugpt2large
4a0e15e7fcfb00082ba7db6e582a407499d9674a
2021-09-21T19:34:11.000Z
[ "pytorch", "ru", "transformers", "PyTorch", "Transformers" ]
null
false
sberbank-ai
null
sberbank-ai/rugpt2large
296
1
transformers
3,019
--- language: - ru tags: - PyTorch - Transformers thumbnail: "https://github.com/sberbank-ai/ru-gpts" --- # rugpt2large The model was trained with a sequence length of 1024 using the transformers library by the [SberDevices](https://sberdevices.ru/) team, on 170 GB of data across 64 GPUs for 3 weeks.
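The card above describes only the training setup; a minimal, hedged generation sketch follows. That the checkpoint loads through the standard text-generation pipeline, the Russian prompt, and the sampling settings are assumptions, not the authors' documented usage.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sberbank-ai/rugpt2large")
# Illustrative Russian prompt; generation settings are arbitrary.
print(generator("Александр Сергеевич Пушкин родился в", max_length=50, do_sample=True)[0]["generated_text"])
```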
softcatala/julibert
7243cb6c51150409d18feda1d35cfbadb34b2bf4
2021-05-20T17:19:38.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "ca", "transformers", "autotrain_compatible" ]
fill-mask
false
softcatala
null
softcatala/julibert
296
1
transformers
3,020
--- language: ca --- ## Introduction Download the model here: * Catalan RoBERTa model: [julibert-2020-11-10.zip](https://www.softcatala.org/pub/softcatala/julibert/julibert-2020-11-10.zip) ## What's this? Source code: https://github.com/Softcatala/julibert * Corpus: Oscar Catalan Corpus (3.8 GB) * Model type: RoBERTa * Vocabulary size: 50265 * Steps: 500000
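The card above lists the corpus and model specs but no usage snippet; a minimal fill-mask sketch follows. Loading through the standard pipeline and the Catalan example sentence are assumptions.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="softcatala/julibert")
# Use the tokenizer's own mask token so the example matches the checkpoint's vocabulary.
print(unmasker(f"Barcelona és la capital de {unmasker.tokenizer.mask_token}."))
```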
voidful/wav2vec2-large-xlsr-53-tw-gpt
10a7d9ad1efc12381248db9bc0c289b741480ce3
2022-03-24T23:08:57.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "zh-TW", "dataset:common_voice", "transformers", "audio", "hf-asr-leaderboard", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
voidful
null
voidful/wav2vec2-large-xlsr-53-tw-gpt
296
2
transformers
3,021
--- language: zh-TW datasets: - common_voice tags: - audio - automatic-speech-recognition - hf-asr-leaderboard - robust-speech-event - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Taiwanese Mandarin(zh-tw) by Voidful results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice zh-TW type: common_voice args: zh-TW metrics: - name: Test CER type: cer value: 18.36 --- # Wav2Vec2-Large-XLSR-53-tw-gpt Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on zh-tw using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage [Colab trial](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing) ``` import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, AutoTokenizer, AutoModelWithLMHead ) import torch import re import sys model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") gpt_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def load_file_to_data(file): batch = {} speech, _ = torchaudio.load(file) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq return batch def predict(data): features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) gpt_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0) gpt_prob = torch.nn.functional.softmax(gpt_model(gpt_input).logits, dim=-1)[:voice_prob.size()[0],:] comb_pred_ids = torch.argmax(gpt_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) return decoded_results ``` Predict ```python predict(load_file_to_data('voice file path')) ``` ## Evaluation The model can be evaluated as follows on the zh-tw test data of Common Voice. 
CER calculation refer to https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese env setup: ``` !pip install editdistance !pip install torchaudio !pip install datasets transformers ``` ## Evaluation without LM: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelWithLMHead from datasets import Audio from math import log model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) def map_to_array(batch): audio = batch["audio"] batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["sampling_rate"] = audio["sampling_rate"] batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER: 28.70`. 
`TIME: 04:08 min` ## Evaluation with GPT: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelWithLMHead from datasets import Audio from math import log model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) def map_to_array(batch): audio = batch["audio"] batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["sampling_rate"] = audio["sampling_rate"] batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) lm_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0) lm_prob = torch.nn.functional.softmax(lm_model(lm_input).logits, dim=-1)[:voice_prob.size()[0],:] comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER 25.70`. 
`TIME: 06:04 min` ## Evaluation with GPT + beam search: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelWithLMHead from datasets import Audio from math import log model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese") lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) def map_to_array(batch): audio = batch["audio"] batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["sampling_rate"] = audio["sampling_rate"] batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: sequences = [[[], 1.0]] pred_ids = torch.argmax(logit, dim=-1) mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) while True: all_candidates = list() exceed = False for seq in sequences: tokens, score = seq gpt_input = torch.tensor([tokenizer.cls_token_id]+tokens).to(device) gpt_prob = torch.nn.functional.softmax(lm_model(gpt_input).logits, dim=-1)[:len(gpt_input),:] if len(gpt_input) >= len(voice_prob): exceed = True comb_pred_ids = gpt_prob*voice_prob[:len(gpt_input)] v,i = torch.topk(comb_pred_ids,50,dim=-1) for tok_id,tok_prob in zip(i.tolist()[-1],v.tolist()[-1]): candidate = [tokens + [tok_id], score + -log(tok_prob)] all_candidates.append(candidate) ordered = sorted(all_candidates, key=lambda tup: tup[1]) sequences = ordered[:10] if exceed: break decoded_results.append(processor.decode(sequences[0][0])) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER 18.36`. 
## Evaluation with BERT: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from transformers import AutoTokenizer, AutoModelForMaskedLM model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") lm_model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").to(device) model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0) mask_lm_prob = voice_prob.clone() for i in range(lm_input.shape[-1]): masked_lm_input = lm_input.clone() masked_lm_input[0][i] = torch.tensor(tokenizer.mask_token_id).to('cuda') lm_prob = torch.nn.functional.softmax(lm_model(masked_lm_input).logits, dim=-1).squeeze(0) mask_lm_prob[i] = lm_prob[i] comb_pred_ids = torch.argmax(mask_lm_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER 25.57`. 
`TIME: 09:49 min` ## Evaluation with T-TA: setup ``` !git clone https://github.com/voidful/pytorch-tta.git !mv ./pytorch-tta/tta ./tta !wget https://github.com/voidful/pytorch-tta/releases/download/wiki_zh/wiki_zh.pt ``` ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys from tta.modeling_tta import TTALMModel from transformers import AutoTokenizer import torch model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" device = "cuda" processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt" chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]" tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") lm_model = TTALMModel("bert-base-chinese") tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") lm_model.load_state_dict(torch.load("./wiki_zh.pt",map_location=torch.device('cuda'))) lm_model.to('cuda') lm_model.eval() model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(processor_name) ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits decoded_results = [] for logit in logits: pred_ids = torch.argmax(logit, dim=-1) mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size()) vocab_size = logit.size()[-1] voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1) lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0) lm_prob = torch.nn.functional.softmax(lm_model.forward(lm_input)[0], dim=-1).squeeze(0) comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1) decoded_results.append(processor.decode(comb_pred_ids)) batch["predicted"] = decoded_results batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) def cer_cal(groundtruth, hypothesis): err = 0 tot = 0 for p, t in zip(hypothesis, groundtruth): err += float(ed.eval(p.lower(), t.lower())) tot += len(t) return err / tot print("CER: {:2f}".format(100 * cer_cal(result["target"],result["predicted"]))) ``` `CER: 25.77`. `TIME: 06:01 min`
nabin19677/small-cartman
13ed605514f19be277b31def8c06438ef2e63d61
2022-03-10T10:50:44.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
nabin19677
null
nabin19677/small-cartman
296
null
transformers
3,022
--- tags: - conversational --- # My Awesome Model
CopymySkill/DialoGPT-medium-atakan
6ab2da60dc96c0725c0f16b22fcd9ea510caffa0
2021-09-23T17:21:59.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
CopymySkill
null
CopymySkill/DialoGPT-medium-atakan
295
null
transformers
3,023
--- tags: - conversational --- # Atakan DialoGPT Model
HackyHackyMan/DialoGPT-small-harrypotter
bdb8208f3f7a22daa500c059977aeb617254be33
2021-09-28T21:39:35.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
HackyHackyMan
null
HackyHackyMan/DialoGPT-small-harrypotter
295
null
transformers
3,024
--- tags: - conversational --- # Harry Potter DialoGPT Model
ItoYagura/DialoGPT-medium-tohru
c1d730fc47e4971e3c13c63371050d4e66d2eded
2021-08-29T18:58:33.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ItoYagura
null
ItoYagura/DialoGPT-medium-tohru
295
null
transformers
3,025
--- tags: - conversational --- # Tohru DialoGPT model
alankar/DialoGPT-small-rick
ccf563857d8e047181507990ff1bc53445d70e06
2021-08-28T18:43:35.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
alankar
null
alankar/DialoGPT-small-rick
295
null
transformers
3,026
--- tags: - conversational --- # Rick Sanchez DialoGPT Model
pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT
9b60669ad88f629a850e6dbf6ff0c23d828069a2
2022-03-04T12:12:40.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
pritamdeka
null
pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT
295
null
sentence-transformers
3,027
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # S-PubMedBert-MS-MARCO-SCIFACT This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT') model = AutoModel.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 560 with parameters: ``` {'batch_size': 16} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 1, "evaluation_steps": 10000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 56, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
tiennvcs/layoutlmv2-base-uncased-finetuned-docvqa
99771655242e7adea26691ceeff2a11d06d3d087
2021-10-31T16:33:32.000Z
[ "pytorch", "tensorboard", "layoutlmv2", "question-answering", "transformers", "generated_from_trainer", "license:cc-by-sa-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
tiennvcs
null
tiennvcs/layoutlmv2-base-uncased-finetuned-docvqa
295
2
transformers
3,028
--- license: cc-by-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-base-uncased-finetuned-docvqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-base-uncased-finetuned-docvqa This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 250500 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.463 | 0.27 | 1000 | 1.6272 | | 0.9447 | 0.53 | 2000 | 1.3646 | | 0.7725 | 0.8 | 3000 | 1.2560 | | 0.5762 | 1.06 | 4000 | 1.3582 | | 0.4382 | 1.33 | 5000 | 1.2490 | | 0.4515 | 1.59 | 6000 | 1.1860 | | 0.383 | 1.86 | 7000 | 1.1940 | ### Framework versions - Transformers 4.12.2 - Pytorch 1.8.0+cu101 - Datasets 1.14.0 - Tokenizers 0.10.3
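As a complement to the hyperparameter list in the card above, here is a sketch of how those values map onto transformers `TrainingArguments`; the output directory name is an assumption and this is not the authors' actual training script.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in the card; the Adam betas/epsilon and
# linear scheduler named above match the transformers defaults.
training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased-finetuned-docvqa",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=250500,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```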
DeepESP/gpt2-spanish-medium
538d6f0c548b1826417b7e05a99f780c16cac155
2021-10-19T08:53:15.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
false
DeepESP
null
DeepESP/gpt2-spanish-medium
294
3
transformers
3,029
--- language: es tags: - GPT-2 - Spanish - ebooks - nlg datasets: - ebooks widget: - text: "Quisiera saber que va a suceder" license: mit --- # GPT2-Spanish GPT2-Spanish is a language generation model trained from scratch with 11.5 GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the medium version of the original OpenAI GPT2 model. ## Corpus This model was trained with a corpus of 11.5 GB of texts corresponding to 3.5 GB of Wikipedia articles and 8 GB of books (narrative, short stories, theater, poetry, essays, and popularization). ## Tokenizer The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens. This tokenizer was trained from scratch with the Spanish corpus, since the tokenizer of the English models proved limited in capturing the semantic relations of Spanish, due to the morphosyntactic differences between both languages. Apart from the special token "<|endoftext|>" for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>", ..., "<|ax9|>" were included so that they can serve as prompts in future training. ## Training The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16 GB memory on Google Colab servers. ## Authors The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h). Thanks to the members of the community who collaborated with funding for the initial tests. ## Cautions The model generates texts according to the patterns learned in the training corpus. These data were not filtered; therefore, the model could generate offensive or discriminatory content.
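The card above describes the corpus and tokenizer but gives no inference snippet; a minimal generation sketch follows, reusing the widget prompt from the metadata. Loading through the standard text-generation pipeline and the sampling settings are assumptions.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="DeepESP/gpt2-spanish-medium")
# Prompt taken from the card's widget text; generation settings are arbitrary.
print(generator("Quisiera saber que va a suceder", max_length=50, do_sample=True)[0]["generated_text"])
```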
cardiffnlp/twitter-roberta-base-2021-124m
e80bc4e3ae8aae5255c61e1e937d3656aec73df3
2022-02-09T11:17:40.000Z
[ "pytorch", "roberta", "fill-mask", "arxiv:2202.03829", "transformers", "autotrain_compatible" ]
fill-mask
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-2021-124m
294
2
transformers
3,030
# Twitter 2021 124M (RoBERTa-base) This is a RoBERTa-base model trained on 123.86M tweets until the end of 2021. More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829). Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms). For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models). ## Preprocess Text Replace usernames and links for placeholders: "@user" and "http". If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data). ```python def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) ``` ## Example Masked Language Model ```python from transformers import pipeline, AutoTokenizer MODEL = "cardiffnlp/twitter-roberta-base-2021-124m" fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) def print_candidates(): for i in range(5): token = tokenizer.decode(candidates[i]['token']) score = candidates[i]['score'] print("%d) %.5f %s" % (i+1, score, token)) texts = [ "So glad I'm <mask> vaccinated.", "I keep forgetting to bring a <mask>.", "Looking forward to watching <mask> Game tonight!", ] for text in texts: t = preprocess(text) print(f"{'-'*30}\n{t}") candidates = fill_mask(t) print_candidates() ``` Output: ``` ------------------------------ So glad I'm <mask> vaccinated. 1) 0.39613 fully 2) 0.26333 getting 3) 0.18988 not 4) 0.02312 still 5) 0.02099 already ------------------------------ I keep forgetting to bring a <mask>. 1) 0.08356 mask 2) 0.05696 book 3) 0.03505 bag 4) 0.02983 backpack 5) 0.02847 blanket ------------------------------ Looking forward to watching <mask> Game tonight! 1) 0.46618 the 2) 0.24042 The 3) 0.03216 End 4) 0.02925 Squid 5) 0.02610 this ``` ## Example Tweet Embeddings ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np from scipy.spatial.distance import cosine from collections import Counter def get_embedding(text): text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() features_mean = np.mean(features[0], axis=0) return features_mean MODEL = "cardiffnlp/twitter-roberta-base-2021-124m" tokenizer = AutoTokenizer.from_pretrained(MODEL) model = AutoModel.from_pretrained(MODEL) query = "The book was awesome" tweets = ["I just ordered fried chicken 🐣", "The movie was great", "What time is the next game?", "Just finished reading 'Embeddings in NLP'"] sims = Counter() for tweet in tweets: sim = 1 - cosine(get_embedding(query), get_embedding(tweet)) sims[tweet] = sim print('Most similar to: ', query) print(f"{'-'*30}") for idx, (tweet, sim) in enumerate(sims.most_common()): print("%d) %.5f %s" % (idx+1, sim, tweet)) ``` Output: ``` Most similar to: The book was awesome ------------------------------ 1) 0.98969 The movie was great 2) 0.96102 Just finished reading 'Embeddings in NLP' 3) 0.95565 I just ordered fried chicken 🐣 4) 0.95041 What time is the next game? 
``` ## Example Feature Extraction ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np MODEL = "cardiffnlp/twitter-roberta-base-2021-124m" tokenizer = AutoTokenizer.from_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) # Pytorch model = AutoModel.from_pretrained(MODEL) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() features_mean = np.mean(features[0], axis=0) #features_max = np.max(features[0], axis=0) # # Tensorflow # model = TFAutoModel.from_pretrained(MODEL) # encoded_input = tokenizer(text, return_tensors='tf') # features = model(encoded_input) # features = features[0].numpy() # features_mean = np.mean(features[0], axis=0) # #features_max = np.max(features[0], axis=0) ```
nvidia/segformer-b2-finetuned-cityscapes-1024-1024
2416842a88764bd96f8dc5c7dbacd79b1aca2918
2022-07-20T09:53:56.000Z
[ "pytorch", "tf", "segformer", "dataset:cityscapes", "arxiv:2105.15203", "transformers", "vision", "image-segmentation", "license:apache-2.0" ]
image-segmentation
false
nvidia
null
nvidia/segformer-b2-finetuned-cityscapes-1024-1024
294
null
transformers
3,031
--- license: apache-2.0 tags: - vision - image-segmentation datasets: - cityscapes widget: - src: https://www.researchgate.net/profile/Anurag-Arnab/publication/315881952/figure/fig5/AS:667673876779033@1536197265755/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.jpg example_title: Road --- # SegFormer (b2-sized) model fine-tuned on CityScapes SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to perform semantic segmentation on an image of the COCO 2017 dataset: ```python from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation from PIL import Image import requests feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024") model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
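### Post-processing the logits

The logits above come out at 1/4 of the input resolution. Below is a minimal sketch (not part of the official example) of upsampling them to a per-pixel CityScapes label map, continuing from the snippet above; the bilinear interpolation and variable names are illustrative choices.

```python
import torch

# Upsample the logits to the original image size.
# PIL's image.size is (width, height); interpolate expects (height, width).
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],
    mode="bilinear",
    align_corners=False,
)

# Per-pixel class indices over the CityScapes label set, shape (height, width).
segmentation_map = upsampled_logits.argmax(dim=1)[0]
print(segmentation_map.shape)
```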
DMetaSoul/sbert-chinese-general-v2
14b486c50cdb0e7cff9792488021c905abfd9a57
2022-04-04T07:22:23.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers", "semantic-search", "chinese" ]
sentence-similarity
false
DMetaSoul
null
DMetaSoul/sbert-chinese-general-v2
294
null
sentence-transformers
3,032
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - semantic-search - chinese --- # DMetaSoul/sbert-chinese-general-v2 This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model and was trained on the million-scale semantic similarity dataset [SimCLUE](https://github.com/CLUEbenchmark/SimCLUE). It is intended for **general-purpose semantic matching** scenarios and, judging by the results, **generalizes better** across a wide range of tasks. Note: a [lightweight, distilled version](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2-distill) of this model has also been open-sourced! # Usage ## 1. Sentence-Transformers To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it: ``` pip install -U sentence-transformers ``` Then use the code below to load the model and extract sentence embeddings: ```python from sentence_transformers import SentenceTransformer sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"] model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## 2. HuggingFace Transformers If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows: ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2') model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation The model was evaluated on several public semantic matching datasets by computing the correlation coefficient between embedding similarity and the gold labels: | | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | | ---------------------------- | ------------ | ------------- | ---------- | ---------- | ------------ | ---------- | ---------- | | **sbert-chinese-general-v1** | **84.54%** | **82.17%** | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% | | **sbert-chinese-general-v2** | 77.20% | 72.60% | **36.80%** | **76.92%** | **49.63%** | **16.24%** | **63.16%** | The table compares this model with our previously released [sbert-chinese-general-v1](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1); this model generalizes better across most of the tasks. ## Citing & Authors E-mail: [email protected]
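## Scoring sentence similarity

As a complement to the examples above, here is a minimal sketch of scoring the similarity between the two example sentences with `util.cos_sim` (available in recent sentence-transformers releases); the exact score will depend on the library version.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2')
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentences; values closer to 1 mean more similar.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```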
AJ-Dude/DialoGPT-small-harrypotter
4eeb993a4c143906c2510c93b417cd7af752095f
2021-10-22T08:26:19.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
AJ-Dude
null
AJ-Dude/DialoGPT-small-harrypotter
293
null
transformers
3,033
--- tags: - conversational --- # Harry Potter DialoGPT model
Helsinki-NLP/opus-mt-en-hu
c07e29c91012b3fe97df3da7c56d5eb25d4be40b
2021-09-09T21:36:04.000Z
[ "pytorch", "marian", "text2text-generation", "en", "hu", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-hu
293
null
transformers
3,034
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-hu * source languages: en * target languages: hu * OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.hu | 40.1 | 0.628 |
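## Example usage

The card above does not include a code snippet, so here is a minimal sketch of translating English to Hungarian through the standard Marian interface in Transformers; the example sentence is an arbitrary illustration.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Machine translation is useful."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)

# Generate the Hungarian translation and decode it back to text.
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```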
KringleClaus/Dialog-santa
64c61b2b57bbe633855ef72341bc1db416286111
2021-10-12T20:44:52.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
KringleClaus
null
KringleClaus/Dialog-santa
293
null
transformers
3,035
--- tags: - conversational --- # Santa Chatbot
facebook/s2t-medium-mustc-multilingual-st
5b743fb5adb8b0954c1857f343f04687b32cfe39
2022-02-07T14:58:32.000Z
[ "pytorch", "tf", "speech_to_text", "automatic-speech-recognition", "en", "de", "nl", "es", "fr", "it", "pt", "ro", "ru", "dataset:mustc", "arxiv:2010.05171", "arxiv:1904.08779", "transformers", "audio", "speech-translation", "license:mit" ]
automatic-speech-recognition
false
facebook
null
facebook/s2t-medium-mustc-multilingual-st
293
1
transformers
3,036
--- language: - en - de - nl - es - fr - it - pt - ro - ru datasets: - mustc tags: - audio - speech-translation - automatic-speech-recognition pipeline_tag: automatic-speech-recognition license: mit --- # S2T-MEDIUM-MUSTC-MULTILINGUAL-ST `s2t-medium-mustc-multilingual-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Multilingual Speech Translation (ST). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text). ## Model description S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively. ## Intended uses & limitations This model can be used for end-to-end translation of English speech into text in any of the covered target languages (German, Dutch, Spanish, French, Italian, Portuguese, Romanian, and Russian). See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints. ### How to use As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following example shows how to translate English speech to French and German text using the `facebook/s2t-medium-mustc-multilingual-st` checkpoint. *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.* You could either install those as extra speech dependencies with `pip install "transformers[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. 
```python import torch from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration from datasets import load_dataset import soundfile as sf model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt") # translate English Speech To French Text generated_ids = model.generate( input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"], forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"] ) translation_fr = processor.batch_decode(generated_ids) # translate English Speech To German Text generated_ids = model.generate( input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"], forced_bos_token_id=processor.tokenizer.lang_code_to_id["de"] ) translation_de = processor.batch_decode(generated_ids, skip_special_tokens=True) ``` ## Training data The s2t-medium-mustc-multilingual-st is trained on [MuST-C](https://ict.fbk.eu/must-c/). MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems for speech translation from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations. ## Training procedure ### Preprocessing The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization) is applied to each example. The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000. ### Training The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779). The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate model training and for better performance the encoder is pre-trained for multilingual ASR. For multilingual models, target language ID token is used as target BOS. ## Evaluation results MuST-C test results (BLEU score): | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | |:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:| | 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 | ### BibTeX entry and citation info ```bibtex @inproceedings{wang2020fairseqs2t, title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, year = {2020}, } ```
kripanshudixit/DialoGPT-small-phoenix
353c39eea9703cb55e265788ffd330f0d5e75498
2021-08-28T15:03:56.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
kripanshudixit
null
kripanshudixit/DialoGPT-small-phoenix
293
null
transformers
3,037
--- tags: - conversational --- # Phoenix DialoGPT model
mrm8488/mobilebert-finetuned-ner
3f9a1f308e4272219bd43e9127cb65fa5504d9ec
2021-01-30T11:42:05.000Z
[ "pytorch", "mobilebert", "token-classification", "en", "transformers", "ner", "license:mit", "autotrain_compatible" ]
token-classification
false
mrm8488
null
mrm8488/mobilebert-finetuned-ner
293
null
transformers
3,038
--- language: en tags: - mobilebert - ner license: mit ---
KoboldAI/fairseq-dense-13B-Nerys
2dea07a2017f755e1906cf6f0bf16fd13ff3e814
2022-06-25T11:22:58.000Z
[ "pytorch", "xglm", "text-generation", "en", "transformers", "license:mit" ]
text-generation
false
KoboldAI
null
KoboldAI/fairseq-dense-13B-Nerys
293
null
transformers
3,039
--- language: en license: mit --- # Fairseq-dense 13B - Nerys ## Model Description Fairseq-dense 13B-Nerys is a finetune created using Fairseq's MoE dense model. ## Training data The training data contains around 2500 ebooks in various genres (the "Pike" dataset), a CYOA dataset called "CYS" and 50 Asian "Light Novels" (the "Manga-v1" dataset). Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Nerys') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). ### BibTeX entry and citation info ``` Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts ```
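### Prompting with genre tags

Since most of the training data was prepended with the `[Genre: <genre1>, <genre2>]` text described above, prefixing a prompt in the same format is a natural way to steer the style of the output. The genres, prompt, and generation settings below are illustrative choices rather than recommendations from the model authors; note also that the 13B checkpoint requires substantial memory to load.

```python
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Nerys')

# Genre prefix mirrors the format used when the training data was prepared.
prompt = "[Genre: science fiction, adventure] The airlock hissed open and Captain Reyes stepped onto the red sand."
result = generator(prompt, do_sample=True, max_new_tokens=60)
print(result[0]['generated_text'])
```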
Wise/DialogGPT-small-JC
276014e2984fdf687ebadabfe8ebe3e0931efd1d
2021-09-28T00:17:36.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Wise
null
Wise/DialogGPT-small-JC
292
null
transformers
3,040
--- tags: - conversational --- # JC DialogGPT Model
huggingtweets/dansalvato
603f18feb00fecf0c2ecdc030db4ae9b4e38a360
2021-05-22T00:40:26.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/dansalvato
292
null
transformers
3,041
--- language: en thumbnail: https://www.huggingtweets.com/dansalvato/1612858230042/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1133218052303048704/1JXz7DT8_400x400.png')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dan Salvato 🤖 AI Bot </div> <div style="font-size: 15px">@dansalvato bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@dansalvato's tweets](https://twitter.com/dansalvato). | Data | Quantity | | --- | --- | | Tweets downloaded | 3233 | | Retweets | 197 | | Short tweets | 233 | | Tweets kept | 2803 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jbu3vnq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dansalvato's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1c66e4az) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1c66e4az/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dansalvato') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sail/poolformer_s12
81dd35aba1f435d6ade0f31c0c99f02097efedf8
2022-04-08T07:48:14.000Z
[ "pytorch", "poolformer", "image-classification", "dataset:imagenet", "arxiv:2111.11418", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
sail
null
sail/poolformer_s12
292
null
transformers
3,042
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet --- # PoolFormer (S12 model) PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer). ## Model description PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator, pooling. Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s12') model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s12') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572). ### Pretraining The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | # params | URL | |---------------------------------------|-------------------------|----------|------------------------------------------------------------------| | **PoolFormer-S12** | **77.2** | **12M** | **https://huggingface.co/sail/poolformer_s12** | | PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 | | PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 | | PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 | | PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 | ### BibTeX entry and citation info ```bibtex @article{yu2021metaformer, title={MetaFormer is Actually What You Need for Vision}, author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng}, journal={arXiv preprint arXiv:2111.11418}, year={2021} } ```
sreyanghosh/DialoGPT-medium-joker
2cf5d5968a26328a6146ec1063a4ba8886479670
2021-08-27T07:24:17.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
sreyanghosh
null
sreyanghosh/DialoGPT-medium-joker
292
null
transformers
3,043
--- tags: - conversational --- # Joker DialoGPT Model
michiyasunaga/LinkBERT-large
d7bc175578ab3361bd13908e01f342e2dfbdba7c
2022-03-31T00:27:01.000Z
[ "pytorch", "bert", "feature-extraction", "en", "dataset:wikipedia", "dataset:bookcorpus", "arxiv:2203.15827", "transformers", "exbert", "linkbert", "fill-mask", "question-answering", "text-classification", "token-classification", "license:apache-2.0" ]
text-classification
false
michiyasunaga
null
michiyasunaga/LinkBERT-large
292
4
transformers
3,044
--- license: apache-2.0 language: en datasets: - wikipedia - bookcorpus tags: - bert - exbert - linkbert - feature-extraction - fill-mask - question-answering - text-classification - token-classification --- ## LinkBERT-large LinkBERT-large model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT). ## Model description LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document. LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval). ## Intended uses & limitations The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). ### How to use To use the model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-large') model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-large') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases. ## Evaluation results When fine-tuned on downstream tasks, LinkBERT achieves the following results. **General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):** | | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE | | ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- | | | F1 | F1 | F1 | F1 | F1 | F1 | Avg score | | BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 | | **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** | | BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 | | **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** | ## Citation If you find LinkBERT useful in your project, please cite the following: ```bibtex @InProceedings{yasunaga2022linkbert, author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang}, title = {LinkBERT: Pretraining Language Models with Document Links}, year = {2022}, booktitle = {Association for Computational Linguistics (ACL)}, } ```
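## Loading with a task head

For fine-tuning on one of the downstream tasks mentioned above (e.g. sequence classification), the checkpoint can be loaded with a task-specific head via the standard Auto classes. This is a minimal sketch only: `num_labels=2` is an assumed binary task, and the freshly initialised head produces meaningful predictions only after fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-large')
# The classification head is randomly initialised; fine-tune before relying on its outputs.
model = AutoModelForSequenceClassification.from_pretrained('michiyasunaga/LinkBERT-large', num_labels=2)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```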
GeniusVoice/tinybertje-msmarco-finetuned
ba38d0d46b7b28f4e10c4db85a13d92f11a794ec
2022-06-08T21:34:50.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
GeniusVoice
null
GeniusVoice/tinybertje-msmarco-finetuned
292
null
transformers
3,045
Entry not found
google/ddpm-cifar10-32
bdd0dcf97540a6e9873cf34be64b2e7813acf67a
2022-07-21T15:00:45.000Z
[ "diffusers", "arxiv:2006.11239", "pytorch", "unconditional-image-generation", "license:apache-2.0" ]
unconditional-image-generation
false
google
null
google/ddpm-cifar10-32
292
2
diffusers
3,046
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Denoising Diffusion Probabilistic Models (DDPM) **Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) **Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel **Abstract**: *We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "google/ddpm-cifar10-32" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm()["sample"] # save image image[0].save("ddpm_generated_image.png") ``` For more detailed information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) ## Training If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) ## Samples 1. ![sample_1](https://huggingface.co/google/ddpm-cifar10-32/resolve/main/images/generated_image_0.png) 2. ![sample_2](https://huggingface.co/google/ddpm-cifar10-32/resolve/main/images/generated_image_1.png) 3. ![sample_3](https://huggingface.co/google/ddpm-cifar10-32/resolve/main/images/generated_image_2.png) 4. ![sample_4](https://huggingface.co/google/ddpm-cifar10-32/resolve/main/images/generated_image_3.png)
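## Faster sampling with DDIM

As mentioned in the inference section, the same checkpoint can be loaded with the DDIM pipeline for faster sampling. This is a sketch only: `num_inference_steps=50` is an illustrative quality/speed trade-off, and the `["sample"]` output convention follows the DDPM example in this card (newer diffusers releases return an object with an `.images` attribute instead).

```python
# !pip install diffusers
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# load the same checkpoint with the DDIM pipeline
ddim = DDIMPipeline.from_pretrained(model_id)

# run the pipeline with fewer denoising steps for faster inference
image = ddim(num_inference_steps=50)["sample"]

# save image
image[0].save("ddim_generated_image.png")
```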
Artem1/bert-wsd
197a34266ba6e88e7b5e972040b5a1eecf85221f
2022-07-12T05:11:17.000Z
[ "pytorch", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Artem1
null
Artem1/bert-wsd
292
null
transformers
3,047
Entry not found
Aibox/DialoGPT-small-rick
75340709dc60ab3a6e7bfc6ce5c82b6f783ba449
2021-08-31T00:01:30.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Aibox
null
Aibox/DialoGPT-small-rick
291
null
transformers
3,048
--- tags: - conversational --- # Rick DialoGPT Model
facebook/deit-base-patch16-384
16fc3691253b9b0dd02de0fcf6399724fa3bd720
2022-07-13T11:41:03.000Z
[ "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "transformers", "license:apache-2.0" ]
image-classification
false
facebook
null
facebook/deit-base-patch16-384
291
null
transformers
3,049
--- license: apache-2.0 tags: - image-classification datasets: - imagenet-1k --- # Data-efficient Image Transformer (base-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 384x384. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained at resolution 224 and fine-tuned at resolution 384 on a large collection of images in a supervised fashion, namely ImageNet-1k. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-384') model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | **DeiT-base 384** | **82.9** | **96.2** | **87M** | **https://huggingface.co/facebook/deit-base-patch16-384** | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
c725a2b810e2e7db4e61656fd7021b553655caf9
2021-10-18T09:34:42.000Z
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
CAMeL-Lab
null
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
290
null
transformers
3,050
--- language: - ar license: apache-2.0 widget: - text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع' --- # CAMeLBERT-MSA POS-MSA Model ## Model description **CAMeLBERT-MSA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset . Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa') >>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع' >>> pos(text) [{'entity': 'noun', 'score': 0.9999764, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.99991846, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998356, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99368894, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999426, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.9999339, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99996775, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99996895, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990183, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999347, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99931145, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. 
We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
Kryptone/monikAI
07a27db4047e2588fdfd9821800ac6d9568dc9c4
2021-09-02T00:32:13.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Kryptone
null
Kryptone/monikAI
290
null
transformers
3,051
--- tags: - conversational --- # Monika Discord Chatbot
Zuha/DialoGPT-small-gandalf
6b898d3228cb8cd6fc546088879c1b6bdea842e8
2021-12-29T15:46:58.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Zuha
null
Zuha/DialoGPT-small-gandalf
290
null
transformers
3,052
--- tags: - conversational --- # Gandalf DialoGPT Model
ChukSamuels/DialoGPT-small-Dr.FauciBot
0c0428ded1ae868eec3dff1958fe8bce91887697
2022-01-09T03:03:57.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ChukSamuels
null
ChukSamuels/DialoGPT-small-Dr.FauciBot
289
null
transformers
3,053
--- tags: - conversational --- # Dr. Fauci DialoGPT Model
Lovery/Aqua
78d53e4bbe841acbc07596e864502204025a5332
2021-09-09T09:13:54.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Lovery
null
Lovery/Aqua
289
null
transformers
3,054
--- tags: - conversational --- # Aqua
fhamborg/roberta-targeted-sentiment-classification-newsarticles
ca1bf21b95334c70799aa0f56321565782e540e1
2022-01-10T16:16:01.000Z
[ "pytorch", "roberta", "en", "dataset:fhamborg/news_sentiment_newsmtsc", "transformers", "text-classification", "sentiment-analysis", "sentiment-classification", "targeted-sentiment-classification", "target-depentent-sentiment-classification", "license:apache-2.0" ]
text-classification
false
fhamborg
null
fhamborg/roberta-targeted-sentiment-classification-newsarticles
289
1
transformers
3,055
--- language: - en tags: - text-classification - sentiment-analysis - sentiment-classification - targeted-sentiment-classification - target-depentent-sentiment-classification license: "apache-2.0" datasets: "fhamborg/news_sentiment_newsmtsc" --- # NewsSentiment: easy-to-use, high-quality target-dependent sentiment classification for news articles ## Important: [use our PyPI package](https://pypi.org/project/NewsSentiment/) instead of this model on the Hub The Huggingface Hub architecture currently [does not support](https://github.com/huggingface/transformers/issues/14785) target-dependent sentiment classification since you cannot provide the required inputs, i.e., sentence and target. Thus, we recommend that you use our easy-to-use [PyPI package NewsSentiment](https://pypi.org/project/NewsSentiment/). ## Description This model is the currently [best performing](https://aclanthology.org/2021.eacl-main.142.pdf) targeted sentiment classifier for news articles. In contrast to regular sentiment classification, targeted sentiment classification allows you to provide a target in a sentence. Only for this target, the sentiment is then predicted. This is more reliable in many cases, as demonstrated by the following simplistic example: "I like Bert, but I hate Robert." This model is also available as an easy-to-use PyPI package named [`NewsSentiment`](https://pypi.org/project/NewsSentiment/) and in its original GitHub repository named [`NewsMTSC`](https://github.com/fhamborg/NewsMTSC), where you will find the dataset the model was trained on, other models for sentiment classification, and a training and testing framework. More information on the model and the dataset (consisting of more than 10k sentences sampled from news articles, each labeled and agreed upon by at least 5 annotators) can be found in our [EACL paper](https://aclanthology.org/2021.eacl-main.142.pdf). The dataset, the model, and its source code can be viewed in our [GitHub repository](https://github.com/fhamborg/NewsMTSC). We recommend to use our [PyPI package](https://pypi.org/project/NewsSentiment/) for sentiment classification since the Huggingface Hub platform seems to [not support](https://github.com/huggingface/transformers/issues/14785) target-dependent sentiment classification. # How to cite If you use the dataset or model, please cite our [paper](https://www.aclweb.org/anthology/2021.eacl-main.142/) ([PDF](https://www.aclweb.org/anthology/2021.eacl-main.142.pdf)): ``` @InProceedings{Hamborg2021b, author = {Hamborg, Felix and Donnay, Karsten}, title = {NewsMTSC: (Multi-)Target-dependent Sentiment Classification in News Articles}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)}, year = {2021}, month = {Apr.}, location = {Virtual Event}, } ```
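## Quick example with the PyPI package

Since targeted inputs cannot be provided through the Hub widget, here is a minimal sketch using the recommended [NewsSentiment](https://pypi.org/project/NewsSentiment/) package, following the usage documented on its PyPI page (left context, target, right context); if the package API has changed, refer to the PyPI documentation.

```python
# pip install NewsSentiment
from NewsSentiment import TargetSentimentClassifier

tsc = TargetSentimentClassifier()

# The target ("Bert") is passed separately from its left and right context.
sentiment = tsc.infer_from_text("I like ", "Bert", ", but I hate Robert.")
print(sentiment[0])
```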
knkarthick/meeting-summary-samsum
36cdb69af2d007bc8341b138fdc13d6028777006
2022-07-20T08:28:58.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:samsum", "transformers", "seq2seq", "summarization", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
knkarthick
null
knkarthick/meeting-summary-samsum
289
1
transformers
3,056
--- language: en tags: - bart - seq2seq - summarization license: apache-2.0 datasets: - samsum widget: - text: | Hannah: Hey, do you have Betty's number? Amanda: Lemme check Amanda: Sorry, can't find it. Amanda: Ask Larry Amanda: He called her last time we were at the park together Hannah: I don't know him well Amanda: Don't be shy, he's very nice Hannah: If you say so.. Hannah: I'd rather you texted him Amanda: Just text him 🙂 Hannah: Urgh.. Alright Hannah: Bye Amanda: Bye bye model-index: - name: bart-large-xsum-samsum results: - task: name: Abstractive Text Summarization type: abstractive-text-summarization dataset: name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization" type: samsum metrics: - name: Validation ROUGE-1 type: rouge-1 value: 54.3921 - name: Validation ROUGE-2 type: rouge-2 value: 29.8078 - name: Validation ROUGE-L type: rouge-l value: 45.1543 - name: Test ROUGE-1 type: rouge-1 value: 53.3059 - name: Test ROUGE-2 type: rouge-2 value: 28.355 - name: Test ROUGE-L type: rouge-l value: 44.0953 --- ## `bart-large-xsum-samsum` This model was obtained by fine-tuning `facebook/bart-large-xsum` on [Samsum](https://huggingface.co/datasets/samsum) dataset. ## Usage ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/bart-large-xsum-samsum") conversation = '''Hannah: Hey, do you have Betty's number? Amanda: Lemme check Amanda: Sorry, can't find it. Amanda: Ask Larry Amanda: He called her last time we were at the park together Hannah: I don't know him well Amanda: Don't be shy, he's very nice Hannah: If you say so.. Hannah: I'd rather you texted him Amanda: Just text him 🙂 Hannah: Urgh.. Alright Hannah: Bye Amanda: Bye bye ''' summarizer(conversation) ```
recobo/chemical-bert-uncased
3dd0c575bac75dbd1185b8e71b4d5cf93520d205
2021-08-31T18:07:03.000Z
[ "pytorch", "bert", "fill-mask", "en", "transformers", "chemical-domain", "safety-datasheets", "autotrain_compatible" ]
fill-mask
false
recobo
null
recobo/chemical-bert-uncased
289
1
transformers
3,057
--- language: "en" tags: - chemical-domain - safety-datasheets widget: - text: "The removal of mercaptans, and for drying of gases and [MASK]." --- # BERT for Chemical Industry A BERT-based language model further pre-trained from the checkpoint of [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased). We used a corpus of over 40,000+ technical documents from the **Chemical Industrial domain** and combined it with 13,000 Wikipedia Chemistry articles, ranging from Safety Data Sheets and Products Information Documents, with 250,000+ tokens from the Chemical domain and pre-trained using MLM and over 9.2 million paragraphs. - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="recobo/chemical-bert-uncased", tokenizer="recobo/chemical-bert-uncased" ) fill_mask("we create [MASK]") ```
uer/chinese_roberta_L-6_H-128
fb006d847cf7e6f6e4c19d61108ab08f2bb7fe35
2022-07-15T08:12:55.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "dataset:CLUECorpusSmall", "arxiv:1909.05658", "arxiv:1908.08962", "transformers", "autotrain_compatible" ]
fill-mask
false
uer
null
uer/chinese_roberta_L-6_H-128
289
null
transformers
3,058
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese RoBERTa Miniatures ## Model description This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details. You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below: | | H=128 | H=256 | H=512 | H=768 | | -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: | | **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] | | **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] | | **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] | | **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] | | **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] | | **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 | | RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 | | RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 | | RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 | | RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium): ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512') >>> unmasker("中国的首都是[MASK]京。") [ {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]', 'score': 0.8701988458633423, 'token': 1266, 'token_str': '北'}, {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]', 'score': 0.1194809079170227, 'token': 1298, 'token_str': '南'}, {'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]', 'score': 0.0037803512532263994, 'token': 691, 'token_str': '东'}, {'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]', 'score': 0.0017127094324678183, 'token': 3249, 'token_str': '普'}, {'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]', 'score': 0.001687526935711503, 'token': 3307, 'token_str': '望'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512') model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = 
model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512') model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. Taking the case of RoBERTa-Medium Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{devlin2018bert, title={Bert: Pre-training of deep bidirectional transformers for language understanding}, author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1810.04805}, year={2018} } @article{liu2019roberta, title={Roberta: A robustly optimized bert pretraining approach}, author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin}, journal={arXiv preprint arXiv:1907.11692}, year={2019} } @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, 
Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128 [2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256 [2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512 [2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768 [4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128 [4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256 [4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512 [4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768 [6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128 [6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256 [6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512 [6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768 [8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128 [8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256 [8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512 [8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768 [10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128 [10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256 [10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512 [10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768 [12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128 [12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256 [12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512 [12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768
Starry/karenemoji
587a841fc85049afe1276d4755343afa08ba3e4e
2022-03-11T01:52:44.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Starry
null
Starry/karenemoji
289
null
transformers
3,059
--- tags: - conversational --- # DialoGPT model
IIC/wav2vec2-spanish-multilibrispeech
83782f8ff5b3c7dacfd0deb7de44ff62cf406a53
2022-04-02T15:04:00.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "es", "dataset:multilingual_librispeech", "arxiv:2006.13979", "transformers", "audio", "model-index" ]
automatic-speech-recognition
false
IIC
null
IIC/wav2vec2-spanish-multilibrispeech
289
3
transformers
3,060
--- language: - es tags: - audio - automatic-speech-recognition datasets: - multilingual_librispeech metrics: - eval_wer: 0.073 model-index: - name: wav2vec2-spanish-multilibrispeech results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: multilingual_librispeech name: multilingual_librispeech es args: es metrics: - type: wer value: 0.073 name: eval_wer - type: loss value: 0.086 name: eval_loss --- This is a model for automatic speech recognition in Spanish, obtained by fine-tuning [Facebook's pre-trained multilingual wav2vec2](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Spanish portion of [multilingual_librispeech](https://huggingface.co/datasets/multilingual_librispeech). For training we used the same parameters as recommended in [the paper](https://arxiv.org/abs/2006.13979). We trained for a total of 15 epochs, obtaining a final WER of 0.073. An example of how to use this model: ```python from transformers import Wav2Vec2Tokenizer, AutoModelForCTC tokenizer = Wav2Vec2Tokenizer.from_pretrained( "IIC/wav2vec2-spanish-multilibrispeech" ) model = AutoModelForCTC.from_pretrained( "IIC/wav2vec2-spanish-multilibrispeech" ) ``` ### Contributions Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model.
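For reference, here is a minimal transcription sketch (not part of the original card). It assumes the checkpoint also ships a full `Wav2Vec2Processor` (feature extractor plus tokenizer) and uses `example.wav` as a placeholder for your own audio file:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("IIC/wav2vec2-spanish-multilibrispeech")
model = Wav2Vec2ForCTC.from_pretrained("IIC/wav2vec2-spanish-multilibrispeech")

# Load an audio file and resample it to the 16 kHz rate the model expects
speech, sampling_rate = torchaudio.load("example.wav")  # hypothetical input file
speech = torchaudio.functional.resample(speech, sampling_rate, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```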
freedomking/prompt-uie-base
6026ce276a7947de93d6fe45687a47a4a65a798d
2022-06-29T14:46:15.000Z
[ "pytorch", "bert", "transformers" ]
null
false
freedomking
null
freedomking/prompt-uie-base
289
1
transformers
3,061
## Introduction Universal Information Extraction More detail: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/model_zoo/uie
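The card links to the PaddleNLP UIE recipe but shows no usage code. As a rough sketch only: assuming the checkpoint loads as a standard BERT encoder via `transformers`, it can be loaded as below; the prompt/schema construction and span decoding that make up the full UIE pipeline follow the PaddleNLP implementation linked above and are not reproduced here.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("freedomking/prompt-uie-base")
model = AutoModel.from_pretrained("freedomking/prompt-uie-base")

# UIE-style input: a schema prompt paired with the passage to extract from (illustrative example)
inputs = tokenizer("时间", "2022年6月29日,模型发布。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```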
CodeDanCode/SP-KyleBot
b82397494afe3b17b2c053b8953b7d8ee91f89c3
2021-10-24T22:29:48.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
CodeDanCode
null
CodeDanCode/SP-KyleBot
288
null
transformers
3,062
--- tags: - conversational --- # SouthPark Kyle Bot
IlyaGusev/rubert_telegram_headlines
606eae9f1f6d2eb45da5ba16675486b9b0874ca0
2022-07-13T15:36:18.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "ru", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
IlyaGusev
null
IlyaGusev/rubert_telegram_headlines
288
null
transformers
3,063
--- language: - ru tags: - summarization license: apache-2.0 inference: parameters: no_repeat_ngram_size: 4 --- # RuBertTelegramHeadlines ## Model description Example model for [Headline generation competition](https://competitions.codalab.org/competitions/29905) Based on [RuBERT](http://docs.deeppavlov.ai/en/master/features/models/bert.html) model ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, EncoderDecoderModel model_name = "IlyaGusev/rubert_telegram_headlines" tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False, do_basic_tokenize=False, strip_accents=False) model = EncoderDecoderModel.from_pretrained(model_name) article_text = "..." input_ids = tokenizer( [article_text], add_special_tokens=True, max_length=256, padding="max_length", truncation=True, return_tensors="pt", )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=64, no_repeat_ngram_size=3, num_beams=10, top_p=0.95 )[0] headline = tokenizer.decode(output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(headline) ``` ## Training data - Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz) ## Training procedure ```python import random import torch from torch.utils.data import Dataset from tqdm.notebook import tqdm from transformers import BertTokenizer, EncoderDecoderModel, Trainer, TrainingArguments, logging def convert_to_tensors( tokenizer, text, max_text_tokens_count, max_title_tokens_count = None, title = None ): inputs = tokenizer( text, add_special_tokens=True, max_length=max_text_tokens_count, padding="max_length", truncation=True ) result = { "input_ids": torch.tensor(inputs["input_ids"]), "attention_mask": torch.tensor(inputs["attention_mask"]), } if title is not None: outputs = tokenizer( title, add_special_tokens=True, max_length=max_title_tokens_count, padding="max_length", truncation=True ) decoder_input_ids = torch.tensor(outputs["input_ids"]) decoder_attention_mask = torch.tensor(outputs["attention_mask"]) labels = decoder_input_ids.clone() labels[decoder_attention_mask == 0] = -100 result.update({ "labels": labels, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask }) return result class GetTitleDataset(Dataset): def __init__( self, original_records, sample_rate, tokenizer, max_text_tokens_count, max_title_tokens_count ): self.original_records = original_records self.sample_rate = sample_rate self.tokenizer = tokenizer self.max_text_tokens_count = max_text_tokens_count self.max_title_tokens_count = max_title_tokens_count self.records = [] for record in tqdm(original_records): if random.random() > self.sample_rate: continue tensors = convert_to_tensors( tokenizer=tokenizer, title=record["title"], text=record["text"], max_title_tokens_count=self.max_title_tokens_count, max_text_tokens_count=self.max_text_tokens_count ) self.records.append(tensors) def __len__(self): return len(self.records) def __getitem__(self, index): return self.records[index] def train( train_records, val_records, pretrained_model_path, train_sample_rate=1.0, val_sample_rate=1.0, output_model_path="models", checkpoint=None, max_text_tokens_count=256, max_title_tokens_count=64, batch_size=8, logging_steps=1000, eval_steps=10000, save_steps=10000, learning_rate=0.00003, warmup_steps=2000, num_train_epochs=3 ): logging.set_verbosity_info() tokenizer = BertTokenizer.from_pretrained( pretrained_model_path, do_lower_case=False, do_basic_tokenize=False, 
strip_accents=False ) train_dataset = GetTitleDataset( train_records, train_sample_rate, tokenizer, max_text_tokens_count=max_text_tokens_count, max_title_tokens_count=max_title_tokens_count ) val_dataset = GetTitleDataset( val_records, val_sample_rate, tokenizer, max_text_tokens_count=max_text_tokens_count, max_title_tokens_count=max_title_tokens_count ) model = EncoderDecoderModel.from_encoder_decoder_pretrained(pretrained_model_path, pretrained_model_path) training_args = TrainingArguments( output_dir=output_model_path, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, do_train=True, do_eval=True, overwrite_output_dir=False, logging_steps=logging_steps, eval_steps=eval_steps, evaluation_strategy="steps", save_steps=save_steps, learning_rate=learning_rate, warmup_steps=warmup_steps, num_train_epochs=num_train_epochs, max_steps=-1, save_total_limit=1, ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset ) trainer.train(checkpoint) model.save_pretrained(output_model_path) ```
Naturealbe/DialoGPT-small-harrypotter
8b960ebf04da798e17bbd0cd45063f54d9f0df77
2021-09-16T15:37:28.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Naturealbe
null
Naturealbe/DialoGPT-small-harrypotter
288
null
transformers
3,064
--- tags: - conversational --- # Harry Potter DialoGPT Model
PVAbhiram2003/DialoGPT-medium-RickandMorty
000ce8b7ebe21a7e223dfe51014eb765027d5c38
2022-04-06T12:42:44.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
PVAbhiram2003
null
PVAbhiram2003/DialoGPT-medium-RickandMorty
288
null
transformers
3,065
--- tags: - conversational --- # Rick and Morty DialoGPT medium model
aggb/DialogGPT-small-AGGB-B
544850c0c4cf56841a556be8eb5fe5b438af0120
2021-10-11T19:07:31.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
aggb
null
aggb/DialogGPT-small-AGGB-B
288
null
transformers
3,066
--- tags: - conversational --- # AGGB DialoGPT Spanish model
chellver24/DialoGPT-medium-chizuru_ichinose
690b453b5c0521890d8b64b34e6b5444f537268b
2021-12-13T04:43:34.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
chellver24
null
chellver24/DialoGPT-medium-chizuru_ichinose
288
null
transformers
3,067
--- tags: - conversational --- # Chizuru Ichinose DialoGPT Model
imvladikon/wav2vec2-large-xlsr-53-hebrew
27401112929adb476321f06886923e1c4545da80
2021-07-06T06:06:52.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "he", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
imvladikon
null
imvladikon/wav2vec2-large-xlsr-53-hebrew
288
2
transformers
3,068
--- language: he datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Hebrew XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: type: args: he metrics: - name: Test WER type: wer value: --- # wav2vec2-large-xlsr-53-hebrew Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on several downloaded YouTube samples. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "he", split="test[:2%]") # there is no common dataset for Hebrew, please, paste your data processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew") model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on some Hebrew test data: ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "he", split="test") # there is no common dataset for Hebrew, please, paste your data wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew") model = Wav2Vec2ForCTC.from_pretrained("imvladikon/wav2vec2-large-xlsr-53-hebrew").to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets.
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: # Example Predictions
mrm8488/chEMBL_smiles_v1
7496122d635633cccc623333f6dc14d47fda7bbf
2021-05-20T18:16:53.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "en", "transformers", "drugs", "chemist", "drug design", "autotrain_compatible" ]
fill-mask
false
mrm8488
null
mrm8488/chEMBL_smiles_v1
288
1
transformers
3,069
--- language: en tags: - drugs - chemist - drug design widget: - text: "CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)<mask>" --- # *De Novo* Drug Design with MLM ## What is it? An approximation to [Generative Recurrent Networks for De Novo Drug Design](https://onlinelibrary.wiley.com/doi/full/10.1002/minf.201700111) but training a MLM (RoBERTa like) from scratch. ## Why? As mentioned in the paper: Generative artificial intelligence models present a fresh approach to chemogenomics and de novo drug design, as they provide researchers with the ability to narrow down their search of the chemical space and focus on regions of interest. They used a generative *recurrent neural network (RNN)* containing long short‐term memory (LSTM) cell to capture the syntax of molecular representations in terms of SMILES strings. The learned pattern probabilities can be used for de novo SMILES generation. This molecular design concept **eliminates the need for virtual compound library enumeration** and **enables virtual compound design without requiring secondary or external activity prediction**. ## My Goal 🎯 By training a MLM from scratch on 438552 (cleaned*) SMILES I wanted to build a model that learns this kind of molecular combinations so that given a partial SMILE it can generate plausible combinations so that it can be proposed as new drugs. By cleaned SMILES I mean that I used their [SMILES cleaning script](https://github.com/topazape/LSTM_Chem/blob/master/cleanup_smiles.py) to remove duplicates, salts, and stereochemical information. You can see the detailed process of gathering the data, preprocess it and train the LSTM in their [repo](https://github.com/topazape/LSTM_Chem). ## Fast usage with ```pipelines``` 🧪 ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model='/mrm8488/chEMBL_smiles_v1', tokenizer='/mrm8488/chEMBL_smiles_v1' ) # CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)cc1 Atazanavir smile1 = "CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)<mask>" fill_mask(smile1) # Output: ''' [{'score': 0.6040295958518982, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)nc</s>', 'token': 265}, {'score': 0.2185731679201126, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)N</s>', 'token': 50}, {'score': 0.0642734169960022, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)cc</s>', 'token': 261}, {'score': 0.01932266168296337, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)CCCl</s>', 'token': 452}, {'score': 0.005068355705589056, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)C</s>', 'token': 39}] ''' ``` ## More I also created a [second version](https://huggingface.co/mrm8488/chEMBL26_smiles_v2) without applying the cleaning SMILES script mentioned above. You can use it in the same way as this one. ```python fill_mask = pipeline( "fill-mask", model='/mrm8488/chEMBL26_smiles_v2', tokenizer='/mrm8488/chEMBL26_smiles_v2' ) ``` [Original paper](https://www.ncbi.nlm.nih.gov/pubmed/29095571) Authors: <details> Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir–Prelog–Weg 4, 8093, Zurich, Switzerland, Stanford University, Department of Computer Science, 450 Sierra Mall, Stanford, CA, 94305, USA, inSili.com GmbH, 8049, Zurich, Switzerland, Gisbert Schneider, Email: hc.zhte@trebsig. 
</details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
sismetanin/sbert-ru-sentiment-rusentiment
e2eb0a37f1a861456fb98052f9b18eb6acd94114
2021-05-20T06:38:36.000Z
[ "pytorch", "jax", "bert", "text-classification", "ru", "transformers", "sentiment analysis", "Russian" ]
text-classification
false
sismetanin
null
sismetanin/sbert-ru-sentiment-rusentiment
288
null
transformers
3,070
--- language: - ru tags: - sentiment analysis - Russian --- ## SBERT-Large-Base-ru-sentiment-RuSentiment SBERT-Large-ru-sentiment-RuSentiment is a [SBERT-Large](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte. <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>wighted</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td> <td>61.44</td> <td>60.21</td> 
<td>68.34</td> <td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> <td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table> The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @article{Smetanin2021Deep, author = {Sergey Smetanin and Mikhail Komarov}, title = {Deep transfer learning baselines for sentiment analysis in Russian}, journal = {Information Processing & Management}, volume = {58}, number = {3}, pages = {102484}, year = {2021}, issn = {0306-4573}, doi = {10.1016/j.ipm.2020.102484} } ``` Dataset: ``` @inproceedings{rogers2018rusentiment, title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian}, author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex}, booktitle={Proceedings of the 27th international conference on computational linguistics}, pages={755--763}, year={2018} } ```
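Example usage (not shown in the original card): a minimal classification sketch, assuming the checkpoint exposes a standard sequence-classification head. The card does not document the mapping from class indices to RuSentiment labels, so only the raw predicted index is printed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "sismetanin/sbert-ru-sentiment-rusentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Сегодня отличный день!"  # illustrative Russian social-media style post
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index into the RuSentiment label set
```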
ancs21/xlm-roberta-large-vi-qa
99ede8b74fc10b82365e5109a55c58ba38a31c38
2021-09-21T16:01:14.000Z
[ "pytorch", "xlm-roberta", "question-answering", "vi", "transformers", "license:mit", "autotrain_compatible" ]
question-answering
false
ancs21
null
ancs21/xlm-roberta-large-vi-qa
287
3
transformers
3,071
--- language: vi tags: - vi - xlm-roberta widget: - text: Toà nhà nào cao nhất Việt Nam? context: Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng, một dự án có tổng mức đầu tư 40.000 tỷ đồng, do Công ty Cổ phần Đầu tư xây dựng Tân Liên Phát thuộc Vingroup làm chủ đầu tư. Toà tháp cao 81 tầng, hiện tại là toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018. license: mit metrics: - f1 - em --- # XLM-RoBERTa large for QA on Vietnamese languages (also support various languages) ## Overview - Language model: xlm-roberta-large - Fine-tune: [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) - Language: Vietnamese - Downstream-task: Extractive QA - Dataset: [mailong25/bert-vietnamese-question-answering](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset) - Training data: train-v2.0.json (SQuAD 2.0 format) - Eval data: dev-v2.0.json (SQuAD 2.0 format) - Infrastructure: 1x Tesla P100 (Google Colab) ## Performance Evaluated on dev-v2.0.json ``` exact: 136 / 141 f1: 0.9692671394799054 ``` Evaluated on Vietnamese XQuAD: [xquad.vi.json](https://github.com/deepmind/xquad/blob/master/xquad.vi.json) ``` exact: 604 / 1190 f1: 0.7224454217571596 ``` ## Author An Pham (ancs21.ps [at] gmail.com) ## License MIT
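Example usage (sketch, not part of the original card): assuming the standard `question-answering` pipeline works with this checkpoint, the widget example can be answered as follows.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ancs21/xlm-roberta-large-vi-qa",
    tokenizer="ancs21/xlm-roberta-large-vi-qa",
)

result = qa(
    question="Toà nhà nào cao nhất Việt Nam?",
    context="Landmark 81 là toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018.",
)
print(result["answer"], result["score"])
```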
dkminer81/Tromm
4be11050c9111c35fa86af5848f89529ec6403cb
2021-11-10T19:53:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
dkminer81
null
dkminer81/Tromm
287
null
transformers
3,072
--- tags: - conversational --- # A certain person's AI
rahul26/DialoGPT-small-rickandmorty
e78f96edca366d9e945c71eba08d3557cb8d668d
2021-10-20T10:38:17.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
rahul26
null
rahul26/DialoGPT-small-rickandmorty
287
null
transformers
3,073
--- tags: - conversational --- # Rick and Morty DialoGPT Model
Geotrend/bert-base-10lang-cased
ec07fc88489c579e8a48bd21f7061691379a72c6
2022-06-28T08:52:30.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "multilingual", "en", "fr", "es", "de", "zh", "ar", "ru", "pt", "it", "ur", "dataset:wikipedia", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Geotrend
null
Geotrend/bert-base-10lang-cased
286
null
transformers
3,074
--- language: - multilingual - en - fr - es - de - zh - ar - ru - pt - it - ur datasets: wikipedia license: apache-2.0 widget: - text: "Google generated 46 billion [MASK] in revenue." - text: "Paris is the capital of [MASK]." - text: "Algiers is the largest city in [MASK]." - text: "Paris est la [MASK] de la France." - text: "Paris est la capitale de la [MASK]." - text: "L'élection américaine a eu [MASK] en novembre 2020." - text: "تقع سويسرا في [MASK] أوروبا" - text: "إسمي محمد وأسكن في [MASK]." --- # bert-base-10lang-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. This model handles the following languages: english, french, spanish, german, chinese, arabic, russian, portuguese, italian, and urdu. It produces the same representations as [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) while being 22.5% smaller in size. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-10lang-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-10lang-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any question, feedback or request.
Koro/DialoGPT-medium-rickandmorty
cd9badcca9854d419f7ce84d98636a125f640e34
2021-09-24T21:24:04.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Koro
null
Koro/DialoGPT-medium-rickandmorty
286
null
transformers
3,075
--- tags: - conversational --- # Rick and Morty DialoGPT Model
Skywhy/DialoGPT-medium-Churchyy
4d0cddac8bb0daf0f5928437bf7d08d349e056bf
2022-01-12T21:17:19.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Skywhy
null
Skywhy/DialoGPT-medium-Churchyy
286
null
transformers
3,076
--- tags: - conversational --- # Harry Potter DialoGPT Model
dbsamu/distilbert-base-uncased-finetuned-ner
b4c3f9ebb88697c3554177c4e5d09e8bb0f31863
2022-01-20T10:30:26.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:wikiann", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
dbsamu
null
dbsamu/distilbert-base-uncased-finetuned-ner
286
null
transformers
3,077
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: en metrics: - name: Precision type: precision value: 0.8120642485217545 - name: Recall type: recall value: 0.830235495804385 - name: F1 type: f1 value: 0.8210493441599 - name: Accuracy type: accuracy value: 0.9203828724683252 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2781 - Precision: 0.8121 - Recall: 0.8302 - F1: 0.8210 - Accuracy: 0.9204 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3504 | 1.0 | 1250 | 0.2922 | 0.7930 | 0.8075 | 0.8002 | 0.9115 | | 0.2353 | 2.0 | 2500 | 0.2711 | 0.8127 | 0.8264 | 0.8195 | 0.9196 | | 0.1745 | 3.0 | 3750 | 0.2781 | 0.8121 | 0.8302 | 0.8210 | 0.9204 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
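Example usage (a sketch added here, not generated by the Trainer): the model should work with the standard `token-classification` pipeline, with entity labels drawn from the WikiANN (en) tag set.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dbsamu/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Barack Obama was born in Hawaii and worked in Washington."))
```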
flax-community/t5-recipe-generation
712873640d0bdd20e90d6bb0d375de14f079c6a4
2021-07-26T20:19:20.000Z
[ "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "en", "transformers", "seq2seq", "text-generation", "recipe-generation", "autotrain_compatible" ]
text2text-generation
false
flax-community
null
flax-community/t5-recipe-generation
286
6
transformers
3,078
--- language: en tags: - seq2seq - t5 - text-generation - recipe-generation pipeline_tag: text2text-generation widget: - text: "provolone cheese, bacon, bread, ginger" - text: "sugar, crunchy jif peanut butter, cornflakes" - text: "sweet butter, confectioners sugar, flaked coconut, condensed milk, nuts, vanilla, dipping chocolate" - text: "macaroni, butter, salt, bacon, milk, flour, pepper, cream corn" - text: "hamburger, sausage, onion, regular, american cheese, colby cheese" - text: "chicken breasts, onion, garlic, great northern beans, black beans, green chilies, broccoli, garlic oil, butter, cajun seasoning, salt, oregano, thyme, black pepper, basil, worcestershire sauce, chicken broth, sour cream, chardonnay wine" - text: "serrano peppers, garlic, celery, oregano, canola oil, vinegar, water, kosher salt, salt, black pepper" --- ![avatar](chef-transformer.png) # Chef Transformer (T5) > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/recipe-generation-model/7475), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. Want to give it a try? Then what's the wait, head over to Hugging Face Spaces [here](https://huggingface.co/spaces/flax-community/chef-transformer). ## Team Members - Mehrdad Farahani ([m3hrdadfi](https://huggingface.co/m3hrdadfi)) - Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv)) - Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk)) - Deepak Pandian ([rays2pix](https://huggingface.co/rays2pix)) - Nicholas Broad ([nbroad](https://huggingface.co/nbroad)) ## Dataset [RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://recipenlg.cs.put.poznan.pl/). This dataset contains **2,231,142** cooking recipes (>2 millions) with size of **2.14 GB**. It's processed in more careful way. ### Example ```json { "NER": [ "oyster crackers", "salad dressing", "lemon pepper", "dill weed", "garlic powder", "salad oil" ], "directions": [ "Combine salad dressing mix and oil.", "Add dill weed, garlic powder and lemon pepper.", "Pour over crackers; stir to coat.", "Place in warm oven.", "Use very low temperature for 15 to 20 minutes." ], "ingredients": [ "12 to 16 oz. plain oyster crackers", "1 pkg. Hidden Valley Ranch salad dressing mix", "1/4 tsp. lemon pepper", "1/2 to 1 tsp. dill weed", "1/4 tsp. garlic powder", "3/4 to 1 c. 
salad oil" ], "link": "www.cookbooks.com/Recipe-Details.aspx?id=648947", "source": "Gathered", "title": "Hidden Valley Ranch Oyster Crackers" } ``` ## How To Use ```bash # Installing requirements pip install transformers ``` ```python from transformers import FlaxAutoModelForSeq2SeqLM from transformers import AutoTokenizer MODEL_NAME_OR_PATH = "flax-community/t5-recipe-generation" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, use_fast=True) model = FlaxAutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME_OR_PATH) prefix = "items: " # generation_kwargs = { # "max_length": 512, # "min_length": 64, # "no_repeat_ngram_size": 3, # "early_stopping": True, # "num_beams": 5, # "length_penalty": 1.5, # } generation_kwargs = { "max_length": 512, "min_length": 64, "no_repeat_ngram_size": 3, "do_sample": True, "top_k": 60, "top_p": 0.95 } special_tokens = tokenizer.all_special_tokens tokens_map = { "<sep>": "--", "<section>": "\n" } def skip_special_tokens(text, special_tokens): for token in special_tokens: text = text.replace(token, "") return text def target_postprocessing(texts, special_tokens): if not isinstance(texts, list): texts = [texts] new_texts = [] for text in texts: text = skip_special_tokens(text, special_tokens) for k, v in tokens_map.items(): text = text.replace(k, v) new_texts.append(text) return new_texts def generation_function(texts): _inputs = texts if isinstance(texts, list) else [texts] inputs = [prefix + inp for inp in _inputs] inputs = tokenizer( inputs, max_length=256, padding="max_length", truncation=True, return_tensors="jax" ) input_ids = inputs.input_ids attention_mask = inputs.attention_mask output_ids = model.generate( input_ids=input_ids, attention_mask=attention_mask, **generation_kwargs ) generated = output_ids.sequences generated_recipe = target_postprocessing( tokenizer.batch_decode(generated, skip_special_tokens=False), special_tokens ) return generated_recipe ``` ```python items = [ "macaroni, butter, salt, bacon, milk, flour, pepper, cream corn", "provolone cheese, bacon, bread, ginger" ] generated = generation_function(items) for text in generated: sections = text.split("\n") for section in sections: section = section.strip() if section.startswith("title:"): section = section.replace("title:", "") headline = "TITLE" elif section.startswith("ingredients:"): section = section.replace("ingredients:", "") headline = "INGREDIENTS" elif section.startswith("directions:"): section = section.replace("directions:", "") headline = "DIRECTIONS" if headline == "TITLE": print(f"[{headline}]: {section.strip().capitalize()}") else: section_info = [f" - {i+1}: {info.strip().capitalize()}" for i, info in enumerate(section.split("--"))] print(f"[{headline}]:") print("\n".join(section_info)) print("-" * 130) ``` Output: ```text [TITLE]: Macaroni and corn [INGREDIENTS]: - 1: 2 c. macaroni - 2: 2 tbsp. butter - 3: 1 tsp. salt - 4: 4 slices bacon - 5: 2 c. milk - 6: 2 tbsp. flour - 7: 1/4 tsp. pepper - 8: 1 can cream corn [DIRECTIONS]: - 1: Cook macaroni in boiling salted water until tender. - 2: Drain. - 3: Melt butter in saucepan. - 4: Blend in flour, salt and pepper. - 5: Add milk all at once. - 6: Cook and stir until thickened and bubbly. - 7: Stir in corn and bacon. - 8: Pour over macaroni and mix well. 
---------------------------------------------------------------------------------------------------------------------------------- [TITLE]: Grilled provolone and bacon sandwich [INGREDIENTS]: - 1: 2 slices provolone cheese - 2: 2 slices bacon - 3: 2 slices sourdough bread - 4: 2 slices pickled ginger [DIRECTIONS]: - 1: Place a slice of provolone cheese on one slice of bread. - 2: Top with a slice of bacon. - 3: Top with a slice of pickled ginger. - 4: Top with the other slice of bread. - 5: Heat a skillet over medium heat. - 6: Place the sandwich in the skillet and cook until the cheese is melted and the bread is golden brown. ---------------------------------------------------------------------------------------------------------------------------------- ``` ## Evaluation Since the test set is not available, we will evaluate the model based on a shared test set. This test set consists of 5% of the whole test (*= 5,000 records*), and we will generate five recipes for each input(*= 25,000 records*). The following table summarizes the scores obtained by the **Chef Transformer** and **RecipeNLG** as our baseline. | Model | COSIM | WER | ROUGE-2 | BLEU | GLEU | METEOR | |:------------------------------------------------------------------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:| | [RecipeNLG](https://huggingface.co/mbien/recipenlg) | 0.5723 | 1.2125 | 0.1354 | 0.1164 | 0.1503 | 0.2309 | | [Chef Transformer](huggingface.co/flax-community/t5-recipe-generation) * | **0.7282** | **0.7613** | **0.2470** | **0.3245** | **0.2624** | **0.4150** | *From the 5 generated recipes corresponding to each NER (food items), only the highest score was taken into account in the WER, COSIM, and ROUGE metrics. At the same time, BLEU, GLEU, Meteor were designed to have many possible references.* ## Copyright Special thanks to those who provided these fantastic materials. - [Anatomy](https://www.flaticon.com/free-icon) - [Chef Hat](https://www.vecteezy.com/members/jellyfishwater) - [Moira Nazzari](https://pixabay.com/photos/food-dessert-cake-eggs-butter-3048440/) - [Instagram Post](https://www.freepik.com/free-psd/recipes-ad-social-media-post-template_11520617.htm)
munezah/DialoGPT-small-sherlock
749a64370ed5c20c23bdb07a8ea69ed4683f1828
2021-08-28T16:59:58.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
munezah
null
munezah/DialoGPT-small-sherlock
286
null
transformers
3,079
--- tags: - conversational --- # Sherlock DialoGPT Model
sudip/bot1
a65a1f4d954416e7df4f3cc7be9f9f631c5e4db5
2021-09-02T15:45:43.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
sudip
null
sudip/bot1
286
null
transformers
3,080
--- tags: - conversational --- # Harry Potter DialoGPT Model
trueto/medbert-base-wwm-chinese
abece55f104ed428100c96f0a3ae2cb4996613bd
2021-05-20T08:09:44.000Z
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
null
false
trueto
null
trueto/medbert-base-wwm-chinese
286
1
transformers
3,081
# [medbert](https://github.com/trueto/medbert) This project releases the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing". ## Evaluation benchmarks We built a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER), a Chinese medical question-question matching dataset (CMedQQ) and a Chinese clinical text classification dataset (CCTC). | **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** | | ---- | ---- | ---- |---- |---- |:----:| | CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) | | CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 | | CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) | | CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 | ## Released models MedBERT and MedAlbert were obtained by pre-training BERT and ALBERT models on a 650-million-character corpus of Chinese clinical natural language text. ## Performance Performance of each model under the same experimental environment, with the same training parameters and scripts: | **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** | | :---- | :----: | :----: | :----: | :----: | | [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% | | [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% | | [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% | | MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** | |MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% | |MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% | |- | - | - | - | - | | [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% | | MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% | |MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** | ## Citation ``` 杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03. ```
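A minimal feature-extraction sketch (not part of the original card) is shown below; it assumes the checkpoint loads as a standard BERT encoder. The benchmark numbers above come from task-specific fine-tuning, which is not reproduced here.

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("trueto/medbert-base-wwm-chinese")
model = BertModel.from_pretrained("trueto/medbert-base-wwm-chinese")

text = "患者主诉头痛三天。"  # illustrative clinical sentence: "The patient complains of a headache for three days."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```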
rajkumarrrk/t5-base-fine-tuned-on-cnn-dm
4adbfb55f65d1ee515dc3b73ce1a7e6625387ef4
2022-07-11T11:41:58.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
rajkumarrrk
null
rajkumarrrk/t5-base-fine-tuned-on-cnn-dm
286
null
transformers
3,082
--- license: apache-2.0 --- T5-base fine-tuned on the CNN/DailyMail summarization dataset. Training args: ``` { "learning_rate": 0.0001, "logging_steps": 5000, "lr_scheduler_type": "cosine", "num_train_epochs": 2, "per_device_train_batch_size": 16, # total batch size of 48 "save_total_limit": 1, "weight_decay": 0.1 } ``` Generation kwargs: ``` { "do_sample": true, "max_new_tokens": 100, "min_length": 50, "temperature": 0.7, "top_k": 0 } ``` Pre-processing: prepend the prefix "Summarize: " to each input article. Post-processing: none. Test split metrics: ``` {"lexical/meteor": 0.30857827917561603, "lexical/rouge_rouge1": 0.41099971702474514, "lexical/rouge_rouge2": 0.17676173608661166, "lexical/rouge_rougeL": 0.2759112075051335, "lexical/rouge_rougeLsum": 0.34316108028094616, "lexical/bleu": 0.10747816852428271, "semantic/bert_score": 0.8760301497472277} ```
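Putting the pieces above together, a minimal inference sketch could look like the following. The prefix and generation kwargs mirror the values listed above; the `T5Tokenizer`/`T5ForConditionalGeneration` classes and the placeholder `article_text` are assumptions for illustration.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "rajkumarrrk/t5-base-fine-tuned-on-cnn-dm"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

article_text = "..."  # replace with a full news article
input_ids = tokenizer("Summarize: " + article_text, return_tensors="pt", truncation=True).input_ids

# Sample a summary with the generation kwargs reported in this card
summary_ids = model.generate(
    input_ids,
    do_sample=True,
    max_new_tokens=100,
    min_length=50,
    temperature=0.7,
    top_k=0,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```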
Atchuth/DialoGPT-small-MichaelBot
ddb982edf090f702bdef26207d6070da02b3edc8
2022-02-12T09:31:27.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Atchuth
null
Atchuth/DialoGPT-small-MichaelBot
285
null
transformers
3,083
--- tags: - conversational --- # Michael Scott DialoGPT Model
TVLG/DialoGPT-small-Iroh-Bot
412c85c5effa5dd3c101c2ef67046ae4a96033bc
2021-08-31T19:01:17.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
TVLG
null
TVLG/DialoGPT-small-Iroh-Bot
285
null
transformers
3,084
--- tags: - conversational --- # Iroh DialoGPT Model
ignkai/DialoGPT-medium-spider-man-updated
dccdfb1cb990851a6dbeb7e3941bdf2488c0c90b
2021-08-26T18:35:59.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ignkai
null
ignkai/DialoGPT-medium-spider-man-updated
285
null
transformers
3,085
--- tags: - conversational --- # MCU Peter Parker DialoGPT Model
yongzx/gpt2-finetuned-oscar-ko
c0f865438702d384106f0a9bd23471d2a47feff1
2021-12-09T06:53:05.000Z
[ "pytorch", "gpt2", "feature-extraction", "ko", "dataset:oscar", "transformers", "text-generation", "license:mit" ]
feature-extraction
false
yongzx
null
yongzx/gpt2-finetuned-oscar-ko
285
null
transformers
3,086
--- language: - ko tags: - text-generation license: mit datasets: - oscar widget: - text: "모든사람은교육을 " --- # GPT-2 finetuned on Korean Dataset ### Tokenizer We first trained a tokenizer on OSCAR's `unshuffled_original_ko` Korean data subset by following the training of GPT2 tokenizer (same vocab size of 50,257). Here's the [Python file](https://github.com/bigscience-workshop/multilingual-modeling/blob/gpt2-ko/experiments/exp-001/train_tokenizer_gpt2.py) for the training. ### Model We finetuned the `wte` and `wpe` layers of GPT-2 (while freezing the parameters of all other layers) on OSCAR's `unshuffled_original_ko` Korean data subset. We used [Huggingface's code](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) for fine-tuning the causal language model GPT-2, but with the following parameters changed ``` - preprocessing_num_workers: 8 - per_device_train_batch_size: 2 - gradient_accumulation_steps: 4 - per_device_eval_batch_size: 2 - eval_accumulation_steps: 4 - eval_steps: 1000 - evaluation_strategy: "steps" - max_eval_samples: 5000 ``` **Training details**: total training steps: 688000, effective train batch size per step: 32, max tokens per batch: 1024)
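The card stops at the training details; a minimal generation sketch is given below. It assumes the fine-tuned checkpoint still loads as a causal LM (it was trained with `run_clm.py`) and reuses the widget prompt from the metadata.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "yongzx/gpt2-finetuned-oscar-ko"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_ids = tokenizer("모든사람은교육을 ", return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```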
IDEA-CCNL/Randeng-BART-139M
e9eeeea97bf541d9df4fd670dd0e6344450fcbbe
2022-04-26T06:27:03.000Z
[ "pytorch", "bart", "text2text-generation", "zh", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
IDEA-CCNL
null
IDEA-CCNL/Randeng-BART-139M
285
null
transformers
3,087
--- language: - zh license: apache-2.0 inference: true widget: - text: "桂林市是世界闻名<mask> ,它有悠久的<mask>" --- # Randeng-BART-139M model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). A 139-million-parameter Randeng-BART model with a standard Transformer structure, trained on 180 GB of Chinese data for 3 days on 8 A100 (40 GB) GPUs. ## Task Description Randeng-BART-139M is pre-trained with the text-infilling task from the BART [paper](https://readpaper.com/pdf-annotate/note?noteId=675945911766249472&pdfId=550970997159968917). ## Usage ```python from transformers import BartForConditionalGeneration, AutoTokenizer, Text2TextGenerationPipeline import torch tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Randeng-BART-139M', use_fast=False) model = BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-BART-139M') text = '桂林市是世界闻名<mask> ,它有悠久的<mask>' text2text_generator = Text2TextGenerationPipeline(model, tokenizer) print(text2text_generator(text, max_length=50, do_sample=False)) ``` ## Citation If you find this resource useful, please cite the following website in your paper. ``` @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2022}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
laituan245/molt5-large-smiles2caption
e307e7a1f959d4c9ba6553ce656440cd5d3b4660
2022-05-03T18:08:31.000Z
[ "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/molt5-large-smiles2caption
285
null
transformers
3,088
--- license: apache-2.0 --- This model can be used to generate a natural-language caption from an input SMILES string. ## Example Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-large-smiles2caption", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large-smiles2caption') input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O' input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, num_beams=5, max_length=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
IlyaGusev/rut5_base_sum_gazeta
f09a08cae5d74c70e55da1a6ebb49f88c26f433b
2022-07-13T15:36:04.000Z
[ "pytorch", "t5", "text2text-generation", "ru", "dataset:IlyaGusev/gazeta", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
IlyaGusev
null
IlyaGusev/rut5_base_sum_gazeta
284
null
transformers
3,089
--- language: - ru tags: - summarization - t5 datasets: - IlyaGusev/gazeta license: - apache-2.0 inference: parameters: no_repeat_ngram_size: 4 widget: - text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо." example_title: "Википедия" - text: "С 1 сентября в России вступают в силу поправки в закон «О банкротстве» — теперь должники смогут освобождаться от непосильных обязательств во внесудебном порядке, если сумма задолженности составляет не менее 50 тыс. рублей и не превышает 500 тыс. рублей без учета штрафов, пени, процентов за просрочку платежа и прочих имущественных или финансовых санкций. У физлиц и индивидуальных предпринимателей появилась возможность пройти процедуру банкротства без участия суда и финансового управляющего — достаточно подать соответствующее заявление через МФЦ. Сумму задолженности и список всех известных заявителю кредиторов нужно предоставить самостоятельно. Если все условия соблюдены, сведения внесут в Единый федеральный реестр в течение трех рабочих дней. При этом на момент подачи заявления в отношении заявителя должно быть окончено исполнительное производство с возвращением исполнительного документа взыскателю. Это значит, что у потенциального банкрота не должно быть имущества, которое можно взыскать. Кроме того, в отношении гражданина не должно быть возбуждено другое исполнительное производство. В период всей процедуры заявитель не сможет брать займы, кредиты, выдавать поручительства, совершать иные обеспечительные сделки. Внесудебное банкротство будет длиться шесть месяцев, в течение которых также будет действовать мораторий на удовлетворение требований кредиторов, отмеченных в заявлении должника, и мораторий об уплате обязательных платежей. Кроме того, прекращается начисление неустоек и иных финансовых санкций; имущественные взыскания (кроме алиментов) также будут приостановлены. По завершению процедуры заявителя освободят от дальнейшего выполнения требований кредиторов, указанных в заявлении о признании его банкротом, а эта задолженность признается безнадежной. В прошлом месяце стало известно, что за первое полугодие 2020 года российские суды признали банкротами 42,7 тыс. граждан (в том числе индивидуальных предпринимателей) — по данным единого реестра «Федресурс», это на 47,2% больше показателя аналогичного периода 2019 года. Рост числа обанкротившихся граждан во втором квартале по сравнению с первым замедлился — такая динамика обусловлена тем, что в период ограничений с 19 марта по 11 мая суды редко рассматривали банкротные дела компаний и меньше, чем обычно, в отношении граждан, объяснял руководитель проекта «Федресурс» Алексей Юхнин. Он прогнозирует, что во втором полугодии мы увидим рост показателя, когда суды рассмотрят все дела, что не смогли ранее в режиме ограничений. 
По его данным, уже в июне число личных банкротств выросло до 11,5 тыс., что в два раза превышает показатель аналогичного периода 2019 года." example_title: "Новости" - text: "Актуальность проблемы. Электронная информация играет все большую роль во всех сферах жизни современного общества. В последние годы объем научно-технической текстовой информации в электронном виде возрос настолько, что возникает угроза обесценивания этой информации в связи с трудностями поиска необходимых сведений среди множества доступных текстов. Развитие информационных ресурсов Интернет многократно усугубило проблему информационной перегрузки. В этой ситуации особенно актуальными становятся методы автоматизации реферирования текстовой информации, то есть методы получения сжатого представления текстовых документов–рефератов (аннотаций). Постановка проблемы автоматического реферирования текста и соответственно попытки ее решения с использованием различных подходов предпринимались многими исследователями. История применения вычислительной техники для реферирования насчитывает уже более 50 лет и связана с именами таких исследователей, как Г.П. Лун, В.Е. Берзон, И.П. Cевбо, Э.Ф. Скороходько, Д.Г. Лахути, Р.Г. Пиотровский и др. За эти годы выработаны многочисленные подходы к решению данной проблемы, которые достаточно четко подразделяются на два направления: автоматическое реферирование, основанное на экстрагировании из первичных документов с помощью определенных формальных признаков «наиболее информативных» фраз (фрагментов), совокупность которых образует некоторый экстракт; автоматическое реферирование, основанное на выделении из текстов с помощью специальных информационных языков наиболее существенной информации и порождении новых текстов (рефератов), содержательно обобщающих первичные документы." example_title: "Научная статья" --- # RuT5SumGazeta ## Model description This is the model for abstractive summarization for Russian based on [rut5-base](https://huggingface.co/cointegrated/rut5-base). ## Intended uses & limitations #### How to use Colab: [link](https://colab.research.google.com/drive/1re5E26ZIDUpAx1gOCZkbF3hcwjozmgG0) ```python from transformers import AutoTokenizer, T5ForConditionalGeneration model_name = "IlyaGusev/rut5_base_sum_gazeta" tokenizer = AutoTokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) article_text = "..." 
input_ids = tokenizer( [article_text], max_length=600, add_special_tokens=True, padding="max_length", truncation=True, return_tensors="pt" )["input_ids"] output_ids = model.generate( input_ids=input_ids, no_repeat_ngram_size=4 )[0] summary = tokenizer.decode(output_ids, skip_special_tokens=True) print(summary) ``` ## Training data - Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta) ## Training procedure - Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py) - Config: [t5_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/t5_training_config.json) ## Eval results * Train dataset: **Gazeta v1 train** * Test dataset: **Gazeta v1 test** * Source max_length: **600** * Target max_length: **200** * no_repeat_ngram_size: **4** * num_beams: **5** | Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length | |:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----| | [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 | | [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 | | [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 | * Train dataset: **Gazeta v1 train** * Test dataset: **Gazeta v2 test** * Source max_length: **600** * Target max_length: **200** * no_repeat_ngram_size: **4** * num_beams: **5** | Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length | |:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----| | [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 | | [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 | | [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 | Predicting all summaries: ```python import json import torch from transformers import AutoTokenizer, T5ForConditionalGeneration from datasets import load_dataset def gen_batch(inputs, batch_size): batch_start = 0 while batch_start < len(inputs): yield inputs[batch_start: batch_start + batch_size] batch_start += batch_size def predict( model_name, input_records, output_file, max_source_tokens_count=600, batch_size=8 ): device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name).to(device) predictions = [] for batch in gen_batch(input_records, batch_size): texts = [r["text"] for r in batch] input_ids = tokenizer( texts, add_special_tokens=True, max_length=max_source_tokens_count, padding="max_length", truncation=True, return_tensors="pt" )["input_ids"].to(device) output_ids = model.generate( input_ids=input_ids, no_repeat_ngram_size=4 ) summaries = tokenizer.batch_decode(output_ids, skip_special_tokens=True) for s in summaries: print(s) predictions.extend(summaries) with open(output_file, "w") as w: for p in predictions: w.write(p.strip().replace("\n", " ") + "\n") gazeta_test = load_dataset('IlyaGusev/gazeta', script_version="v1.0")["test"] predict("IlyaGusev/rut5_base_sum_gazeta", list(gazeta_test), 
"t5_predictions.txt") ``` Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py) Flags: --language ru --tokenize-after --lower
Narrativa/mT5-base-finetuned-tydiQA-xqa
54447b261a0bbcc1e7bb059771526ebe416d8593
2021-08-23T09:57:00.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "multilingual", "dataset:tydiqa", "arxiv:2010.11934", "transformers", "autotrain_compatible" ]
text2text-generation
false
Narrativa
null
Narrativa/mT5-base-finetuned-tydiQA-xqa
284
2
transformers
3,090
--- language: multilingual datasets: - tydiqa widget: - text: "question: what does she do? context: Sofía has a degree in Communications and public relations agency experience where she was in charge of monitoring and managing PR strategy including relations with the media and journalists." --- # mT5-base fine-tuned on TyDiQA for multilingual QA 🗺📖❓ [Google's mT5-base](https://huggingface.co/google/mt5-base) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for the **multilingual Q&A** downstream task. ## Details of mT5 [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Details of the dataset 📚 **TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but don't know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).
| Dataset | Task | Split | # samples | | -------- | ----- |------| --------- | | TyDi QA | GoldP | train| 49881 | | TyDi QA | GoldP | valid| 5077 | ## Results on validation dataset 📝 | Metric | # Value | | ------ | --------- | | **EM** | **60.88** | ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') tokenizer = AutoTokenizer.from_pretrained("Narrativa/mT5-base-finetuned-tydiQA-xqa") model = AutoModelForSeq2SeqLM.from_pretrained("Narrativa/mT5-base-finetuned-tydiQA-xqa").to(device) def get_response(question, context, max_length=32): input_text = 'question: %s context: %s' % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'].to(device), attention_mask=features['attention_mask'].to(device), max_length=max_length) return tokenizer.decode(output[0]) # Some examples in different languages context = 'HuggingFace won the best Demo paper at EMNLP2020.' question = 'What won HuggingFace?' get_response(question, context) context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.' question = 'Qué ganó HuggingFace?' get_response(question, context) context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.' question = 'Что победило в HuggingFace?' get_response(question, context) ``` Created by: [Narrativa](https://www.narrativa.com/) About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
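The TyDi QA data used for fine-tuning can be inspected directly with the `datasets` library. The snippet below is a small exploration sketch that is not part of the original card; it assumes the GoldP splits listed above correspond to the `secondary_task` configuration of the Hub's `tydiqa` dataset.

```python
from datasets import load_dataset

# Exploration sketch (assumption: the GoldP splits above map to the
# "secondary_task" configuration of the tydiqa dataset on the Hub).
tydiqa = load_dataset("tydiqa", "secondary_task")

print(tydiqa)                    # split names and sizes
sample = tydiqa["train"][0]
print(sample["question"])        # natural-language question
print(sample["context"][:200])   # passage containing the answer
print(sample["answers"])         # gold answer text and character offsets
```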
ayush19/rick-sanchez
6d60abf10b45df3b208b605e82a809b3671e6356
2021-09-04T06:24:59.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ayush19
null
ayush19/rick-sanchez
284
1
transformers
3,091
--- tags: - conversational --- # RudeRick Discord bot
cl-tohoku/bert-large-japanese-char
f5ceff35f899334a0440b1dc818dfbfc9e23d58a
2021-09-23T13:45:39.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
cl-tohoku
null
cl-tohoku/bert-large-japanese-char
284
1
transformers
3,092
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia widget: - text: 東北大学で[MASK]の研究をしています。 --- # BERT large Japanese (character-level tokenization with whole word masking, jawiki-20200831) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in the [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by character-level tokenization. Additionally, the model is trained with whole word masking enabled for the masked language modeling (MLM) objective. The code for the pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0). ## Model architecture The model architecture is the same as the original BERT large model; 24 layers, 1024 dimensions of hidden states, and 16 attention heads. ## Training Data The models are trained on the Japanese version of Wikipedia. The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020. The generated corpus files are 4.0GB in total, containing approximately 30M sentences. We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with the [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences. ## Tokenization The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into characters. The vocabulary size is 6144. We used the [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization. ## Training The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once. For training of each model, we used a v3-8 instance of Cloud TPUs provided by the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program. The training took about 5 days to finish. ## Licenses The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license. ## Acknowledgments This model is trained with Cloud TPUs provided by the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
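The card above does not include a usage snippet. As a minimal sketch (not part of the original card), the checkpoint can be exercised through the fill-mask pipeline, assuming the `fugashi` and `unidic-lite` packages mentioned in the Tokenization section are installed:

```python
from transformers import pipeline

# Minimal usage sketch; the tokenizer relies on MeCab via fugashi with the
# unidic-lite dictionary, so both packages must be installed.
fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-large-japanese-char")

for prediction in fill_mask("東北大学で[MASK]の研究をしています。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```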
ericklasco/DialoGPT-small-erickHarryPotter
34ccd5db7af6b1d48977af8f4a9fdbb1f13fdd9e
2021-08-27T13:13:46.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ericklasco
null
ericklasco/DialoGPT-small-erickHarryPotter
284
null
transformers
3,093
--- tags: - conversational --- # Harry Potter DialoGPT Model
flax-community/dansk-gpt-wiki
b452553d01fe86746a5278641f9fc4ba1a55a02d
2021-07-17T07:46:51.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "da", "transformers" ]
text-generation
false
flax-community
null
flax-community/dansk-gpt-wiki
284
2
transformers
3,094
--- language: da widget: - text: "Jeg elsker livet" --- # GPT2-dansk-wikipedia A Danish GPT2-style model trained using the Flax CLM pipeline on the Danish part of the wiki40b dataset. https://huggingface.co/datasets/wiki40b ## Model series This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX challenge. ## GPT models ## Swedish Gpt https://huggingface.co/birgermoell/swedish-gpt/ ## Swedish gpt wiki https://huggingface.co/flax-community/swe-gpt-wiki ## Nordic gpt wiki https://huggingface.co/flax-community/nordic-gpt-wiki ## Dansk gpt wiki https://huggingface.co/flax-community/dansk-gpt-wiki ## Norsk gpt wiki https://huggingface.co/flax-community/norsk-gpt-wiki ## Roberta models ## Nordic Roberta Wiki https://huggingface.co/flax-community/nordic-roberta-wiki ## Swe Roberta Wiki Oscar https://huggingface.co/flax-community/swe-roberta-wiki-oscar ## Roberta Swedish Scandi https://huggingface.co/birgermoell/roberta-swedish-scandi ## Roberta Swedish https://huggingface.co/birgermoell/roberta-swedish ## Swedish T5 model https://huggingface.co/birgermoell/t5-base-swedish ## Data cleaning and preprocessing The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for beam_runner to make the dataset work. ```python from datasets import load_dataset def load_and_clean_wiki(): dataset = load_dataset('wiki40b', 'da', beam_runner='DirectRunner', split="train") #dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner') dataset = dataset.remove_columns(['wikidata_id', 'version_id']) filtered_dataset = dataset.map(filter_wikipedia) # filtered_dataset[:3] # print(filtered_dataset[:3]) return filtered_dataset def filter_wikipedia(batch): batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n")) batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n")) batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n")) batch["text"] = " ".join(batch["text"].split("_NEWLINE_")) batch["text"] = " ".join(batch["text"].split("\xa0")) return batch ``` ## Training script The following training script was used to train the model. ```bash ./run_clm_flax.py --output_dir="${MODEL_DIR}" --model_type="gpt2" --config_name="${MODEL_DIR}" --tokenizer_name="${MODEL_DIR}" --dataset_name="wiki40b" --dataset_config_name="da" --do_train --do_eval --block_size="512" --per_device_train_batch_size="64" --per_device_eval_batch_size="64" --learning_rate="5e-3" --warmup_steps="1000" --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" --overwrite_output_dir --num_train_epochs="20" --logging_steps="500" --save_steps="1000" --eval_steps="2500" --push_to_hub ```
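For completeness, generating text with the trained checkpoint can be sketched as follows. This snippet is not part of the original card; it simply assumes the checkpoint behaves like any other GPT-2 style model under the standard text-generation pipeline.

```python
from transformers import pipeline

# Generation sketch (assumption: the standard GPT-2 generation API applies).
generator = pipeline("text-generation", model="flax-community/dansk-gpt-wiki")

output = generator("Jeg elsker livet", max_length=50, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```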
fleek/wav2vec-large-xlsr-korean
72f1724a95b56ea37c8f2d0a310859db09bd1db2
2021-07-06T03:24:07.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
fleek
null
fleek/wav2vec-large-xlsr-korean
284
null
transformers
3,095
Entry not found
hyunwoongko/reddit-3B
2c1fe21ab2a565756cae55c2136a8e3786701ac2
2021-06-22T15:53:45.000Z
[ "pytorch", "blenderbot", "text2text-generation", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "transformers", "convAI", "conversational", "facebook", "license:apache-2.0", "autotrain_compatible" ]
conversational
false
hyunwoongko
null
hyunwoongko/reddit-3B
284
3
transformers
3,096
--- language: - en thumbnail: tags: - convAI - conversational - facebook license: apache-2.0 datasets: - blended_skill_talk metrics: - perplexity --- ## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
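The card does not include inference code. A minimal sketch is given below; it assumes the checkpoint loads with the standard Blenderbot seq2seq classes implied by the repository tags, and the generation settings are illustrative only.

```python
from transformers import AutoTokenizer, BlenderbotForConditionalGeneration

# Sketch only: assumes this 3B checkpoint follows the standard Blenderbot
# encoder-decoder interface (as facebook/blenderbot-3B does).
name = "hyunwoongko/reddit-3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer(["Hello, how was your day?"], return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```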
ishraaqparvez/DialoGPT-small-harrypotter
2a97cd0ad8e72ebbb89f527358326fa5b34614e1
2021-08-27T05:54:14.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ishraaqparvez
null
ishraaqparvez/DialoGPT-small-harrypotter
284
null
transformers
3,097
--- tags: - conversational --- # Harry Potter DialoGPT Model
lassl/bert-ko-base
51ecc13218a37096debce75fa021ed05aed64f2e
2022-02-19T09:50:35.000Z
[ "pytorch", "bert", "pretraining", "ko", "transformers", "fill-mask", "korean", "lassl", "license:apache-2.0" ]
fill-mask
false
lassl
null
lassl/bert-ko-base
284
1
transformers
3,098
--- license: apache-2.0 language: ko tags: - fill-mask - korean - lassl mask_token: "[MASK]" widget: - text: 대한민국의 수도는 [MASK] 입니다. --- # LASSL bert-ko-base ## How to use ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("lassl/bert-ko-base") tokenizer = AutoTokenizer.from_pretrained("lassl/bert-ko-base") ``` ## Evaluation Evaluation results will be released soon. ## Corpora This model was trained on 702,437 examples (containing 3,596,465,664 tokens) extracted from the corpora listed below. For details of the training setup, see `config.json`. ```bash corpora/ ├── [707M] kowiki_latest.txt ├── [ 26M] modu_dialogue_v1.2.txt ├── [1.3G] modu_news_v1.1.txt ├── [9.7G] modu_news_v2.0.txt ├── [ 15M] modu_np_v1.1.txt ├── [1008M] modu_spoken_v1.2.txt ├── [6.5G] modu_written_v1.0.txt └── [413M] petition.txt ```
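Since the snippet above only loads the bare encoder, a masked-language-modeling sketch (not part of the original card) using the widget example might look like this:

```python
from transformers import pipeline

# Fill-mask sketch using the widget example from the card; assumes the
# checkpoint's MLM head is available to the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="lassl/bert-ko-base")

for prediction in fill_mask("대한민국의 수도는 [MASK] 입니다."):
    print(prediction["token_str"], round(prediction["score"], 4))
```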
pastlecry/DialoGPT-small-harrypotter
ac92d25d237424b4122f7abd86d4e4633ba3768b
2021-12-11T16:11:06.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
pastlecry
null
pastlecry/DialoGPT-small-harrypotter
284
null
transformers
3,099
--- tags: - conversational --- # Harry Potter DialoGPT Model