| Column | Type | Range / cardinality |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-24 12:28:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 493 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-24 12:27:57 |
| card | string | length 11 to 1.01M |
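The rows that follow use these columns. As a minimal sketch of working with the dump, assuming it has been exported locally to a Parquet file (the path `models.parquet` below is hypothetical), one might filter and rank rows with pandas:

```python
import pandas as pd

# Hypothetical local export of the table described above; adjust the path to your copy.
df = pd.read_parquet("models.parquet")

# Example: keep only text-to-speech models and rank them by download count.
tts = df[df["pipeline_tag"] == "text-to-speech"].sort_values("downloads", ascending=False)
print(tts[["modelId", "author", "downloads", "likes"]].head())
```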
TransQuest/monotransquest-da-en_zh-wiki
TransQuest
2021-06-03T19:04:32Z
13
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en-zh tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_zh-wiki", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/monotransquest-da-en_de-wiki
TransQuest
2021-06-03T19:03:21Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en-de tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_de-wiki", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/monotransquest-da-en_any
TransQuest
2021-06-03T19:01:53Z
15
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en-multilingual tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_any", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
huggingtweets/poconggg
huggingtweets
2021-06-03T12:39:27Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/poconggg/1622723963727/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1196791172561170433/vRlOuN___400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Pocong</div> <div style="text-align: center; font-size: 14px;">@poconggg</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Pocong. | Data | Pocong | | --- | --- | | Tweets downloaded | 3160 | | Retweets | 177 | | Short tweets | 206 | | Tweets kept | 2777 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2a33zg8r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @poconggg's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jldomyb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jldomyb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/poconggg') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Wikidepia/marian-nmt-enid
Wikidepia
2021-06-03T07:27:50Z
6
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# NMT Model for English-Indonesian
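The card above stops at its title. A minimal usage sketch, assuming the checkpoint follows the standard Marian layout supported by Transformers (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumption: the checkpoint is a standard Marian NMT model usable with MarianMTModel/MarianTokenizer.
model_name = "Wikidepia/marian-nmt-enid"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an English sentence to Indonesian.
inputs = tokenizer(["I am learning Indonesian."], return_tensors="pt", padding=True)
translated = model.generate(**inputs)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```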
BinksSachary/ShaxxBot2
BinksSachary
2021-06-03T04:37:46Z
18
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational --- # My Awesome Model ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response, limiting the generated sequence to 200 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
google/bigbird-roberta-base
google
2021-06-02T14:30:54Z
34,067
51
transformers
[ "transformers", "pytorch", "jax", "big_bird", "pretraining", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia - cc_news --- # BigBird base model BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. It is pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird). Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long document summarization and question answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, BigBirdModel tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") # by default it is in `block_sparse` mode with num_random_blocks=3, block_size=64 model = BigBirdModel.from_pretrained("google/bigbird-roberta-base") # you can change `attention_type` to full attention like this: model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full") # you can change `block_size` & `num_random_blocks` like this: model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", block_size=16, num_random_blocks=2) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training Data This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2). ## Training Procedure Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict the masked tokens. The model was warm-started from RoBERTa's checkpoint. ## BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
Davlan/bert-base-multilingual-cased-finetuned-amharic
Davlan
2021-06-02T12:37:53Z
413
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: am --- # bert-base-multilingual-cased-finetuned-amharic ## Model description **bert-base-multilingual-cased-finetuned-amharic** is an **Amharic BERT** model obtained by replacing the mBERT vocabulary with an Amharic vocabulary (the language was not originally supported) and fine-tuning the **bert-base-multilingual-cased** model on Amharic texts. It provides **better performance** than multilingual BERT on Amharic named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an Amharic corpus using an Amharic vocabulary. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-amharic') >>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን [MASK] መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | am_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 0.0 | 60.89 ### BibTeX entry and citation info By David Adelani ``` ```
ethanyt/guwenbert-base
ethanyt
2021-06-02T03:27:16Z
259
18
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - "zh" thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "pytorch" license: "apache-2.0" pipeline_tag: "fill-mask" mask_token: "[MASK]" widget: - text: "[MASK]太元中,武陵人捕鱼为业。" - text: "问征夫以前路,恨晨光之[MASK]微。" - text: "浔阳江头夜送客,枫叶[MASK]花秋瑟瑟。" --- # GuwenBERT ## Model description ![GuwenBERT](https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png) This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks such as sentence breaking, punctuation, named entity recognition, and so on. For more information about RoBERTa, take a look at RoBERTa's official repo. ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base") model = AutoModel.from_pretrained("ethanyt/guwenbert-base") ``` ## Training data The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang. 76% of them are punctuated. The total number of characters is 1.7B (1,743,337,673). All traditional characters are converted to simplified characters. The vocabulary is constructed from this data set and its size is 23,292. ## Training procedure The models are initialized with `hfl/chinese-roberta-wwm-ext` and then pre-trained with a 2-step strategy. In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training. The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 2e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after. ## Eval results ### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation Second place in the competition. Detailed test results: | NE Type | Precision | Recall | F1 | |:----------:|:-----------:|:------:|:-----:| | Book Name | 77.50 | 73.73 | 75.57 | | Other Name | 85.85 | 89.32 | 87.55 | | Micro Avg. | 83.88 | 85.39 | 84.63 | ## About Us We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology. For cooperation, please contact us by email: ethanyt [at] qq.com > Created with ❤️ by Tan Yan [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/Ethan-yt) and Zewen Chi [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/CZWin32768)
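GuwenBERT's pipeline tag is fill-mask, so masked-token prediction can be run directly with the Transformers pipeline; a short sketch using one of the widget sentences from the card above:

```python
from transformers import pipeline

# Masked-token prediction with GuwenBERT; the example sentence is one of the card's widget texts.
fill_mask = pipeline("fill-mask", model="ethanyt/guwenbert-base")

for prediction in fill_mask("[MASK]太元中,武陵人捕鱼为业。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```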
tensorspeech/tts-fastspeech2-baker-ch
tensorspeech
2021-06-02T02:51:55Z
0
6
tensorflowtts
[ "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "dataset:Baker", "arxiv:2006.04558", "license:apache-2.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: chinese license: apache-2.0 datasets: - Baker widget: - text: "这是一个开源的端到端中文语音合成系统" --- # FastSpeech2 trained on Baker (Chinese) This repository provides a pretrained [FastSpeech2](https://arxiv.org/abs/2006.04558) trained on Baker dataset (Ch). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-baker-ch") fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-baker-ch") text = "这是一个开源的端到端中文语音合成系统" input_ids = processor.text_to_sequence(text, inference=True) mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), ) ``` #### Referencing FastSpeech2 ``` @misc{ren2021fastspeech, title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech}, author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu}, year={2021}, eprint={2006.04558}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-baker-ch
tensorspeech
2021-06-02T02:50:59Z
0
5
tensorflowtts
[ "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "ch", "dataset:Baker", "arxiv:2005.05106", "license:apache-2.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: ch license: apache-2.0 datasets: - Baker widget: - text: "这是一个开源的端到端中文语音合成系统" --- # Multi-band MelGAN trained on Baker (Ch) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on Baker dataset (ch). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-baker-ch") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-baker-ch") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-baker-ch") text = "这是一个开源的端到端中文语音合成系统" input_ids = processor.text_to_sequence(text, inference=True) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-tacotron2-kss-ko
tensorspeech
2021-06-01T09:56:01Z
0
5
tensorflowtts
[ "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "ko", "dataset:kss", "arxiv:1712.05884", "arxiv:1710.08969", "license:apache-2.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: ko license: apache-2.0 datasets: - kss widget: - text: "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." --- # Tacotron 2 with Guided Attention trained on KSS (Korean) This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on KSS dataset (KO). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") text = "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." input_ids = processor.text_to_sequence(text) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) ``` #### Referencing Tacotron 2 ``` @article{DBLP:journals/corr/abs-1712-05884, author = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu}, title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions}, journal = {CoRR}, volume = {abs/1712.05884}, year = {2017}, url = {http://arxiv.org/abs/1712.05884}, archivePrefix = {arXiv}, eprint = {1712.05884}, timestamp = {Thu, 28 Nov 2019 08:59:52 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-fastspeech2-ljspeech-en
tensorspeech
2021-06-01T09:54:05Z
0
1
tensorflowtts
[ "tensorflowtts", "audio", "text-to-speech", "text-to-mel", "eng", "dataset:LJSpeech", "arxiv:2006.04558", "license:apache-2.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - tensorflowtts - audio - text-to-speech - text-to-mel language: eng license: apache-2.0 datasets: - LJSpeech widget: - text: "How are you?" --- # FastSpeech2 trained on LJSpeech (Eng) This repository provides a pretrained [FastSpeech2](https://arxiv.org/abs/2006.04558) trained on LJSpeech dataset (ENG). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en") fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en") text = "How are you?" input_ids = processor.text_to_sequence(text) mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32), ) ``` #### Referencing FastSpeech2 ``` @misc{ren2021fastspeech, title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech}, author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu}, year={2021}, eprint={2006.04558}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-thorsten-ger
tensorspeech
2021-06-01T09:07:00Z
0
1
tensorflowtts
[ "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "ger", "dataset:Thorsten", "arxiv:2005.05106", "license:apache-2.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: ger license: apache-2.0 datasets: - Thorsten widget: - text: "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht." --- # Multi-band MelGAN trained on Thorsten (Ger) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on Thorsten dataset (ger). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-thorsten-ger") text = "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht." input_ids = processor.text_to_sequence(text) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
tensorspeech/tts-mb_melgan-kss-ko
tensorspeech
2021-06-01T09:06:04Z
0
2
tensorflowtts
[ "tensorflowtts", "audio", "text-to-speech", "mel-to-wav", "ko", "dataset:KSS", "arxiv:2005.05106", "license:apache-2.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - tensorflowtts - audio - text-to-speech - mel-to-wav language: ko license: apache-2.0 datasets: - KSS widget: - text: "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." --- # Multi-band MelGAN trained on KSS (Korean) This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on KSS dataset (ko). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Wav ```python import soundfile as sf import numpy as np import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-kss-ko") mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-kss-ko") text = "신은 우리의 수학 문제에는 관심이 없다. 신은 다만 경험적으로 통합할 뿐이다." input_ids = processor.text_to_sequence(text) # tacotron2 inference (text-to-mel) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) # melgan inference (mel-to-wav) audio = mb_melgan.inference(mel_outputs)[0, :, 0] # save to file sf.write('./audio.wav', audio, 22050, "PCM_16") ``` #### Referencing Multi-band MelGAN ``` @misc{yang2020multiband, title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech}, author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie}, year={2020}, eprint={2005.05106}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
DeepChem/SmilesTokenizer_PubChem_1M
DeepChem
2021-05-31T20:54:05Z
282
3
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
RoBERTa model trained on 1M SMILES strings from the PubChem 77M set in MoleculeNet. Uses the SmilesTokenizer.
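The card gives no usage snippet. A minimal feature-extraction sketch, assuming the hosted tokenizer files load through `AutoTokenizer` (if not, DeepChem's own `SmilesTokenizer` would be required); the SMILES string is illustrative:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "DeepChem/SmilesTokenizer_PubChem_1M"
# Assumption: the repository's tokenizer files work with AutoTokenizer;
# otherwise DeepChem's SmilesTokenizer would be needed instead.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"  # aspirin
inputs = tokenizer(smiles, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, sequence_length, hidden_size)
print(embeddings.shape)
```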
huggingtweets/marknorm
huggingtweets
2021-05-31T19:21:46Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/marknorm/1622488902602/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/903769803768217600/EKtan_aM_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">mark normand</div> <div style="text-align: center; font-size: 14px;">@marknorm</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from mark normand. | Data | mark normand | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 136 | | Short tweets | 522 | | Tweets kept | 2591 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25e2ma2z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marknorm's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17zjqoal) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17zjqoal/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/marknorm') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
nyust-eb210/braslab-bert-drcd-384
nyust-eb210
2021-05-31T14:47:20Z
12
2
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "dataset:DRCD", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: zh-tw datasets: DRCD tasks: Question Answering --- # BERT DRCD 384 This model is a fine-tuned checkpoint of [bert-base-chinese](https://huggingface.co/bert-base-chinese), fine-tuned on the DRCD dataset. It reaches an F1 score of 86 and an EM score of 83. Training Arguments: - length: 384 - stride: 128 - learning_rate: 3e-5 - batch_size: 10 - epoch: 3 [Colab notebook with details](https://colab.research.google.com/drive/1kZv7ZRmvUdCKEhQg8MBrKljGWvV2X3CP?usp=sharing) ## Deployment The [BERT-DRCD-QuestionAnswering](https://github.com/pleomax0730/BERT-DRCD-QuestionAnswering) repository deploys this model with `FastAPI` and containerizes it with `Docker`. ## Usage ### In Transformers ```python import torch from transformers import BertTokenizerFast, BertForQuestionAnswering text = "鴻海科技集團是由臺灣企業家郭台銘創辦的跨國企業,總部位於臺灣新北市土城區,主要生產地則在中國大陸,以富士康做為商標名稱。其專注於電子產品的代工服務,研發生產精密電氣元件、機殼、準系統、系統組裝、光通訊元件、液晶顯示件等3C產品上、下游產品及服務。" query = "鴻海集團總部位於哪裡?" device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") tokenizer = BertTokenizerFast.from_pretrained("nyust-eb210/braslab-bert-drcd-384") model = BertForQuestionAnswering.from_pretrained("nyust-eb210/braslab-bert-drcd-384").to(device) encoded_input = tokenizer(text, query, return_tensors="pt").to(device) qa_outputs = model(**encoded_input) start = torch.argmax(qa_outputs.start_logits).item() end = torch.argmax(qa_outputs.end_logits).item() answer = encoded_input.input_ids.tolist()[0][start : end + 1] answer = "".join(tokenizer.decode(answer).split()) start_prob = torch.max(torch.nn.Softmax(dim=-1)(qa_outputs.start_logits)).item() end_prob = torch.max(torch.nn.Softmax(dim=-1)(qa_outputs.end_logits)).item() confidence = (start_prob + end_prob) / 2 print(answer, confidence) # 臺灣新北市土城區, 0.92 ```
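The Deployment section above names `FastAPI` and `Docker` but shows no code. A hypothetical sketch of such a service (not the linked repository's actual implementation), wrapping the Transformers question-answering pipeline:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Question-answering pipeline backed by the fine-tuned DRCD checkpoint.
qa = pipeline("question-answering", model="nyust-eb210/braslab-bert-drcd-384")

class QARequest(BaseModel):
    context: str
    question: str

@app.post("/qa")
def answer(req: QARequest):
    # Return the extracted answer span and its confidence score.
    result = qa(question=req.question, context=req.context)
    return {"answer": result["answer"], "score": result["score"]}
```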
orzhan/rugpt3-simplify-large
orzhan
2021-05-31T14:31:36Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: ru --- Text simplification model for Russian. Fine-tuned from ruGPT3-large. https://github.com/orzhan/rusimscore
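The card gives no usage example. A minimal sketch with the text-generation pipeline; the prompt below and the decoding settings are assumptions, since the fine-tune's expected input format is not documented here:

```python
from transformers import pipeline

# ruGPT3-large fine-tuned for Russian text simplification.
simplifier = pipeline("text-generation", model="orzhan/rugpt3-simplify-large")

# Assumption: the model continues a complex Russian sentence with a simplified rewrite;
# the exact prompt/separator format is not documented in the card.
prompt = "Конституция Российской Федерации была принята всенародным голосованием."
print(simplifier(prompt, max_length=100, num_return_sequences=1)[0]["generated_text"])
```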
huggingtweets/dallaswentdown-jwgrieve-shanselman
huggingtweets
2021-05-31T14:01:34Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/dallaswentdown-jwgrieve-shanselman/1622469689056/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1110842842254139392/ZOE_oJVk_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1133122333290291200/xV9gO-D6_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1202410649403428864/ARbH2iRC_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jack Grieve & Scott Hanselman & Marc Miller</div> <div style="text-align: center; font-size: 14px;">@dallaswentdown-jwgrieve-shanselman</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jack Grieve & Scott Hanselman & Marc Miller. | Data | Jack Grieve | Scott Hanselman | Marc Miller | | --- | --- | --- | --- | | Tweets downloaded | 3241 | 3248 | 204 | | Retweets | 408 | 649 | 11 | | Short tweets | 325 | 953 | 16 | | Tweets kept | 2508 | 1646 | 177 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1szwn06m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dallaswentdown-jwgrieve-shanselman's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/umdhmmbr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/umdhmmbr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dallaswentdown-jwgrieve-shanselman') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/danielgriffinmd-jwgrieve-tactical_times
huggingtweets
2021-05-31T13:28:07Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/danielgriffinmd-jwgrieve-tactical_times/1622467683418/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1245139606317793291/KeeHxsO7_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1110842842254139392/ZOE_oJVk_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1343387459229540354/axWFzawA_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Tactical Times & Jack Grieve & Daniel Griffin MD PhD</div> <div style="text-align: center; font-size: 14px;">@danielgriffinmd-jwgrieve-tactical_times</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from The Tactical Times & Jack Grieve & Daniel Griffin MD PhD. | Data | The Tactical Times | Jack Grieve | Daniel Griffin MD PhD | | --- | --- | --- | --- | | Tweets downloaded | 3248 | 3241 | 1832 | | Retweets | 154 | 408 | 416 | | Short tweets | 102 | 325 | 181 | | Tweets kept | 2992 | 2508 | 1235 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/f0tjsov8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @danielgriffinmd-jwgrieve-tactical_times's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lmqr46i) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lmqr46i/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/danielgriffinmd-jwgrieve-tactical_times') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
nreimers/albert-small-v2
nreimers
2021-05-31T12:26:52Z
26
0
transformers
[ "transformers", "pytorch", "albert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# albert-small-v2 This is a 6-layer version of [albert-base-v2](https://huggingface.co/albert-base-v2).
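The card ships no usage snippet; below is a minimal, hedged sketch (not from the original card) of loading the checkpoint as a plain feature extractor with `transformers`. The example sentence is made up.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch only: load the 6-layer ALBERT and inspect its token-level features.
tokenizer = AutoTokenizer.from_pretrained("nreimers/albert-small-v2")
model = AutoModel.from_pretrained("nreimers/albert-small-v2")

inputs = tokenizer("This is a test sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token embeddings come from the last hidden layer; pooling is left to the downstream task.
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```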
ZYW/squad-mbert-en-de-es-model
ZYW
2021-05-30T22:33:10Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: squad-mbert-en-de-es-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-mbert-en-de-es-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
ZYW/squad-mbert-model_2
ZYW
2021-05-30T18:18:37Z
10
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: squad-mbert-model_2 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-mbert-model_2 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
ZYW/squad-mbart-model
ZYW
2021-05-30T16:12:15Z
4
0
transformers
[ "transformers", "pytorch", "mbart", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: squad-mbart-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-mbart-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
ZYW/squad-mbert-model
ZYW
2021-05-30T15:15:53Z
11
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: squad-mbert-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-mbert-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
hamishs/mBART50-en-az-tr1
hamishs
2021-05-30T13:47:22Z
4
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# mBart50 for Zeroshot Azerbaijani-Turkish Translation The mBart50 model is fine-tuned on English-Azerbaijani-Turkish translation, leaving Az<->Tr as zero-shot directions. The method of tied representations is used to enforce alignment between semantically equivalent sentences, leading to superior zero-shot performance.
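The card gives no usage example. A hedged sketch of querying the zero-shot Az→Tr direction with the standard mBART-50 API follows; it assumes the checkpoint keeps the stock mBART-50 tokenizer and language codes (`az_AZ`, `tr_TR`), which the card does not confirm.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Assumption: the fine-tuned checkpoint is used like the original mBART-50 many-to-many model.
model_name = "hamishs/mBART50-en-az-tr1"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "az_AZ"  # Azerbaijani source
inputs = tokenizer("Bu bir sınaq cümləsidir.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["tr_TR"],  # force Turkish output
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```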
huggingtweets/jamz5251
huggingtweets
2021-05-30T10:30:56Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jamz5251/1622370618440/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1375502415240122373/JO1DArJT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jamila Husain</div> <div style="text-align: center; font-size: 14px;">@jamz5251</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jamila Husain. | Data | Jamila Husain | | --- | --- | | Tweets downloaded | 3234 | | Retweets | 900 | | Short tweets | 65 | | Tweets kept | 2269 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r9z40rld/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamz5251's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20gadkdv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20gadkdv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jamz5251') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/azzamameen
huggingtweets
2021-05-30T04:51:42Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/azzamameen/1622350298746/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/637912443281272832/dAsVKNNU_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Azzam Ameen</div> <div style="text-align: center; font-size: 14px;">@azzamameen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Azzam Ameen. | Data | Azzam Ameen | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 2828 | | Short tweets | 18 | | Tweets kept | 401 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4gs8vmx7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @azzamameen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yezh3jh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yezh3jh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/azzamameen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/boogie2988
huggingtweets
2021-05-29T21:24:58Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/boogie2988/1622323494391/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1366991202000777216/E5Qeu37S_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">boogie2988 😭🤡</div> <div style="text-align: center; font-size: 14px;">@boogie2988</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from boogie2988 😭🤡. | Data | boogie2988 😭🤡 | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 216 | | Short tweets | 392 | | Tweets kept | 2634 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/30jummgi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boogie2988's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38yren3o) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38yren3o/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/boogie2988') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/mschuresko
huggingtweets
2021-05-29T19:54:38Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/mschuresko/1622317955447/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1296692705662078976/5S7n1Rcc_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Michael S</div> <div style="text-align: center; font-size: 14px;">@mschuresko</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Michael S. | Data | Michael S | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 436 | | Short tweets | 492 | | Tweets kept | 2312 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cb03j4o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mschuresko's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2l74wvek) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2l74wvek/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mschuresko') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ZYW/en-de-model
ZYW
2021-05-29T17:52:17Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: en-de-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # en-de-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
ZYW/en-de-vi-zh-es-model
ZYW
2021-05-29T17:33:12Z
24
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: en-de-vi-zh-es-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # en-de-vi-zh-es-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
ZYW/en-de-es-model
ZYW
2021-05-29T17:28:09Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: en-de-es-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # en-de-es-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
ZYW/squad-en-de-es-model
ZYW
2021-05-29T16:53:56Z
6
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- model-index: - name: squad-en-de-es-model --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-en-de-es-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.7.0 - Tokenizers 0.10.3
huggingtweets/jessi_cata
huggingtweets
2021-05-29T06:13:02Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/jessi_cata/1622268778505/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1294479143904653312/qP7tP-nr_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jessica Taylor</div> <div style="text-align: center; font-size: 14px;">@jessi_cata</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jessica Taylor. | Data | Jessica Taylor | | --- | --- | | Tweets downloaded | 907 | | Retweets | 145 | | Short tweets | 12 | | Tweets kept | 750 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/125iwpq5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jessi_cata's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ba9qak3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ba9qak3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jessi_cata') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
prajjwal1/roberta_hellaswag
prajjwal1
2021-05-28T22:28:13Z
8
1
transformers
[ "transformers", "pytorch", "roberta", "multiple-choice", "commonsense-reasoning", "sentence-completion", "dataset:hellaswag", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- tags: - pytorch - commonsense-reasoning - sentence-completion datasets: - hellaswag --- `RoBERTa` trained on the HellaSwag dataset (`MultipleChoiceModel`). HellaSwag is a multiple-choice sentence-completion benchmark. The model reaches around 74.99% accuracy. [@prajjwal_1](https://twitter.com/prajjwal_1/)
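As a rough illustration of how a HellaSwag-style multiple-choice checkpoint can be queried with `transformers` (the context and endings here are invented, and the exact input formatting used during training is not documented in the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

# Illustrative only: HellaSwag-style scoring of four candidate endings for one context.
model_name = "prajjwal1/roberta_hellaswag"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

context = "A man is standing in a kitchen holding a frying pan."
endings = [
    "He cracks an egg into the pan.",
    "He throws the pan into the sea.",
    "He starts reciting a poem to the pan.",
    "He drives the pan to work.",
]

# Encode (context, ending) pairs; the model expects shape (batch, num_choices, seq_len).
enc = tokenizer([context] * len(endings), endings, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print("Predicted ending:", endings[int(logits.argmax(dim=-1))])
```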
allegro/herbert-klej-cased-tokenizer-v1
allegro
2021-05-28T16:19:05Z
131
1
transformers
[ "transformers", "xlm", "pl", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: pl --- # HerBERT tokenizer The **[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** tokenizer is a character-level byte-pair encoding with a vocabulary size of 50k tokens. The tokenizer was trained on [Wolne Lektury](https://wolnelektury.pl/) and a publicly available subset of [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=0) with the [fastBPE](https://github.com/glample/fastBPE) library. The tokenizer utilizes the `XLMTokenizer` implementation from [transformers](https://github.com/huggingface/transformers). ## Tokenizer usage The HerBERT tokenizer should be used together with the [HerBERT model](https://huggingface.co/allegro/herbert-klej-cased-v1): ```python from transformers import XLMTokenizer, RobertaModel tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1") encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors='pt') outputs = model(encoded_input) ``` ## License CC BY-SA 4.0 ## Citation If you use this tokenizer, please cite the following paper: ``` @inproceedings{rybak-etal-2020-klej, title = "{KLEJ}: Comprehensive Benchmark for {P}olish Language Understanding", author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.111", doi = "10.18653/v1/2020.acl-main.111", pages = "1191--1201", } ``` ## Authors The tokenizer was created by the **Allegro Machine Learning Research** team. You can contact us at: <a href="mailto:[email protected]">[email protected]</a>
Davlan/xlm-roberta-base-finetuned-yoruba
Davlan
2021-05-28T13:53:56Z
11
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: yo datasets: --- # xlm-roberta-base-finetuned-yoruba ## Model description **xlm-roberta-base-finetuned-yoruba** is a **Yoruba RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Yorùbá language texts. It provides **better performance** than XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on a Yorùbá corpus. ## Intended uses & limitations #### How to use You can use this model with the Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-yoruba') >>> unmasker("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun") [{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.24844281375408173, 'token': 44109, 'token_str': '▁Queen'}, {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ile Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.1665010154247284, 'token': 1350, 'token_str': '▁ile'}, {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.07604238390922546, 'token': 1053, 'token_str': '▁ti'}, {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ baba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.06353845447301865, 'token': 12878, 'token_str': '▁baba'}, {'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Oba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.03836742788553238, 'token': 82879, 'token_str': '▁Oba'}] ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends. ## Training procedure This model was trained on a single NVIDIA V100 GPU. ## Eval results on Test set (F-score, average over 5 runs) Dataset| XLM-R F1 | yo_roberta F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 77.58 | 83.66 [BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | | ### BibTeX entry and citation info By David Adelani ``` ```
castorini/duot5-3b-msmarco
castorini
2021-05-28T11:51:36Z
9
0
transformers
[ "transformers", "pytorch", "t5", "feature-extraction", "arxiv:2101.05667", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This model is a T5-3B reranker, initialized with our pointwise ranker, [castorini/monot5-3b-msmarco](https://huggingface.co/castorini/monot5-3b-msmarco), and fine-tuned on the MS MARCO passage dataset for 50K steps (or 5 epochs) on the pairwise reranking task. For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)! Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667)
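pygaggle is the supported way to run this reranker, but for orientation here is a heavily hedged sketch of one pairwise scoring step with plain `transformers`; the "Query/Document0/Document1/Relevant" prompt follows the Expando-Mono-Duo paper, and the true/false token scoring below is an assumption rather than a documented API.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Sketch under assumptions: duoT5-style pairwise prompt and true/false scoring.
model_name = "castorini/duot5-3b-msmarco"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

query = "what is quality estimation in machine translation"
doc0 = "Quality estimation predicts translation quality without reference translations."
doc1 = "The recipe calls for two cups of flour and one egg."

prompt = f"Query: {query} Document0: {doc0} Document1: {doc1} Relevant:"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)

# Compare the logits of the 'true' and 'false' tokens at the first decoding step.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]
prob_doc0_better = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(prob_doc0_better)
```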
nreimers/BERT-Tiny_L-2_H-128_A-2
nreimers
2021-05-28T11:05:21Z
35,579
3
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This is the BERT-Tiny model from Google: https://github.com/google-research/bert#bert. A BERT model with 2 layers, a hidden size of 128, and 2 attention heads.
nreimers/TinyBERT_L-4_H-312_v2
nreimers
2021-05-28T11:02:32Z
9,995
1
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This is the [General_TinyBERT_v2(4layer-312dim)](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) ported to Huggingface transformers.
m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens
m3hrdadfi
2021-05-28T06:03:42Z
6
0
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "fa", "arxiv:2009.08820", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # FarsTail + ParsBERT Please follow the [FarsTail](https://github.com/dml-qom/FarsTail) repo for the latest information about the dataset. To access the models derived from this dataset, check out the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo. ```bibtex @article{amirkhani2020farstail, title={FarsTail: A Persian Natural Language Inference Dataset}, author={Hossein Amirkhani, Mohammad Azari Jafari, Azadeh Amirak, Zohreh Pourjafari, Soroush Faridan Jahromi, and Zeinab Kouhkan}, journal={arXiv preprint arXiv:2009.08820}, year={2020} } ```
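Not part of the original card: a minimal sketch of computing mean-token sentence embeddings with plain `transformers`, assuming the checkpoint loads as a standard BERT encoder and that "-mean-tokens" refers to mean pooling of the last hidden states.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: sentence embeddings are obtained by mean-pooling the last hidden states.
model_name = "m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["این یک جمله آزمایشی است.", "هوا امروز آفتابی است."]  # made-up Persian examples
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, hidden_size)
```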
m3hrdadfi/bert-fa-base-uncased-farstail
m3hrdadfi
2021-05-28T06:02:52Z
11
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "fa", "arxiv:2009.08820", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # FarsTail + ParsBERT Please follow the [FarsTail](https://github.com/dml-qom/FarsTail) repo for the latest information about the dataset. To access the models derived from this dataset, check out the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo. ```bibtex @article{amirkhani2020farstail, title={FarsTail: A Persian Natural Language Inference Dataset}, author={Hossein Amirkhani, Mohammad Azari Jafari, Azadeh Amirak, Zohreh Pourjafari, Soroush Faridan Jahromi, and Zeinab Kouhkan}, journal={arXiv preprint arXiv:2009.08820}, year={2020} } ```
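A hedged usage sketch for the NLI classifier; the premise/hypothesis pair below is invented, and the label names should be read from `model.config.id2label` rather than assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: FarsTail-style NLI scoring of a premise/hypothesis pair.
model_name = "m3hrdadfi/bert-fa-base-uncased-farstail"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "هوا امروز آفتابی است."     # "The weather is sunny today." (made-up example)
hypothesis = "امروز باران می‌بارد."    # "It is raining today." (made-up example)

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Label names come from the checkpoint config; they are not documented in the card.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label.get(idx, idx), round(p, 3))
```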
m3hrdadfi/bert-fa-base-uncased-wikinli
m3hrdadfi
2021-05-28T06:01:35Z
20
2
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ParsBERT + Sentence Transformers Please follow the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo for the latest information about previous and current models. ```bibtex @misc{SentenceTransformerWiki, author = {Mehrdad Farahani}, title = {Sentence Embeddings with ParsBERT}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/sentence-transformers}, } ```
m3hrdadfi/bert-fa-base-uncased-wikinli-mean-tokens
m3hrdadfi
2021-05-28T06:00:37Z
58
0
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "fa", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ParsBERT + Sentence Transformers Please follow the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo for the latest information about previous and current models. ```bibtex @misc{SentenceTransformerWiki, author = {Mehrdad Farahani}, title = {Sentence Embeddings with ParsBERT}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/m3hrdadfi/sentence-transformers}, } ```
VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base
VoVanPhuc
2021-05-28T05:45:41Z
11
1
transformers
[ "transformers", "pytorch", "roberta", "arxiv:2104.08821", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
#### Table of contents 1. [Introduction](#introduction) 2. [Pretrain model](#models) 3. [Using SimeCSE_Vietnamese with `sentences-transformers`](#sentences-transformers) - [Installation](#install1) - [Example usage](#usage1) 4. [Using SimeCSE_Vietnamese with `transformers`](#transformers) - [Installation](#install2) - [Example usage](#usage2) # <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese Pre-trained SimeCSE_Vietnamese models are the state-of-the-art of Sentence Embeddings with Vietnamese : - SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821) which optimizes the SimeCSE_Vietnamese pre-training procedure for more robust performance. - SimeCSE_Vietnamese encode input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) - SimeCSE_Vietnamese works with both unlabeled and labeled data. ## Pre-trained models <a name="models"></a> Model | #params | Arch. ---|---|--- [`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base [`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base ## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `sentences-transformers` ### Installation <a name="install1"></a> - Install `sentence-transformers`: - `pip install -U sentence-transformers` - Install `pyvi` to word segment: - `pip install pyvi` ### Example usage <a name="usage1"></a> ```python from sentence_transformers import SentenceTransformer from pyvi.ViTokenizer import tokenize model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base') sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.', 'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.', 'Bắc Giang tăng khả năng điều trị và xét nghiệm.', 'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.', 'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.', '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.', 'Thái Lan thua giao hữu trước vòng loại World Cup.', 'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam', 'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.', 'Bắn chết người trong cuộc rượt đuổi trên sông.' 
] sentences = [tokenize(sentence) for sentence in sentences] embeddings = model.encode(sentences) ``` ## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `transformers` ### Installation <a name="install2"></a> - Install `transformers`: - `pip install -U transformers` - Install `pyvi` to word segment: - `pip install pyvi` ### Example usage <a name="usage2"></a> ```python import torch from transformers import AutoModel, AutoTokenizer from pyvi.ViTokenizer import tokenize PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base") model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base") sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.', 'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.', 'Bắc Giang tăng khả năng điều trị và xét nghiệm.', 'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.', 'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.', '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.', 'Thái Lan thua giao hữu trước vòng loại World Cup.', 'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam', 'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.', 'Bắn chết người trong cuộc rượt đuổi trên sông.' ] sentences = [tokenize(sentence) for sentence in sentences] inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt") with torch.no_grad(): embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output ``` ## Quick Start [Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing) ## Citation @article{gao2021simcse, title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings}, author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi}, journal={arXiv preprint arXiv:2104.08821}, year={2021} } @inproceedings{phobert, title = {{PhoBERT: Pre-trained language models for Vietnamese}}, author = {Dat Quoc Nguyen and Anh Tuan Nguyen}, booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020}, year = {2020}, pages = {1037--1042} }
huggingtweets/officialmcafee
huggingtweets
2021-05-28T04:01:04Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/officialmcafee/1622174431277/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1068211396712763392/7FxhjlR3_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">John McAfee</div> <div style="text-align: center; font-size: 14px;">@officialmcafee</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from John McAfee. | Data | John McAfee | | --- | --- | | Tweets downloaded | 3226 | | Retweets | 44 | | Short tweets | 1884 | | Tweets kept | 1298 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wje8sy2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @officialmcafee's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/n8r5a5dd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/n8r5a5dd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/officialmcafee') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tatsu_moved
huggingtweets
2021-05-28T01:37:11Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394041828245229569/GqycTopw_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tatsu Mori / MOVED TO NEW ACCOUNT</div> <div style="text-align: center; font-size: 14px;">@tatsu_moved</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tatsu Mori / MOVED TO NEW ACCOUNT. | Data | Tatsu Mori / MOVED TO NEW ACCOUNT | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 131 | | Short tweets | 729 | | Tweets kept | 2387 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yst62rv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tatsu_moved's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hn213w51) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hn213w51/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tatsu_moved') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
m3hrdadfi/icelandic-ner-distilbert
m3hrdadfi
2021-05-27T17:17:28Z
5
0
transformers
[ "transformers", "pytorch", "tf", "distilbert", "token-classification", "is", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: is license: apache-2.0 widget: - text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." - text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ." - text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ." - text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ." - text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ." --- # IcelandicNER DistilBERT This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language. The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) in 2018–2020 that covered eight types of entities: - Date - Location - Miscellaneous - Money - Organization - Percent - Person - Time ## Dataset Information | | Records | B-Date | B-Location | B-Miscellaneous | B-Money | B-Organization | B-Percent | B-Person | B-Time | I-Date | I-Location | I-Miscellaneous | I-Money | I-Organization | I-Percent | I-Person | I-Time | |:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:| | Train | 39988 | 3409 | 5980 | 4351 | 729 | 5754 | 502 | 11719 | 868 | 2112 | 516 | 3036 | 770 | 2382 | 50 | 5478 | 790 | | Valid | 7063 | 570 | 1034 | 787 | 100 | 1078 | 103 | 2106 | 147 | 409 | 76 | 560 | 104 | 458 | 7 | 998 | 136 | | Test | 8299 | 779 | 1319 | 935 | 153 | 1315 | 108 | 2247 | 172 | 483 | 104 | 660 | 167 | 617 | 10 | 1089 | 158 | ## Evaluation The following tables summarize the scores obtained by model overall and per each class. | entity | precision | recall | f1-score | support | |:-------------:|:---------:|:--------:|:--------:|:-------:| | Date | 0.969309 | 0.973042 | 0.971172 | 779.0 | | Location | 0.941221 | 0.946929 | 0.944067 | 1319.0 | | Miscellaneous | 0.848283 | 0.819251 | 0.833515 | 935.0 | | Money | 0.928571 | 0.934641 | 0.931596 | 153.0 | | Organization | 0.874147 | 0.876806 | 0.875475 | 1315.0 | | Percent | 1.000000 | 1.000000 | 1.000000 | 108.0 | | Person | 0.956674 | 0.972853 | 0.964695 | 2247.0 | | Time | 0.965318 | 0.970930 | 0.968116 | 172.0 | | micro avg | 0.926110 | 0.929141 | 0.927623 | 7028.0 | | macro avg | 0.935441 | 0.936807 | 0.936079 | 7028.0 | | weighted avg | 0.925578 | 0.929141 | 0.927301 | 7028.0 | ## How To Use You use this model with Transformers pipeline for NER. 
### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "m3hrdadfi/icelandic-ner-distilbert" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
m3hrdadfi/icelandic-ner-bert
m3hrdadfi
2021-05-27T17:14:13Z
7
0
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "is", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: is license: apache-2.0 widget: - text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." - text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ." - text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ." - text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ." - text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ." --- # IcelandicNER BERT This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language. The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) in 2018–2020 that covered eight types of entities: - Date - Location - Miscellaneous - Money - Organization - Percent - Person - Time ## Dataset Information | | Records | B-Date | B-Location | B-Miscellaneous | B-Money | B-Organization | B-Percent | B-Person | B-Time | I-Date | I-Location | I-Miscellaneous | I-Money | I-Organization | I-Percent | I-Person | I-Time | |:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:| | Train | 39988 | 3409 | 5980 | 4351 | 729 | 5754 | 502 | 11719 | 868 | 2112 | 516 | 3036 | 770 | 2382 | 50 | 5478 | 790 | | Valid | 7063 | 570 | 1034 | 787 | 100 | 1078 | 103 | 2106 | 147 | 409 | 76 | 560 | 104 | 458 | 7 | 998 | 136 | | Test | 8299 | 779 | 1319 | 935 | 153 | 1315 | 108 | 2247 | 172 | 483 | 104 | 660 | 167 | 617 | 10 | 1089 | 158 | ## Evaluation The following tables summarize the scores obtained by model overall and per each class. | entity | precision | recall | f1-score | support | |:-------------:|:---------:|:--------:|:--------:|:-------:| | Date | 0.969466 | 0.978177 | 0.973802 | 779.0 | | Location | 0.955201 | 0.953753 | 0.954476 | 1319.0 | | Miscellaneous | 0.867033 | 0.843850 | 0.855285 | 935.0 | | Money | 0.979730 | 0.947712 | 0.963455 | 153.0 | | Organization | 0.893939 | 0.897338 | 0.895636 | 1315.0 | | Percent | 1.000000 | 1.000000 | 1.000000 | 108.0 | | Person | 0.963028 | 0.973743 | 0.968356 | 2247.0 | | Time | 0.976879 | 0.982558 | 0.979710 | 172.0 | | micro avg | 0.938158 | 0.938958 | 0.938558 | 7028.0 | | macro avg | 0.950659 | 0.947141 | 0.948840 | 7028.0 | | weighted avg | 0.937845 | 0.938958 | 0.938363 | 7028.0 | ## How To Use You use this model with Transformers pipeline for NER. 
### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "m3hrdadfi/icelandic-ner-bert" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
huggingtweets/davidrliu
huggingtweets
2021-05-27T14:40:46Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/davidrliu/1622126441318/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/996598862189035520/7TV9Dej2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">David R. Liu</div> <div style="text-align: center; font-size: 14px;">@davidrliu</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from David R. Liu. | Data | David R. Liu | | --- | --- | | Tweets downloaded | 2124 | | Retweets | 952 | | Short tweets | 62 | | Tweets kept | 1110 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29r3m2zm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidrliu's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27i98foi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27i98foi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/davidrliu') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/johndoench
huggingtweets
2021-05-27T14:32:09Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/johndoench/1622125923840/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/779001579689115648/cvZpZYVt_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">John Doench</div> <div style="text-align: center; font-size: 14px;">@johndoench</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from John Doench. | Data | John Doench | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 529 | | Short tweets | 104 | | Tweets kept | 2609 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1h3t47j4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @johndoench's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3iayyivn) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3iayyivn/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/johndoench') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tomlau
huggingtweets
2021-05-27T14:14:53Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/tomlau/1622124889137/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/523178673424572417/915RXZ65_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tom</div> <div style="text-align: center; font-size: 14px;">@tomlau</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tom. | Data | Tom | | --- | --- | | Tweets downloaded | 3208 | | Retweets | 612 | | Short tweets | 141 | | Tweets kept | 2455 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4my6fdyp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tomlau's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/d2zijq67) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/d2zijq67/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tomlau') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
GuoLiyong/snowfall_model_zoo
GuoLiyong
2021-05-27T09:26:31Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
The original link of these models is https://zenodo.org/record/4604066#.YKtNrqgzZPY, which is accessible with the espnet utilities. They are ported to this repo for users who don't have the espnet dependencies.
NtDNlp/sentence-embedding-vietnamese
NtDNlp
2021-05-27T08:51:12Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
# EmbeddingSimilarityEvaluator: Evaluating the model on the STS.en-en.txt dataset in epoch 2 after 26000 steps:

| Type      | Pearson | Spearman |
| --------- | ------- | -------- |
| Cosine    | 0.7650  | 0.8095   |
| Euclidean | 0.8089  | 0.8010   |
| Cosine    | 0.8075  | 0.7999   |
| Euclidean | 0.7531  | 0.7680   |
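The card does not include usage code. Below is a minimal sketch of how an XLM-RoBERTa checkpoint like this one can be used to produce sentence embeddings with the `transformers` library; the mean-pooling step and the example sentences are assumptions for illustration, not taken from the original card.

```python
# A minimal sketch (not from the original card) of extracting sentence embeddings.
# Mean pooling over token embeddings is an assumption; the authors may have used
# a different pooling strategy (e.g. the CLS token).
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "NtDNlp/sentence-embedding-vietnamese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["Hà Nội là thủ đô của Việt Nam.", "Hanoi is the capital of Vietnam."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# Mean-pool the token embeddings, ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentences.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```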
huggingtweets/iamaaronwill
huggingtweets
2021-05-27T02:09:17Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/iamaaronwill/1622081352140/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1371138320026177536/FzLPlrhM_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aaron</div> <div style="text-align: center; font-size: 14px;">@iamaaronwill</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Aaron. | Data | Aaron | | --- | --- | | Tweets downloaded | 3245 | | Retweets | 102 | | Short tweets | 1332 | | Tweets kept | 1811 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/necnw243/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamaaronwill's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2500hrd9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2500hrd9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/iamaaronwill') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
philschmid/distilroberta-base-ner-wikiann-conll2003-3-class
philschmid
2021-05-26T14:13:00Z
23
2
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:wikiann-conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - token-classification datasets: - wikiann-conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilroberta-base-ner-wikiann-conll2003-3-class results: - task: name: Token Classification type: token-classification dataset: name: wikiann-conll2003 type: wikiann-conll2003 metrics: - name: Precision type: precision value: 0.9624757386241104 - name: Recall type: recall value: 0.9667497021553124 - name: F1 type: f1 value: 0.964607986167396 - name: Accuracy type: accuracy value: 0.9913626461292995 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-ner-wikiann-conll2003-3-class This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann and conll2003 dataset. It consists out of the classes of wikiann. O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4) B-LOC (5), I-LOC (6). eval F1-Score: **96,25** (merged dataset) test F1-Score: **92,41** (merged dataset) ## Model Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-3-class") model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-3-class") nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True) example = "My name is Philipp and live in Germany" nlp(example) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.9086903597787154e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results It achieves the following results on the evaluation set: - Loss: 0.0520 - Precision: 0.9625 - Recall: 0.9667 - F1: 0.9646 - Accuracy: 0.9914 It achieves the following results on the test set: - Loss: 0.141 - Precision: 0.917 - Recall: 0.9313 - F1: 0.9241 - Accuracy: 0.9807 ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.6.2 - Tokenizers 0.10.3
michalwilk123/distilbert-imdb-positive
michalwilk123
2021-05-25T19:59:11Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
DistilBERT model trained on positive IMDB reviews.
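The card provides no usage snippet. A minimal sketch with the fill-mask pipeline is shown below; the example sentence is purely illustrative and not from the original card.

```python
# A minimal sketch (not part of the original card) of querying this checkpoint
# with the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="michalwilk123/distilbert-imdb-positive")

# Use the tokenizer's own mask token rather than hard-coding it.
masked = f"This movie was absolutely {unmasker.tokenizer.mask_token}."
print(unmasker(masked))
```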
michalwilk123/distilbert-imdb-negative
michalwilk123
2021-05-25T12:53:19Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
DistilBERT model trained on negative IMDB reviews.
kangnichaluo/mnli-5
kangnichaluo
2021-05-25T12:41:28Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Learning rate: 2e-5, training epochs: 3, batch size: 64, seed: 111, model: bert-base-uncased. Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
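A minimal usage sketch is shown below. Note that the mapping from output indices to the entailment / not-entailment classes is an assumption and should be confirmed against the model's config or a few validation examples.

```python
# A minimal sketch (not from the original card) of running this two-way NLI
# classifier on a premise/hypothesis pair. Which index corresponds to
# "entailment" vs. "not-entailment" is an assumption to verify.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "kangnichaluo/mnli-5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.softmax(dim=-1))  # probabilities over the two classes
```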
kangnichaluo/mnli-4
kangnichaluo
2021-05-25T12:36:39Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Learning rate: 2e-5, training epochs: 3, batch size: 64, seed: 87, model: bert-base-uncased. Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
kangnichaluo/mnli-3
kangnichaluo
2021-05-25T11:46:40Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Learning rate: 2e-5, training epochs: 3, batch size: 64, seed: 13, model: bert-base-uncased. Trained on MNLI, which is converted into two-way NLI classification (predict the entailment or not-entailment class).
huggingtweets/dathiks
huggingtweets
2021-05-25T03:48:22Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/dathiks/1621914499049/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391177469256867842/1UgWpBgb_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Dathiks the idiot || 18+</div> <div style="text-align: center; font-size: 14px;">@dathiks</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Dathiks the idiot || 18+. | Data | Dathiks the idiot || 18+ | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 160 | | Short tweets | 391 | | Tweets kept | 2696 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3na3z4pf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dathiks's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3k9x299g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3k9x299g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dathiks') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sultan/BioM-ELECTRA-Base-Generator
sultan
2021-05-24T21:08:37Z
4
0
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA # Abstract The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models. # Model Description This model was pre-trained on PubMed Abstracts only with biomedical domain vocabulary for 500k steps with a batch size of 1024 on TPUv3-32 unit. Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. # Acknowledgment We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units. # Citation ```bibtex @inproceedings{alrowili-shanker-2021-biom, title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}", author = "Alrowili, Sultan and Shanker, Vijay", booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bionlp-1.24", pages = "221--227", abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.", } ```
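The card does not include a usage example. Below is a minimal sketch of querying the generator checkpoint with the fill-mask pipeline; the example sentence is illustrative only and not from the original card.

```python
# A minimal sketch (not part of the original card) of masked-token prediction
# with the ELECTRA generator checkpoint via the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="sultan/BioM-ELECTRA-Base-Generator")

# Use the tokenizer's own mask token rather than hard-coding it.
masked = f"The patient was treated with {unmasker.tokenizer.mask_token} for the infection."
print(unmasker(masked))
```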
huggingtweets/orogdk
huggingtweets
2021-05-24T19:51:55Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/orogdk/1621885910948/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391236434736992256/n33h-uPP_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Loggged ̘͇̣̲̃̍̅ ̺ͫͥͣ͋̇̍ ̷̥̉̏Out ̴̵̨̛͢ ̸̶̢̛͘ ͘͏ ̢̡͞</div> <div style="text-align: center; font-size: 14px;">@orogdk</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Loggged ̘͇̣̲̃̍̅ ̺ͫͥͣ͋̇̍ ̷̥̉̏Out ̴̵̨̛͢ ̸̶̢̛͘ ͘͏ ̢̡͞. | Data | Loggged ̘͇̣̲̃̍̅ ̺ͫͥͣ͋̇̍ ̷̥̉̏Out ̴̵̨̛͢ ̸̶̢̛͘ ͘͏ ̢̡͞ | | --- | --- | | Tweets downloaded | 173 | | Retweets | 11 | | Short tweets | 35 | | Tweets kept | 127 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vq1365m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @orogdk's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2lep1qnl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2lep1qnl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/orogdk') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
RecordedFuture/Swedish-Sentiment-Fear-Targets
RecordedFuture
2021-05-24T12:47:21Z
31
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "sv", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
language: sv
license: mit
---

## Swedish BERT models for sentiment analysis, sentiment targets

[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.

This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag the parts of sentences that have received a positive classification from the upstream sentiment classifier, i.e. the parts that contain the targets the upstream model activated on.

The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model. The models are only trained on Swedish data and only support inference on Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.

The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.

### Fear targets

The model can be imported from the transformers library by running

```python
from transformers import BertForTokenClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
```

When the model and tokenizer are initialized, the model can be used for inference.

#### Verification metrics

During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.

| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361  | 0.7903    | 0.8876 |

### Violence targets

The model can be imported from the transformers library by running

```python
from transformers import BertForTokenClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
```

When the model and tokenizer are initialized, the model can be used for inference.

#### Verification metrics

During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.

| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831  | 0.9155    | 0.8442 |
felflare/bert-restore-punctuation
felflare
2021-05-24T03:04:47Z
13,671
64
transformers
[ "transformers", "pytorch", "bert", "token-classification", "punctuation", "en", "dataset:yelp_polarity", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - en tags: - punctuation license: mit datasets: - yelp_polarity metrics: - f1 --- # ✨ bert-restore-punctuation [![forthebadge](https://forthebadge.com/images/badges/gluten-free.svg)]() This a bert-base-uncased model finetuned for punctuation restoration on [Yelp Reviews](https://www.tensorflow.org/datasets/catalog/yelp_polarity_reviews). The model predicts the punctuation and upper-casing of plain, lower-cased text. An example use case can be ASR output. Or other cases when text has lost punctuation. This model is intended for direct use as a punctuation restoration model for the general English language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks. Model restores the following punctuations -- **[! ? . , - : ; ' ]** The model also restores the upper-casing of words. ----------------------------------------------- ## 🚋 Usage **Below is a quick way to get up and running with the model.** 1. First, install the package. ```bash pip install rpunct ``` 2. Sample python code. ```python from rpunct import RestorePuncts # The default language is 'english' rpunct = RestorePuncts() rpunct.punctuate("""in 2018 cornell researchers built a high-powered detector that in combination with an algorithm-driven process called ptychography set a world record by tripling the resolution of a state-of-the-art electron microscope as successful as it was that approach had a weakness it only worked with ultrathin samples that were a few atoms thick anything thicker would cause the electrons to scatter in ways that could not be disentangled now a team again led by david muller the samuel b eckert professor of engineering has bested its own record by a factor of two with an electron microscope pixel array detector empad that incorporates even more sophisticated 3d reconstruction algorithms the resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves""") # Outputs the following: # In 2018, Cornell researchers built a high-powered detector that, in combination with an algorithm-driven process called Ptychography, set a world record by tripling the # resolution of a state-of-the-art electron microscope. As successful as it was, that approach had a weakness. It only worked with ultrathin samples that were a few atoms # thick. Anything thicker would cause the electrons to scatter in ways that could not be disentangled. Now, a team again led by David Muller, the Samuel B. # Eckert Professor of Engineering, has bested its own record by a factor of two with an Electron microscope pixel array detector empad that incorporates even more # sophisticated 3d reconstruction algorithms. The resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves. ``` **This model works on arbitrarily large text in English language and uses GPU if available.** ----------------------------------------------- ## 📡 Training data Here is the number of product reviews we used for finetuning the model: | Language | Number of text samples| | -------- | ----------------- | | English | 560,000 | We found the best convergence around _**3 epochs**_, which is what presented here and available via a download. 
-----------------------------------------------

## 🎯 Accuracy

The fine-tuned model obtained the following accuracy on 45,990 held-out text samples:

| Accuracy | Overall F1 | Eval Support |
| -------- | ---------- | ------------ |
| 91%      | 90%        | 45,990       |

Below is a breakdown of the performance of the model by each label:

| label       | precision | recall | f1-score | support |
| ----------- | --------- | ------ | -------- | ------- |
| **!**       | 0.45      | 0.17   | 0.24     | 424     |
| **!+Upper** | 0.43      | 0.34   | 0.38     | 98      |
| **'**       | 0.60      | 0.27   | 0.37     | 11      |
| **,**       | 0.59      | 0.51   | 0.55     | 1522    |
| **,+Upper** | 0.52      | 0.50   | 0.51     | 239     |
| **-**       | 0.00      | 0.00   | 0.00     | 18      |
| **.**       | 0.69      | 0.84   | 0.75     | 2488    |
| **.+Upper** | 0.65      | 0.52   | 0.57     | 274     |
| **:**       | 0.52      | 0.31   | 0.39     | 39      |
| **:+Upper** | 0.36      | 0.62   | 0.45     | 16      |
| **;**       | 0.00      | 0.00   | 0.00     | 17      |
| **?**       | 0.54      | 0.48   | 0.51     | 46      |
| **?+Upper** | 0.40      | 0.50   | 0.44     | 4       |
| **none**    | 0.96      | 0.96   | 0.96     | 35352   |
| **Upper**   | 0.84      | 0.82   | 0.83     | 5442    |

-----------------------------------------------

## ☕ Contact

Contact [Daulet Nurmanbetov]([email protected]) for questions, feedback and/or requests for similar models.

-----------------------------------------------
thingsu/koDPR_context
thingsu
2021-05-24T02:46:37Z
3
3
transformers
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
Fine-tuned the kykim/bert-kor-base model as a dense passage retrieval (DPR) context encoder on the KLUE dataset.

Experiment results are available at https://wandb.ai/thingsu/DenseRetrieval

Corpus: Korean Wikipedia Corpus

Training strategy:
- Pretrained model: kykim/bert-kor-base
- Inverse Cloze Task: 16 epochs, on KorQuAD v1.0 and the KLUE MRC dataset
- In-batch negatives: 12 epochs, on the KLUE MRC dataset, with random sampling from the sparse-retrieval (TF-IDF) top 100 passages per query

You need to use the Korean Wikipedia corpus.

```python
from transformers import AutoTokenizer, BertPreTrainedModel, BertModel

class BertEncoder(BertPreTrainedModel):
    def __init__(self, config):
        super(BertEncoder, self).__init__(config)
        self.bert = BertModel(config)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask, token_type_ids)
        pooled_output = outputs[1]
        return pooled_output

model_name = 'kykim/bert-kor-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
q_encoder = BertEncoder.from_pretrained("thingsu/koDPR_question")
p_encoder = BertEncoder.from_pretrained("thingsu/koDPR_context")
```
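The snippet above stops at loading the two encoders. The sketch below shows how they are typically combined for retrieval scoring; the dot-product scoring and the example strings follow the standard DPR recipe and are assumptions, not part of the original card.

```python
# A minimal sketch (assumption: standard DPR-style dot-product scoring) of scoring
# a Korean question against candidate passages with the encoders loaded above.
# The example strings are illustrative only.
import torch

question = "대한민국의 수도는 어디인가?"
passages = [
    "서울특별시는 대한민국의 수도이자 최대 도시이다.",
    "부산광역시는 대한민국 동남부에 위치한 항구 도시이다.",
]

q_inputs = tokenizer(question, return_tensors="pt")
p_inputs = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    q_emb = q_encoder(**q_inputs)   # shape: (1, hidden)
    p_emb = p_encoder(**p_inputs)   # shape: (num_passages, hidden)

scores = torch.matmul(q_emb, p_emb.T)   # higher score = more relevant passage
print(scores)
```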
huggingtweets/eromaximus
huggingtweets
2021-05-23T18:12:56Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/eromaximus/1621793572911/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391474227602153474/wSbqLqAl_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">☆ᗪoubleᗰarshmallow☆</div> <div style="text-align: center; font-size: 14px;">@eromaximus</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ☆ᗪoubleᗰarshmallow☆. | Data | ☆ᗪoubleᗰarshmallow☆ | | --- | --- | | Tweets downloaded | 674 | | Retweets | 388 | | Short tweets | 46 | | Tweets kept | 240 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25ro381p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eromaximus's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c712ca0z) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c712ca0z/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eromaximus') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/heaven_ley
huggingtweets
2021-05-23T14:18:42Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/heaven_ley/1621532679555/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1391998269430116355/O5NJQwYC_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ashley 🌻</div> <div style="text-align: center; font-size: 14px;">@heaven_ley</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ashley 🌻. | Data | Ashley 🌻 | | --- | --- | | Tweets downloaded | 3084 | | Retweets | 563 | | Short tweets | 101 | | Tweets kept | 2420 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h9ex5ztp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heaven_ley's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rr1mtsr) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rr1mtsr/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/heaven_ley') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/elonmusk-mitll
huggingtweets
2021-05-23T14:16:06Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/elonmusk-mitll/1621528963873/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1383184766959120385/MM9DHPWC_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1380233126354558979/ltnN7Gl4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Lincoln Laboratory</div> <div style="text-align: center; font-size: 14px;">@elonmusk-mitll</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & Lincoln Laboratory. | Data | Elon Musk | Lincoln Laboratory | | --- | --- | --- | | Tweets downloaded | 221 | 2054 | | Retweets | 19 | 569 | | Short tweets | 68 | 14 | | Tweets kept | 134 | 1471 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/30mrjtcc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-mitll's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/xp75m1fp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/xp75m1fp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elonmusk-mitll') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/mitll
huggingtweets
2021-05-23T14:13:26Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/mitll/1621527157401/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1380233126354558979/ltnN7Gl4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lincoln Laboratory</div> <div style="text-align: center; font-size: 14px;">@mitll</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lincoln Laboratory. | Data | Lincoln Laboratory | | --- | --- | | Tweets downloaded | 2054 | | Retweets | 569 | | Short tweets | 14 | | Tweets kept | 1471 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k9mrbjd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mitll's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/b55wa3e8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/b55wa3e8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mitll') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ushikado/yuyuyui-chatbot
ushikado
2021-05-23T13:27:10Z
10
5
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "ja", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: ja inference: false --- # yuyuyui-chatbot This model is based on [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) and finetuned on Yuyuyui scenario corpus. ## Usage The model takes a sequence of utterances (context) to generate a subsequent utterance (response). Each utterance begins with a **character token** and ends with an **EOS token**. Use the unspecified character token `<某>` for user inputs. Put a character token after your question or query to generate a response from a specific character. In this case, make sure that an EOS token is not appended automatically by the tokenizer. Otherwise the model will interpret the trailing EOS as an empty utterance and try to add another random character token. Simple example: ```python from transformers import T5Tokenizer, AutoModelForCausalLM tokenizer = T5Tokenizer.from_pretrained("ushikado/yuyuyui-chatbot") model = AutoModelForCausalLM.from_pretrained("ushikado/yuyuyui-chatbot") query_text = "<某>神樹様について教えてください。</s><上里 ひなた>" input_tensor = tokenizer.encode(query_text, add_special_tokens=False, return_tensors="pt") output_list = model.generate(input_tensor, max_length=100, do_sample=True, pad_token_id=tokenizer.eos_token_id) output_text = tokenizer.decode(output_list[0]) print(output_text) """ <某> 神樹様について教えてください。</s> <上里 ひなた> 造反神は、神樹様の分裂を煽り出して、神樹様の中の一体感を高める存在です。</s> """ ``` Accumulate dialog history to make responses more context-aware: ```python class Interlocutor(): def __init__(self, tokenizer, model, character_token, max_context_length=512, max_response_length=128): self.tokenizer = tokenizer self.model = model self.character_token = character_token self.max_context_length = max_context_length self.max_response_length = max_response_length self.context = "" return def generate(self, query): nanigashi = self.tokenizer.additional_special_tokens[0] nanigashi_id = self.tokenizer.additional_special_tokens_ids[0] self.context += nanigashi + query + self.tokenizer.eos_token + self.character_token context_tensor = self.tokenizer.encode(self.context, add_special_tokens=False, return_tensors="pt") context_length = context_tensor.size()[-1] if self.max_context_length < context_length: context_tensor = context_tensor.narrow(1, context_length - self.max_context_length, self.max_context_length) context_length = context_tensor.size()[-1] max_length = context_length + self.max_response_length context_tensor = self.model.generate(context_tensor, do_sample=True, max_length=max_length, pad_token_id=self.tokenizer.eos_token_id) self.context = re.sub(self.tokenizer.eos_token, "", self.tokenizer.decode(context_tensor[0])) response = self.context[self.context.rindex(self.character_token) + len(self.character_token) : ].strip() print(response) interlocutor = Interlocutor(tokenizer, model, "<加賀城 雀>") interlocutor.generate("何しようかな。") """ そうだなぁ。せっかく徳島に来たんだから、何か食べたいよなー。</s> """ interlocutor.generate("例えば?") """ スパムとかいう高級料理はちょっとなぁ。あとは可愛い雑貨とか、おやつとか。</s> """ interlocutor.generate("徳島ラーメンじゃないの?") """ あー、確か徳島ラーメンってのがあって、それも美味しいんだよね。</s> """ interlocutor.generate("ここから近いお店があるんだって。行ってみよう!") """ わー! 何だか賑やかでいい感じだね。</s> """ interlocutor.generate("さっそく注文するね。") """ んー! ずっーと揚げ鶏が好きだったけど、今日は初めてまるまる鶏肉を注文してみるよ。</s> """ print(interlocutor.context) """ <某> 何しようかな。</s> <加賀城 雀> そうだなぁ。せっかく徳島に来たんだから、何か食べたいよなー。</s> <某> 例えば?</s> <加賀城 雀> スパムとかいう高級料理はちょっとなぁ。あとは可愛い雑貨とか、おやつとか。</s> <某> 徳島ラーメンじゃないの?</s> <加賀城 雀> あー、確か徳島ラーメンってのがあって、それも美味しいんだよね。</s> <某> ここから近いお店があるんだって。行ってみよう!</s> <加賀城 雀> わー! 何だか賑やかでいい感じだね。</s> <某> さっそく注文するね。</s> <加賀城 雀> んー! 
ずっーと揚げ鶏が好きだったけど、今日は初めてまるまる鶏肉を注文してみるよ。</s> """ ``` ## List of character tokens `<某>` is _unspecified (nanigashi)_. Use for user inputs or mobs. ```plain <某> <結城 友奈> <東郷 美森> <犬吠埼 風> <犬吠埼 樹> <三好 夏凜> <乃木 園子> <鷲尾 須美> <三ノ輪 銀> <乃木 若葉> <上里 ひなた> <土居 球子> <伊予島 杏> <郡 千景> <高嶋 友奈> <白鳥 歌野> <藤森 水都> <秋原 雪花> <古波蔵 棗> <楠 芽吹> <加賀城 雀> <弥勒 夕海子> <山伏 しずく> <山伏 シズク> <国土 亜耶> <赤嶺 友奈> <弥勒 蓮華> <桐生 静> <安芸 真鈴> <花本 美佳> ``` ## Licence TBD.
shtoshni/gpt2-chess-uci
shtoshni
2021-05-23T12:53:34Z
631
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
A GPT-2 language model for chess, trained on games encoded in UCI move notation.
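The card itself gives no usage snippet. A minimal generation sketch, assuming the checkpoint ships a GPT-2-style tokenizer and that games are encoded as space-separated UCI moves (both are assumptions, not stated in the card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shtoshni/gpt2-chess-uci")
model = AutoModelForCausalLM.from_pretrained("shtoshni/gpt2-chess-uci")

# Hypothetical prompt: the opening moves of a game as space-separated UCI moves.
prompt = "e2e4 e7e5 g1f3"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=50,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```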
rjbownes/Magic-The-Generating
rjbownes
2021-05-23T12:17:20Z
5
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- widget: - text: "Even the Dwarves" - text: "The secrets of" --- # Model name: Magic The Generating ## Model description This is a fine-tuned GPT-2 model trained on a corpus of all available English language Magic the Gathering card flavour texts. ## Intended uses & limitations This is intended only for use in generating new, novel, and sometimes surprising, MtG-like flavour texts. #### How to use ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained("rjbownes/Magic-The-Generating") model = GPT2LMHeadModel.from_pretrained("rjbownes/Magic-The-Generating") ``` #### Limitations and bias The training corpus was surprisingly small, only ~29,000 cards; I had suspected there were more. This might mean there is a real limit to the number of entirely original strings this will generate. This is also based only on the 117M-parameter GPT-2; retraining with the medium, large or XL models would be an obvious upgrade. However, despite this, the outputs I tested were very convincing! ## Training data The data was 29222 MtG card flavour texts. The model was based on the "gpt2" pretrained transformer: https://huggingface.co/gpt2. ## Training procedure Only English language MtG flavour texts were scraped from the [Scryfall](https://scryfall.com/) API. Empty strings and any non-UTF-8 encoded tokens were removed, leaving 29222 entries. Training was done in Google Colab on a T4 instance: 4 epochs, AdamW optimizer with default parameters and a batch size of 32. Input sequences were capped at 98 tokens, as this was the longest string, and an attention mask was added so the model ignores all padding tokens. ## Eval results Average training loss: 0.44866578806635815. Validation loss: 0.5606984243444775. Sample model outputs: 1. "Every branch a crossroads, every vine a swift steed." —Gwendlyn Di Corci 2. "The secrets of this world will tell their masters where to strike if need be." —Noyan Dar, Tazeem roilmage 3. "The secrets of nature are expensive. You'd be better off just to have more freedom." 4. "Even the Dwarves knew to leave some stones unturned." 5. "The wise always keep an ear open to the whispers of power." ### BibTeX entry and citation info ```bibtex @article{BownesLM, title={Fine Tuning GPT-2 for Magic the Gathering flavour text generation.}, author={Richard J. Bownes}, journal={Medium}, year={2020} } ```
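The card stops at loading the model. A follow-on generation sketch in the same spirit (the prompt and sampling parameters below are illustrative choices, not taken from the original work):

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("rjbownes/Magic-The-Generating")
model = GPT2LMHeadModel.from_pretrained("rjbownes/Magic-The-Generating")

# Seed the model with the start of a flavour text and sample a few continuations.
prompt = "The secrets of"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=60, do_sample=True, top_p=0.9,
                         num_return_sequences=3, pad_token_id=tokenizer.eos_token_id)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```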
redrussianarmy/gpt2-turkish-cased
redrussianarmy
2021-05-23T12:12:42Z
1,300
13
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "turkish", "tr", "gpt2-tr", "gpt2-turkish", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: "tr" tags: - turkish - tr - gpt2-tr - gpt2-turkish --- # 🇹🇷 Turkish GPT-2 Model In this repository I release GPT-2 model, that was trained on various texts for Turkish. The model is meant to be an entry point for fine-tuning on other texts. ## Training corpora I used a Turkish corpora that is taken from oscar-corpus. It was possible to create byte-level BPE with Tokenizers library of Huggingface. With the Tokenizers library, I created a 52K byte-level BPE vocab based on the training corpora. After creating the vocab, I could train the GPT-2 for Turkish on two 2080TI over the complete training corpus (five epochs). Logs during training: https://tensorboard.dev/experiment/3AWKv8bBTaqcqZP5frtGkw/#scalars ## Model weights Both PyTorch and Tensorflow compatible weights are available. | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `redrussianarmy/gpt2-turkish-cased` | [`config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/config.json) • [`merges.txt`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/merges.txt) • [`pytorch_model.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/pytorch_model.bin) • [`special_tokens_map.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/special_tokens_map.json) • [`tf_model.h5`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/tf_model.h5) • [`tokenizer_config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/tokenizer_config.json) • [`traning_args.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/training_args.bin) • [`vocab.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/vocab.json) ## Using the model The model itself can be used in this way: ``` python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("redrussianarmy/gpt2-turkish-cased") model = AutoModelWithLMHead.from_pretrained("redrussianarmy/gpt2-turkish-cased") ``` Here's an example that shows how to use the great Transformers Pipelines for generating text: ``` python from transformers import pipeline pipe = pipeline('text-generation', model="redrussianarmy/gpt2-turkish-cased", tokenizer="redrussianarmy/gpt2-turkish-cased", config={'max_length':800}) text = pipe("Akşamüstü yolda ilerlerken, ")[0]["generated_text"] print(text) ``` ### How to clone the model repo? ``` git lfs install git clone https://huggingface.co/redrussianarmy/gpt2-turkish-cased ``` ## Contact (Bugs, Feedback, Contribution and more) For questions about the GPT2-Turkish model, just open an issue [here](https://github.com/redrussianarmy/gpt2-turkish/issues) 🤗
pierreguillou/gpt2-small-portuguese
pierreguillou
2021-05-23T10:59:56Z
45,373
44
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "pt", "dataset:wikipedia", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: pt widget: - text: "Quem era Jim Henson? Jim Henson era um" - text: "Em um achado chocante, o cientista descobriu um" - text: "Barack Hussein Obama II, nascido em 4 de agosto de 1961, é" - text: "Corrida por vacina contra Covid-19 já tem" license: mit datasets: - wikipedia --- # GPorTuguese-2: a Language Model for Portuguese text generation (and more NLP tasks...) ## Introduction GPorTuguese-2 (Portuguese GPT-2 small) is a state-of-the-art language model for Portuguese based on the GPT-2 small model. It was trained on Portuguese Wikipedia using **Transfer Learning and Fine-tuning techniques** in just over a day, on one GPU NVIDIA V100 32GB and with a little more than 1GB of training data. It is a proof-of-concept that it is possible to get a state-of-the-art language model in any language with low ressources. It was fine-tuned from the [English pre-trained GPT-2 small](https://huggingface.co/gpt2) using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the [fastai v2](https://dev.fast.ai/) Deep Learning framework. All the fine-tuning fastai v2 techniques were used. It is now available on Hugging Face. For further information or requests, please go to "[Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787)". ## Model | Model | #params | Model file (pt/tf) | Arch. | Training /Validation data (text) | |-------------------------|---------|--------------------|-------------|------------------------------------------| | `gpt2-small-portuguese` | 124M | 487M / 475M | GPT-2 small | Portuguese Wikipedia (1.28 GB / 0.32 GB) | ## Evaluation results In a little more than a day (we only used one GPU NVIDIA V100 32GB; through a Distributed Data Parallel (DDP) training mode, we could have divided by three this time to 10 hours, just with 2 GPUs), we got a loss of 3.17, an **accuracy of 37.99%** and a **perplexity of 23.76** (see the validation results table below). | after ... epochs | loss | accuracy (%) | perplexity | time by epoch | cumulative time | |------------------|------|--------------|------------|---------------|-----------------| | 0 | 9.95 | 9.90 | 20950.94 | 00:00:00 | 00:00:00 | | 1 | 3.64 | 32.52 | 38.12 | 5:48:31 | 5:48:31 | | 2 | 3.30 | 36.29 | 27.16 | 5:38:18 | 11:26:49 | | 3 | 3.21 | 37.46 | 24.71 | 6:20:51 | 17:47:40 | | 4 | 3.19 | 37.74 | 24.21 | 6:06:29 | 23:54:09 | | 5 | 3.17 | 37.99 | 23.76 | 6:16:22 | 30:10:31 | ## GPT-2 *Note: information copied/pasted from [Model: gpt2 >> GPT-2](https://huggingface.co/gpt2#gpt-2)* Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at this [page](https://openai.com/blog/better-language-models/) (February 14, 2019). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. 
## Model description *Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)* GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## How to use GPorTuguese-2 with HuggingFace (PyTorch) The following code use PyTorch. To use TensorFlow, check the below corresponding paragraph. ### Load GPorTuguese-2 and its sub-word tokenizer (Byte-level BPE) ```python from transformers import AutoTokenizer, AutoModelWithLMHead import torch tokenizer = AutoTokenizer.from_pretrained("pierreguillou/gpt2-small-portuguese") model = AutoModelWithLMHead.from_pretrained("pierreguillou/gpt2-small-portuguese") # Get sequence length max of 1024 tokenizer.model_max_length=1024 model.eval() # disable dropout (or leave in train mode to finetune) ``` ### Generate one word ```python # input sequence text = "Quem era Jim Henson? Jim Henson era um" inputs = tokenizer(text, return_tensors="pt") # model output outputs = model(**inputs, labels=inputs["input_ids"]) loss, logits = outputs[:2] predicted_index = torch.argmax(logits[0, -1, :]).item() predicted_text = tokenizer.decode([predicted_index]) # results print('input text:', text) print('predicted text:', predicted_text) # input text: Quem era Jim Henson? Jim Henson era um # predicted text: homem ``` ### Generate one full sequence ```python # input sequence text = "Quem era Jim Henson? Jim Henson era um" inputs = tokenizer(text, return_tensors="pt") # model output using Top-k sampling text generation method sample_outputs = model.generate(inputs.input_ids, pad_token_id=50256, do_sample=True, max_length=50, # put the token number you want top_k=40, num_return_sequences=1) # generated sequence for i, sample_output in enumerate(sample_outputs): print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist()))) # >> Generated text # Quem era Jim Henson? Jim Henson era um executivo de televisão e diretor de um grande estúdio de cinema mudo chamado Selig, # depois que o diretor de cinema mudo Georges Seuray dirigiu vários filmes para a Columbia e o estúdio. ``` ## How to use GPorTuguese-2 with HuggingFace (TensorFlow) The following code use TensorFlow. To use PyTorch, check the above corresponding paragraph. 
### Load GPorTuguese-2 and its sub-word tokenizer (Byte-level BPE) ```python from transformers import AutoTokenizer, TFAutoModelWithLMHead import tensorflow as tf tokenizer = AutoTokenizer.from_pretrained("pierreguillou/gpt2-small-portuguese") model = TFAutoModelWithLMHead.from_pretrained("pierreguillou/gpt2-small-portuguese") # Get sequence length max of 1024 tokenizer.model_max_length=1024 model.eval() # disable dropout (or leave in train mode to finetune) ``` ### Generate one full sequence ```python # input sequence text = "Quem era Jim Henson? Jim Henson era um" inputs = tokenizer.encode(text, return_tensors="tf") # model output using Top-k sampling text generation method outputs = model.generate(inputs, eos_token_id=50256, pad_token_id=50256, do_sample=True, max_length=40, top_k=40) print(tokenizer.decode(outputs[0])) # >> Generated text # Quem era Jim Henson? Jim Henson era um amigo familiar da família. Ele foi contratado pelo seu pai # para trabalhar como aprendiz no escritório de um escritório de impressão, e então começou a ganhar dinheiro ``` ## Limitations and bias The training data used for this model come from Portuguese Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Author Portuguese GPT-2 small was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the computing power of the GPU (GPU NVIDIA V100 32 Go) of the [AI Lab](https://www.linkedin.com/company/ailab-unb/) (University of Brasilia) to which I am attached as an Associate Researcher in NLP and the participation of its directors in the definition of NLP strategy, Professors Fabricio Ataides Braz and Nilton Correia da Silva. ## Citation If you use our work, please cite: ```bibtex @inproceedings{pierre2020gpt2smallportuguese, title={GPorTuguese-2 (Portuguese GPT-2 small): a Language Model for Portuguese text generation (and more NLP tasks...)}, author={Pierre Guillou}, year={2020} } ```
p208p2002/gpt2-squad-qg-hl
p208p2002
2021-05-23T10:54:57Z
13
3
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "question-generation", "dataset:squad", "arxiv:1606.05250", "arxiv:1705.00106", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- datasets: - squad tags: - question-generation widget: - text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]." --- # Transformer QG on SQuAD HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/) **This is a reproduced version.** More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD) ## Usage ### Input Format ``` C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|] ``` ### Input Example ``` Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]. ``` > # Who wrote Harry Potter? ## Data setting We report two dataset settings as follows. ### SQuAD - train: 87599 - validation: 10570 > [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250) ### SQuAD NQG - train: 75722 - dev: 10570 - test: 11877 > [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106) ## Available models - BART - GPT2 - T5 ## Experiments We report scores with the `NQG Scorer`, which is used in SQuAD NQG. Unless stated otherwise, the model size defaults to "base". ### SQuAD Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L| ---------------------------------|------|------|------|------|------|-------| BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64 | GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82 | T5-HLSQG |54.29 |39.22 |30.43 |24.26 |25.56 |53.11 | ### SQuAD NQG Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L| ---------------------------------|------|------|------|------|------|-------| BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23 | BART-HLSQG |54.12 |38.19 |28.84 |22.35 |24.55 |51.03 | GPT2-HLSQG |49.82 |33.69 |24.71 |18.63 |21.90 |47.60 | T5-HLSQG |53.13 |37.60 |28.62 |22.38 |24.48 |51.20 |
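The card documents the input format but not a decoding call. A rough sketch, assuming the GPT-2 checkpoint can be driven as a plain causal LM with the highlighted context as the prompt (decoding parameters are illustrative, not from the original repository):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("p208p2002/gpt2-squad-qg-hl")
model = AutoModelForCausalLM.from_pretrained("p208p2002/gpt2-squad-qg-hl")

# Context with the answer span wrapped in [HL] markers, as described above.
context = ("Harry Potter is a series of seven fantasy novels written by "
           "British author, [HL]J. K. Rowling[HL].")
input_ids = tokenizer(context, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=input_ids.shape[-1] + 32,
                        num_beams=4, early_stopping=True,
                        pad_token_id=tokenizer.eos_token_id)
# Decode only the continuation, i.e. the generated question.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```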
ncoop57/DiGPTame-medium
ncoop57
2021-05-23T10:47:17Z
37
2
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "conversational", "arxiv:1911.00536", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational license: mit --- ## Finetuned DialoGPT model on Spanish Conversations This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitle dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles). DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536) Some example dialog from the model: |Role | Response | |---------|--------| |User | ¿Cómo está el clima hoy? | | Bot | Muy bien. | |User | Eso es bueno escuchar. ¿Vas a ir a la playa? | | Bot | No, también puedo regresar. | |User | Oh bien, bueno, iré a la playa más tarde. | | Bot | ¡No! | |User | ¿Qué está mal con eso? | | Bot | ¡Cómo que no! | |User | Estoy confundido, ¿por qué no puedo ir a la playa? | | Bot | ¡Cómo que no! | |User | Explícamelo por favor. | | Bot | ¡No! | ## Using the model Example code for trying out the model (taken directly from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card): ```python from transformers import AutoModelWithLMHead, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("ncoop57/DiGPTame-medium") model = AutoModelWithLMHead.from_pretrained("ncoop57/DiGPTame-medium") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last ouput tokens from bot print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ``` ## Training your own model If you would like to finetune your own model or finetune this Spanish model, please checkout my blog post on that exact topic! https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html
mymusise/EasternFantasyNoval
mymusise
2021-05-23T10:42:00Z
5
1
transformers
[ "transformers", "tf", "gpt2", "text-generation", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: zh widget: - text: "今天是下雨天" - text: "走向森林" --- # EasternFantasyNoval # Overview - **Language model**: GPT2-Medium - **Model size**: 1.2GiB - **Language**: Chinese
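No usage snippet is given. A minimal sketch, assuming the repository ships TensorFlow weights (as the tags suggest) and a tokenizer loadable via `AutoTokenizer`; the prompt is one of the card's widget examples:

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("mymusise/EasternFantasyNoval")
model = TFGPT2LMHeadModel.from_pretrained("mymusise/EasternFantasyNoval")

# "Walking into the forest" - one of the widget prompts from the card.
inputs = tokenizer("走向森林", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_length=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```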
mrm8488/distilgpt2-finetuned-wsb-tweets
mrm8488
2021-05-23T10:23:17Z
14
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "wsb", "tweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en tags: - wsb - tweets widget: - text: "Come on guys this is" --- # distilGPT-2 fine-tuned on Kaggle WSB Reddit posts dataset
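The card gives no usage snippet; a quick pipeline sketch using the widget prompt (sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="mrm8488/distilgpt2-finetuned-wsb-tweets")
print(generator("Come on guys this is", max_length=40, do_sample=True, top_k=50)[0]["generated_text"])
```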
mrm8488/GuaPeTe-2-tiny
mrm8488
2021-05-23T10:17:59Z
58
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "spanish", "gpt-2", "spanish gpt2", "es", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: es tags: - spanish - gpt-2 - spanish gpt2 widget: - text: "Murcia es la huerta de Europa porque" --- # GuaPeTe-2-tiny: a proof-of-concept tiny GPT-2-like model trained on a Spanish Wikipedia corpus
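A minimal generation sketch using the widget prompt (decoding settings are illustrative, not from the original card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/GuaPeTe-2-tiny")
model = AutoModelForCausalLM.from_pretrained("mrm8488/GuaPeTe-2-tiny")

prompt = "Murcia es la huerta de Europa porque"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.92,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```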
mrm8488/GPT-2-finetuned-common_gen
mrm8488
2021-05-23T10:12:07Z
135
3
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "dataset:common_gen", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - common_gen widget: - text: "<|endoftext|> apple, tree, pick:" --- # GPT-2 fine-tuned on CommonGen [GPT-2](https://huggingface.co/gpt2) fine-tuned on [CommonGen](https://inklab.usc.edu/CommonGen/index.html) for *Generative Commonsense Reasoning*. ## Details of GPT-2 GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Details of the dataset 📚 CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts; the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total. | Dataset | Split | # samples | | -------- | ----- | --------- | | common_gen | train | 67389 | | common_gen | valid | 4018 | | common_gen | test | 1497 | ## Model fine-tuning 🏋️‍ You can find the fine-tuning script [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) ## Model in Action 🚀 ```bash python ./transformers/examples/text-generation/run_generation.py \ --model_type=gpt2 \ --model_name_or_path="mrm8488/GPT-2-finetuned-common_gen" \ --num_return_sequences 1 \ --prompt "<|endoftext|> kid, room, dance:" \ --stop_token "." ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
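The CLI call above can also be approximated directly in Python. A rough equivalent sketch (the final split on "." is a simplification of the CLI's `--stop_token "."`; sampling settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/GPT-2-finetuned-common_gen")
model = AutoModelForCausalLM.from_pretrained("mrm8488/GPT-2-finetuned-common_gen")

# Concept-set prompt in the format used by the widget and the CLI example.
prompt = "<|endoftext|> kid, room, dance:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=40, do_sample=True, top_p=0.9,
                        pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(output[0], skip_special_tokens=True)
print(text.split(".")[0] + ".")  # keep everything up to the first period
```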
ml6team/gpt-2-medium-conditional-quote-generator
ml6team
2021-05-23T09:38:59Z
23
7
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
This model has been finetuned on the [`Quotes-500K`](https://github.com/ShivaliGoel/Quotes-500K) dataset to generate quotes based on given topics. To generate a quote, use the following input prompt: `Given Topics: topic 1 | topic 2 | ... | topic n. Related Quote: `
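A minimal sketch of feeding that prompt format to the model (the topics and decoding settings are illustrative, not from the original card):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ml6team/gpt-2-medium-conditional-quote-generator")

# Build the documented prompt from a list of topics.
prompt = "Given Topics: life | love | happiness. Related Quote: "
result = generator(prompt, max_length=60, do_sample=True, top_p=0.9, num_return_sequences=1)
# Strip the prompt and keep only the generated quote.
print(result[0]["generated_text"][len(prompt):])
```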
minimaxir/magic-the-gathering
minimaxir
2021-05-23T09:35:52Z
71
3
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# magic-the-gathering A small (~1M parameters) GPT-2 model trained on Magic: The Gathering cards from sets up to and including _Strixhaven_ and _Commander 2021_. The model was trained 8 hours on a V100 on about ~22k unique encoded cards, with 10 permutations of each possible card. Examples of encoded cards: ``` <|toughness|><|text|>Counter target spell unless its controller pays {X}.<|power|><|type|>Instant<|loyalty|><|manaCost|>{X}{U}<|name|>Clash of Wills ``` ``` <|loyalty|><|text|>~ enters the battlefield tapped. {T}: Add {C}. {T}: Add {U} or {R}. ~ deals 1 damage to you.<|toughness|><|name|>Caldera Lake<|power|><|manaCost|><|type|>Land ``` ``` <|loyalty|>5<|text|>+1: Scry 1, then draw a card. −2: Return target creature to its owner's hand. −8: You get an emblem with "Whenever an opponent casts their first spell each turn, counter that spell."<|name|>Jace, Unraveler of Secrets<|toughness|><|type|>Legendary Planeswalker — Jace<|manaCost|>{3}{U}{U}<|power|> ``` The generated cards follow a similar schema, however because the model learns all possible permutations of the schema, the user can prompt the generation with any combination of schema.
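Assuming the repository ships a tokenizer that knows the schema tokens above (an assumption, not stated in the card), prompting could look roughly like this sketch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("minimaxir/magic-the-gathering")
model = AutoModelForCausalLM.from_pretrained("minimaxir/magic-the-gathering")

# Start from a partial schema (here fixing the type and mana cost) and let the
# model fill in the remaining fields of the card.
prompt = "<|type|>Instant<|manaCost|>{U}{U}<|text|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=128, do_sample=True, temperature=0.9,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```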
microsoft/DialogRPT-width
microsoft
2021-05-23T09:20:20Z
41
1
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "arxiv:2009.06978", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Demo Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) | Context | Response | `width` score | | :------ | :------- | :------------: | | I love NLP! | Can anyone recommend a nice review paper? | 0.701 | | I love NLP! | Me too! | 0.029 | The `width` score predicts how likely the response is getting replied. # DialogRPT-width ### Dialog Ranking Pretrained Transformers > How likely a dialog response is upvoted 👍 and/or gets replied 💬? This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is learned to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/) trained on 100 + millions of human feedback data. It can be used to improve existing dialog generation model (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates. Quick Links: * [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/) * [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT) * [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) We considered the following tasks and provided corresponding pretrained models. |Task | Description | Pretrained model | | :------------- | :----------- | :-----------: | | **Human feedback** | **given a context and its two human responses, predict...**| | `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) | | `width`| ... which gets more direct replies? | this model | | `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) | | **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** | | `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) | | `human_vs_machine`| ... a machine generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) | ### Contact: Please create an issue on [our repo](https://github.com/golsun/DialogRPT) ### Citation: ``` @inproceedings{gao2020dialogrpt, title={Dialogue Response RankingTraining with Large-Scale Human Feedback Data}, author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan}, year={2020}, booktitle={EMNLP} } ```
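The card points to the Colab demo rather than showing code inline. A scoring sketch in the style of the DialogRPT repository, where the context and candidate response are joined by the end-of-text token (treat the exact preprocessing as an assumption and defer to the repo if in doubt):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-width")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-width")

def score(context, response):
    # Context and candidate response are concatenated with the EOS token.
    inputs = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    logits = model(inputs, return_dict=True).logits
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Can anyone recommend a nice review paper?"))  # higher `width` score
print(score("I love NLP!", "Me too!"))                                    # lower `width` score
```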
microsoft/DialogRPT-human-vs-machine
microsoft
2021-05-23T09:16:47Z
20
5
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "arxiv:2009.06978", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Demo Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) | Context | Response | `human_vs_machine` score | | :------ | :------- | :------------: | | I love NLP! | I'm not sure if it's a good idea. | 0.000 | | I love NLP! | Me too! | 0.605 | The `human_vs_machine` score predicts how likely the response is from a human rather than a machine. # DialogRPT-human-vs-machine ### Dialog Ranking Pretrained Transformers > How likely a dialog response is upvoted 👍 and/or gets replied 💬? This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is learned to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/) trained on 100 + millions of human feedback data. It can be used to improve existing dialog generation model (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates. Quick Links: * [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/) * [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT) * [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) We considered the following tasks and provided corresponding pretrained models. |Task | Description | Pretrained model | | :------------- | :----------- | :-----------: | | **Human feedback** | **given a context and its two human responses, predict...**| | `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) | | `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) | | `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) | | **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** | | `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) | | `human_vs_machine`| ... a machine generated response | this model | ### Contact: Please create an issue on [our repo](https://github.com/golsun/DialogRPT) ### Citation: ``` @inproceedings{gao2020dialogrpt, title={Dialogue Response RankingTraining with Large-Scale Human Feedback Data}, author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan}, year={2020}, booktitle={EMNLP} } ```
microsoft/DialogRPT-depth
microsoft
2021-05-23T09:15:24Z
57
5
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "arxiv:2009.06978", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Demo Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) | Context | Response | `depth` score | | :------ | :------- | :------------: | | I love NLP! | Can anyone recommend a nice review paper? | 0.724 | | I love NLP! | Me too! | 0.032 | The `depth` score predicts how likely the response is getting a long follow-up discussion thread. # DialogRPT-depth ### Dialog Ranking Pretrained Transformers > How likely a dialog response is upvoted 👍 and/or gets replied 💬? This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is learned to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/) trained on 100 + millions of human feedback data. It can be used to improve existing dialog generation model (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates. Quick Links: * [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/) * [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT) * [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) We considered the following tasks and provided corresponding pretrained models. |Task | Description | Pretrained model | | :------------- | :----------- | :-----------: | | **Human feedback** | **given a context and its two human responses, predict...**| | `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) | | `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) | | `depth`| ... which gets longer follow-up thread? | this model | | **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** | | `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) | | `human_vs_machine`| ... a machine generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) | ### Contact: Please create an issue on [our repo](https://github.com/golsun/DialogRPT) ### Citation: ``` @inproceedings{gao2020dialogrpt, title={Dialogue Response RankingTraining with Large-Scale Human Feedback Data}, author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan}, year={2020}, booktitle={EMNLP} } ```
model-attribution-challenge/DialoGPT-large
model-attribution-challenge
2021-05-23T09:06:08Z
3
1
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "conversational", "arxiv:1911.00536", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-26T13:35:04Z
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- ## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT) DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations. The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogue from Reddit discussion thread. * Multi-turn generation examples from an interactive environment: |Role | Response | |---------|--------| |User | Does money buy happiness? | | Bot | Depends how much money you spend on it .| |User | What is the best way to buy happiness ? | | Bot | You just have to be a millionaire by your early 20s, then you can be happy . | |User |This is so difficult ! | | Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money | Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT) ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536) ### How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large") model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last ouput tokens from bot print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
Martian/Neo-GPT-Title-Generation-Electric-Car
Martian
2021-05-23T08:56:08Z
21
1
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: - en widget: - text: Tesla range - text: Nissan Leaf is - text: Tesla is - text: The best electric car --- # Neo-GPT-Title-Generation-Electric-Car Title generator based on GPT-Neo 125M, fine-tuned on a dataset of 39k URL titles. All URLs were selected from the top 10 Google results for a list of keywords around "Electric car" and "Electric car for sale". # Pipeline example ```python from transformers import GPT2Tokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car') tokenizer = GPT2Tokenizer.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car', bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>') prompt = "<|startoftext|> Electric car" input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate(input_ids, do_sample=True, top_k=100, min_length=30, max_length=150, top_p=0.90, num_return_sequences=20) list_title_gen = [] for i, sample_output in enumerate(gen_tokens): title = tokenizer.decode(sample_output, skip_special_tokens=True) list_title_gen.append(title) # keep only the text before the first separator found in each title list_title_gen = [next((t.split(sep)[0] for sep in (' | ', ' - ', ' — ') if sep in t), t) for t in list_title_gen] list_title_gen = [sub.replace('�', ' ').replace('\\r',' ').replace('\ ',' ').replace('\\t', ' ').replace('\\xa0', '') for sub in list_title_gen] list_title_gen = [sub if sub != '<|startoftext|> Electric car' else '' for sub in list_title_gen] for i in list_title_gen: print(i) ``` # Todo - Improve the quality of the training sample - Add more data
manav/dialogpt-large-kanye-reddit
manav
2021-05-23T08:48:22Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational --- ## Model description Finetuned version of DialogPT-large released. Finetuned on data scraped from the r/Kanye subreddit. The data wasn't thoroughly vetted so the model may display biases that I am unaware of, so tread with caution when using this model until further analysis of its biases can be performed.
lysandre/arxiv
lysandre
2021-05-23T08:44:27Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# ArXiv GPT-2 checkpoint This is a GPT-2 small checkpoint for PyTorch. It is the official `gpt2` (small) checkpoint fine-tuned on ArXiv papers from physics fields. ## Training data This model was trained on a subset of ArXiv papers that were parsed from PDF to txt. The resulting data consists of 130MB of text, mostly from quantum physics (quant-ph) and other physics sub-fields.
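No usage snippet is given. A minimal sketch, assuming the standard GPT-2 tokenizer files are bundled with the checkpoint (the prompt is an invented physics-flavoured example):

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("lysandre/arxiv")
model = GPT2LMHeadModel.from_pretrained("lysandre/arxiv")

prompt = "We study the entanglement entropy of"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=50,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```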
lvwerra/gpt2-imdb
lvwerra
2021-05-23T08:38:34Z
44,499
16
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# GPT2-IMDB ## What is it? A GPT2 (`gpt2`) language model fine-tuned on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). ## Training setting The GPT2 language model was fine-tuned for 1 epoch on the IMDB dataset. All reviews were joined into a single text file separated by the EOS token: ``` import pandas as pd df = pd.read_csv("imdb-dataset.csv") imdb_str = " <|endoftext|> ".join(df['review'].tolist()) with open('imdb.txt', 'w') as f: f.write(imdb_str) ``` To train the model, the `run_language_modeling.py` script from the `transformers` library was used: ``` python run_language_modeling.py --train_data_file imdb.txt --output_dir gpt2-imdb --model_type gpt2 --model_name_or_path gpt2 ```
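A generation sketch (not part of the original card; the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="lvwerra/gpt2-imdb")
print(generator("This movie was", max_length=50, do_sample=True, top_k=50)[0]["generated_text"])
```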
A-bhimany-u08/bert-base-cased-qqp
A-bhimany-u08
2021-05-23T06:58:51Z
29
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "dataset:qqp", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- inference: False datasets: - qqp --- A bert-base-cased model fine-tuned on the Quora Question Pairs (QQP) dataset. The task is to predict whether the two given sentences (questions) are `not_duplicate` (label 0) or `duplicate` (label 1). The model achieves 89% evaluation accuracy.
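A minimal scoring sketch for a question pair, using the label convention stated above (the example questions are invented):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("A-bhimany-u08/bert-base-cased-qqp")
model = AutoModelForSequenceClassification.from_pretrained("A-bhimany-u08/bert-base-cased-qqp")

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print("duplicate" if pred == 1 else "not_duplicate")
```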
ktrapeznikov/gpt2-medium-topic-small-set
ktrapeznikov
2021-05-23T06:21:38Z
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en thumbnail: widget: - text: "topic climate source" --- # GPT2-medium-topic-news ## Model description GPT2-medium fine tuned on a small news corpus conditioned on a topic, source, title ## Intended uses & limitations #### How to use To generate a news article text conditioned on a topic, source, title or some subsets, prompt model with: ```python f"topic {topic} source" f"topic {topic} source {source} title" f"topic {topic} source {source} title {title} body" ``` Try the following tags for `topic: climate, weather, vaccination`. Zero shot generation works pretty well as long as `topic` is a single word and not too specific. ```python device = "cuda:0" tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set") model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set") model.to(device) topic = "climate" prompt = tokenizer(f"topic {topics} source straitstimes title", return_tensors="pt") out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9) print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True)) ``` ## Sample Output >[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ... >[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ... >[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ... >[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do "nothing". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ... 
>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a "provocation" that would have "a devastating effect" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ... >[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its "great power position to actively an ... ## Training data ## Training procedure
ktrapeznikov/gpt2-medium-topic-news-v2
ktrapeznikov
2021-05-23T06:14:58Z
9
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en thumbnail: widget: - text: "topic climate source washington post title " --- # GPT2-medium-topic-news ## Model description GPT2-medium fine-tuned on a largish news corpus, conditioned on a topic, source and title. ## Intended uses & limitations #### How to use To generate a news article text conditioned on a topic, source, title or some subset of them, prompt the model with: ```python f"topic {topic} source" f"topic {topic} source {source} title" f"topic {topic} source {source} title {title} body" ``` Try the following tags for `topic: climate, weather, vaccination`. Zero-shot generation works pretty well as long as `topic` is a single word and not too specific. ```python from transformers import AutoTokenizer, AutoModelWithLMHead device = "cuda:0" tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-news-v2") model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-news-v2") model.to(device) topic = "climate" prompt = tokenizer(f"topic {topic} source straitstimes title", return_tensors="pt") out = model.generate(prompt["input_ids"].to(device), do_sample=True, max_length=500, early_stopping=True, top_p=.9) print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True)) ```
kiri-ai/gpt2-large-quantized
kiri-ai
2021-05-23T06:13:04Z
14
0
transformers
[ "transformers", "gpt2", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en --- # PyTorch int8 quantized version of gpt2-large ## Usage Download the .bin file locally and load it with: ```python import torch model = torch.load("path/to/pytorch_model_quantized.bin") ``` The rest of the usage follows the [original instructions](https://huggingface.co/gpt2-large).
huggingtweets/zvisrosen
huggingtweets
2021-05-23T05:27:42Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/zvisrosen/1607051627200/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/793249343713243137/L-ZrfLj5_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Zvi S. Rosen 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@zvisrosen bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@zvisrosen's tweets](https://twitter.com/zvisrosen). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>3232</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>225</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>85</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>2922</td> </tr> </tbody> </table> [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3awttigi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zvisrosen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rths1wy) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rths1wy/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/zvisrosen'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets) <!--- random size file -->
huggingtweets/zkarlinn
huggingtweets
2021-05-23T05:23:15Z
6
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/zkarlinn/1612670051245/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/939167998228795392/-tdbboDI_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Zev Karlin-Neumann 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@zkarlinn bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@zkarlinn's tweets](https://twitter.com/zkarlinn). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>3225</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>2237</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>187</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>801</td> </tr> </tbody> </table> [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/260jmvfw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zkarlinn's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/67rfsha0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/67rfsha0/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/zkarlinn'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)