Columns (name: type, observed range), in the order the fields appear in each record below:
modelId: string (length 4-112)
sha: string (length 40)
lastModified: string (length 24)
tags: sequence
pipeline_tag: string (29 classes)
private: bool (1 class)
author: string (length 2-38)
config: null
id: string (length 4-112)
downloads: float64 (0-36.8M)
likes: float64 (0-712)
library_name: string (17 classes)
__index_level_0__: int64 (0-38.5k)
readme: string (length 0-186k)
ramsrigouthamg/t5_paraphraser
d78f7749656e21d8b6fdf372efb5c5d1dbce577f
2020-12-11T22:00:04.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ramsrigouthamg
null
ramsrigouthamg/t5_paraphraser
9,713
6
transformers
700
## Model in Action 🚀 ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer def set_seed(seed): torch.manual_seed(seed) if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) set_seed(42) model = T5ForConditionalGeneration.from_pretrained('ramsrigouthamg/t5_paraphraser') tokenizer = T5Tokenizer.from_pretrained('ramsrigouthamg/t5_paraphraser') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print ("device ",device) model = model.to(device) sentence = "Which course should I take to get started in data science?" # sentence = "What are the ingredients required to bake a perfect cake?" # sentence = "What is the best possible approach to learn aeronautical engineering?" # sentence = "Do apples taste better than oranges in general?" text = "paraphrase: " + sentence + " </s>" max_len = 256 encoding = tokenizer.encode_plus(text, max_length=max_len, padding="max_length", truncation=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) # sample with top_k = 120 and top_p = 0.98, returning 10 candidate paraphrases beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, do_sample=True, max_length=256, top_k=120, top_p=0.98, early_stopping=True, num_return_sequences=10 ) print ("\nOriginal Question ::") print (sentence) print ("\n") print ("Paraphrased Questions :: ") final_outputs = [] for beam_output in beam_outputs: sent = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True) if sent.lower() != sentence.lower() and sent not in final_outputs: final_outputs.append(sent) for i, final_output in enumerate(final_outputs): print("{}: {}".format(i, final_output)) ``` ## Output ``` Original Question :: Which course should I take to get started in data science? Paraphrased Questions :: 0: What should I learn to become a data scientist? 1: How do I get started with data science? 2: How would you start a data science career? 3: How can I start learning data science? 4: How do you get started in data science? 5: What's the best course for data science? 6: Which course should I start with for data science? 7: What courses should I follow to get started in data science? 8: What degree should be taken by a data scientist? 9: Which course should I follow to become a Data Scientist? ``` ## Detailed blog post available here : https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555
valhalla/t5-small-e2e-qg
feec82746b18ab037724c14f11277f320bd73920
2021-07-30T13:10:33.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:squad", "arxiv:1910.10683", "transformers", "question-generation", "license:mit", "autotrain_compatible" ]
text2text-generation
false
valhalla
null
valhalla/t5-small-e2e-qg
9,563
3
transformers
701
--- datasets: - squad tags: - question-generation widget: - text: "Python is developed by Guido Van Rossum and released in 1991. </s>" license: mit --- ## T5 for question-generation This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task. Simply input the text and the model will generate multiple questions. You can play with the model using the inference API: just put in the text and see the results! For more details see [this](https://github.com/patil-suraj/question_generation) repo. ### Model in action 🚀 You'll need to clone the [repo](https://github.com/patil-suraj/question_generation). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb) ```python3 from pipelines import pipeline text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \ and first released in 1991, Python's design philosophy emphasizes code \ readability with its notable use of significant whitespace." nlp = pipeline("e2e-qg") nlp(text) => [ 'Who created Python?', 'When was Python first released?', "What is Python's design philosophy?" ] ```
KoboldAI/GPT-J-6B-Skein
95a7ea75328cc8e117fdbf967b9fa12f49d1d24c
2022-03-14T22:44:49.000Z
[ "pytorch", "gptj", "text-generation", "transformers" ]
text-generation
false
KoboldAI
null
KoboldAI/GPT-J-6B-Skein
9,531
null
transformers
702
Entry not found
allenai/longformer-large-4096-finetuned-triviaqa
4a10c0999bd77b29f6fd122663787c770afa197e
2021-03-10T02:31:53.000Z
[ "pytorch", "tf", "longformer", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
allenai
null
allenai/longformer-large-4096-finetuned-triviaqa
9,500
null
transformers
703
Entry not found
ImAPizza/DialoGPT-medium-alberttwo
bedcf2148b3c45ebc5c0c8632d41fe4f4cde1d9f
2021-08-29T13:39:41.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ImAPizza
null
ImAPizza/DialoGPT-medium-alberttwo
9,477
1
transformers
704
--- tags: - conversational --- # Alberttwo DialoGPT Model
google/long-t5-tglobal-base
c910dec42392b5586a643ee547d65a9f111059eb
2022-06-22T09:05:39.000Z
[ "pytorch", "jax", "longt5", "text2text-generation", "en", "arxiv:2112.07916", "arxiv:1912.08777", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/long-t5-tglobal-base
9,314
1
transformers
705
--- license: apache-2.0 language: en --- # LongT5 (transient-global attention, base-sized model) LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x). Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence. LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens). ## Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you. ### How to use ```python from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base") model = LongT5Model.from_pretrained("google/long-t5-tglobal-base") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} } ```
BM-K/KoSimCSE-roberta-multitask
2b1aaf3c27691ae2c06cc65387c6f1d60ea6eef0
2022-06-03T01:48:14.000Z
[ "pytorch", "roberta", "feature-extraction", "ko", "transformers", "korean" ]
feature-extraction
false
BM-K
null
BM-K/KoSimCSE-roberta-multitask
9,306
1
transformers
706
--- language: ko tags: - korean --- https://github.com/BM-K/Sentence-Embedding-is-all-you-need # Korean-Sentence-Embedding 🍭 Korean sentence embedding repository. You can download the pre-trained models and run inference right away, and it also provides environments where individuals can train models. ## Quick tour ```python import torch from transformers import AutoModel, AutoTokenizer def cal_score(a, b): if len(a.shape) == 1: a = a.unsqueeze(0) if len(b.shape) == 1: b = b.unsqueeze(0) a_norm = a / a.norm(dim=1)[:, None] b_norm = b / b.norm(dim=1)[:, None] return torch.mm(a_norm, b_norm.transpose(0, 1)) * 100 model = AutoModel.from_pretrained('BM-K/KoSimCSE-roberta-multitask') tokenizer = AutoTokenizer.from_pretrained('BM-K/KoSimCSE-roberta-multitask') sentences = ['치타가 들판을 가로 질러 먹이를 쫓는다.', '치타 한 마리가 먹이 뒤에서 달리고 있다.', '원숭이 한 마리가 드럼을 연주한다.'] inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt") embeddings, _ = model(**inputs, return_dict=False) score01 = cal_score(embeddings[0][0], embeddings[1][0]) score02 = cal_score(embeddings[0][0], embeddings[2][0]) ``` ## Performance - Semantic Textual Similarity test set results <br> | Model | AVG | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman | |------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| | KoSBERT<sup>†</sup><sub>SKT</sub> | 77.40 | 78.81 | 78.47 | 77.68 | 77.78 | 77.71 | 77.83 | 75.75 | 75.22 | | KoSBERT | 80.39 | 82.13 | 82.25 | 80.67 | 80.75 | 80.69 | 80.78 | 77.96 | 77.90 | | KoSRoBERTa | 81.64 | 81.20 | 82.20 | 81.79 | 82.34 | 81.59 | 82.20 | 80.62 | 81.25 | | | | | | | | | | | | KoSentenceBART | 77.14 | 79.71 | 78.74 | 78.42 | 78.02 | 78.40 | 78.00 | 74.24 | 72.15 | | KoSentenceT5 | 77.83 | 80.87 | 79.74 | 80.24 | 79.36 | 80.19 | 79.27 | 72.81 | 70.17 | | | | | | | | | | | | KoSimCSE-BERT<sup>†</sup><sub>SKT</sub> | 81.32 | 82.12 | 82.56 | 81.84 | 81.63 | 81.99 | 81.74 | 79.55 | 79.19 | | KoSimCSE-BERT | 83.37 | 83.22 | 83.58 | 83.24 | 83.60 | 83.15 | 83.54 | 83.13 | 83.49 | | KoSimCSE-RoBERTa | 83.65 | 83.60 | 83.77 | 83.54 | 83.76 | 83.55 | 83.77 | 83.55 | 83.64 | | | | | | | | | | | | | KoSimCSE-BERT-multitask | 85.71 | 85.29 | 86.02 | 85.63 | 86.01 | 85.57 | 85.97 | 85.26 | 85.93 | | KoSimCSE-RoBERTa-multitask | 85.77 | 85.08 | 86.12 | 85.84 | 86.12 | 85.83 | 86.12 | 85.03 | 85.99 |
openclimatefix/nowcasting_cnn_v3
f083f2c4de6ec7a0e5acbff167cb817c506d6113
2022-07-18T15:51:53.000Z
[ "pytorch", "transformers", "nowcasting", "forecasting", "timeseries", "remote-sensing", "license:mit" ]
null
false
openclimatefix
null
openclimatefix/nowcasting_cnn_v3
9,283
null
transformers
707
--- license: mit tags: - nowcasting - forecasting - timeseries - remote-sensing --- # Nowcasting CNN ## Model description A 3D convolutional model that takes in several data streams. The architecture is roughly: 1. The satellite image time series goes through a stack of 3D convolution layers. 2. The NWP (numerical weather prediction) time series goes through a stack of 3D convolution layers. 3. The final convolutional layer feeds into a fully connected layer. This is joined by other data inputs such as - PV yield - time variables Then ~4 fully connected layers produce the forecast of PV yield / GSP output into the future. ## Intended uses & limitations Forecasting short-term PV power for different regions and nationally in the UK. ## How to use [More information needed] ## Limitations and bias [More information needed] ## Training data Training data is EUMETSAT RSS imagery over the UK, on-the-ground PV data, and NWP predictions. ## Training procedure [More information needed] ## Evaluation results [More information needed]
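The "How to use" section above is still empty. As a stopgap, the checkpoint files can at least be fetched from the Hub with `huggingface_hub` (a minimal sketch; loading the weights into the 3D-CNN described above still requires the project's own model code from the openclimatefix repositories):

```python
from huggingface_hub import HfApi, hf_hub_download

repo_id = "openclimatefix/nowcasting_cnn_v3"

# Discover which files the repo actually contains, then cache them locally.
filenames = HfApi().list_repo_files(repo_id)
local_paths = {name: hf_hub_download(repo_id=repo_id, filename=name) for name in filenames}
print(local_paths)
```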
google/bert_uncased_L-4_H-512_A-8
606e4d55252882ac25ba1f1d1a182075830f5a90
2021-05-19T17:30:51.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-4_H-512_A-8
9,254
null
transformers
708
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
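The card lists the 24 checkpoints but no loading snippet. A minimal sketch for the checkpoint this entry points at (google/bert_uncased_L-4_H-512_A-8, i.e. BERT-Small) with the standard `transformers` API; the input sentence is arbitrary:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "google/bert_uncased_L-4_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("BERT miniatures are handy for constrained environments.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Hidden size is 512 for the L-4_H-512 (BERT-Small) variant.
print(outputs.last_hidden_state.shape)
```

For the distillation setup described above, this student would typically be fine-tuned on teacher-produced labels via `AutoModelForSequenceClassification` in the same way as the original BERT models.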
facebook/wav2vec2-xls-r-300m
e842f378fdbdb09aabc11d87c52f26b8f2dde333
2021-11-18T16:32:15.000Z
[ "pytorch", "wav2vec2", "pretraining", "multilingual", "dataset:common_voice", "dataset:multilingual_librispeech", "arxiv:2111.09296", "transformers", "speech", "xls_r", "xls_r_pretrained", "license:apache-2.0" ]
null
false
facebook
null
facebook/wav2vec2-xls-r-300m
9,246
22
transformers
709
--- language: multilingual datasets: - common_voice - multilingual_librispeech tags: - speech - xls_r - xls_r_pretrained license: apache-2.0 --- # Wav2Vec2-XLS-R-300M [Facebook's Wav2Vec2 XLS-R](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) counting **300 million** parameters. ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/xls_r.png) XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages. When using the model make sure that your speech input is sampled at 16kHz. **Note**: This model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation, or Classification. Check out [**this blog**](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about ASR. [XLS-R Paper](https://arxiv.org/abs/2111.09296) Authors: Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli **Abstract** This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on 436K hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 20%-33% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this google colab](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb) for more information on how to fine-tune the model. You can find other pretrained XLS-R models with different numbers of parameters: * [300M parameters version](https://huggingface.co/facebook/wav2vec2-xls-r-300m) * [1B version version](https://huggingface.co/facebook/wav2vec2-xls-r-1b) * [2B version version](https://huggingface.co/facebook/wav2vec2-xls-r-2b)
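Since the card only links the fine-tuning blog, here is a minimal sketch of pulling frame-level features from the pretrained checkpoint (input must be sampled at 16 kHz as noted above; the zero waveform is a placeholder for real speech, and for ASR you would still fine-tune with a CTC head as the blog describes):

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-xls-r-300m"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# One second of silence as a stand-in for real 16 kHz speech.
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, 1024)
print(hidden_states.shape)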
sshleifer/tiny-distilbert-base-cased
657df2b83a6986d88e4f528740259c9b49f796b1
2021-05-20T07:12:39.000Z
[ "pytorch", "tf", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
sshleifer
null
sshleifer/tiny-distilbert-base-cased
9,211
1
transformers
710
Entry not found
nghuyong/ernie-1.0
b06176bf30ecf544330ab008933c9ac1012f1a6d
2021-05-20T01:40:40.000Z
[ "pytorch", "tf", "jax", "bert", "zh", "arxiv:1904.09223", "transformers" ]
null
false
nghuyong
null
nghuyong/ernie-1.0
9,177
9
transformers
711
--- language: zh --- # ERNIE-1.0 ## Introduction ERNIE (Enhanced Representation through kNowledge IntEgration) is proposed by Baidu in 2019, which is designed to learn language representation enhanced by knowledge masking strategies i.e. entity-level masking and phrase-level masking. Experimental results show that ERNIE achieve state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. More detail: https://arxiv.org/abs/1904.09223 ## Released Model Info |Model Name|Language|Model Structure| |:---:|:---:|:---:| |ernie-1.0| Chinese |Layer:12, Hidden:768, Heads:12| This released pytorch model is converted from the officially released PaddlePaddle ERNIE model and a series of experiments have been conducted to check the accuracy of the conversion. - Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE - Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch ## How to use ```Python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0") model = AutoModel.from_pretrained("nghuyong/ernie-1.0") ``` ## Citation ```bibtex @article{sun2019ernie, title={Ernie: Enhanced representation through knowledge integration}, author={Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Chen, Xuyi and Zhang, Han and Tian, Xin and Zhu, Danxiang and Tian, Hao and Wu, Hua}, journal={arXiv preprint arXiv:1904.09223}, year={2019} } ```
allenai/longformer-large-4096
cfa97f5f8c58c219bfea4da030a0259d5dbb28c4
2021-03-10T02:31:17.000Z
[ "pytorch", "tf", "longformer", "transformers" ]
null
false
allenai
null
allenai/longformer-large-4096
9,152
9
transformers
712
Entry not found
castorini/monot5-base-msmarco-10k
f15657ab3d2a5dd0b9a30c8c0b6a0a73c9cb5884
2021-10-17T11:24:22.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
castorini
null
castorini/monot5-base-msmarco-10k
9,101
3
transformers
713
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than `monot5-base-msmarco`, i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
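For readers who want to rerank passages without installing pygaggle, a minimal sketch of the monoT5 scoring recipe with plain `transformers` (the `Query: ... Document: ... Relevant:` prompt and the true/false token scoring follow the pygaggle convention; treat the exact formatting as an assumption to check against that repo, and the query/documents here are made up):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "castorini/monot5-base-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id).eval()

query = "how do capacitors store energy"
docs = [
    "A capacitor stores energy in the electric field between its plates.",
    "The capital of France is Paris.",
]

# monoT5 scores a (query, document) pair by the probability of generating
# "true" vs. "false" as the first decoded token after the prompt below.
true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
start_id = model.config.decoder_start_token_id

scores = []
with torch.no_grad():
    for doc in docs:
        prompt = f"Query: {query} Document: {doc} Relevant:"
        enc = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
        out = model(**enc, decoder_input_ids=torch.tensor([[start_id]]))
        logits = out.logits[0, -1, [false_id, true_id]]
        scores.append(torch.log_softmax(logits, dim=-1)[1].item())

# Higher score = more relevant to the query.
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```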
lysandre/tiny-tapas-random-wtq
82ff80f61b524e1e9dfd55636bf471f1f4bb0045
2020-12-15T04:19:58.000Z
[ "pytorch", "tapas", "table-question-answering", "transformers" ]
table-question-answering
false
lysandre
null
lysandre/tiny-tapas-random-wtq
9,078
null
transformers
714
Entry not found
TurkuNLP/eccobert-base-cased-v1
800ade528925e578acfbec3668da3d3ad2dfaee1
2022-04-13T16:57:18.000Z
[ "pytorch", "bert", "pretraining", "en", "transformers" ]
null
false
TurkuNLP
null
TurkuNLP/eccobert-base-cased-v1
9,071
null
transformers
715
--- language: en --- # ECCO-BERT base model (cased) A pretrained BERT model trained exclusively on the ECCO (Eighteenth Century Collections Online) dataset of digitized documents published during the 18th century in the United Kingdom. The model is equivalent in size to [bert-base-cased](https://huggingface.co/bert-base-cased). The model is intended for fine-tuning on various tasks that use the ECCO dataset. Documentation in progress...
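Until fuller documentation lands, a minimal sketch of probing the pretrained checkpoint with the fill-mask pipeline (the example sentence is made up; since the checkpoint was saved with a pre-training head, `transformers` may warn about unused weights when it builds the MLM pipeline):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="TurkuNLP/eccobert-base-cased-v1")
for prediction in fill_mask("The [MASK] of London is a very populous city."):
    print(prediction["token_str"], round(prediction["score"], 3))
```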
abjbpi/Dwight_Schrute
451aab582fe08f5210a58859f9ec1c79278e341b
2021-06-04T11:43:31.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
abjbpi
null
abjbpi/Dwight_Schrute
9,070
2
transformers
716
--- tags: - conversational --- # My Awesome Model
DeepChem/ChemBERTa-77M-MLM
ed8a5374f2024ec8da53760af91a33fb8f6a15ff
2022-01-20T18:02:38.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
DeepChem
null
DeepChem/ChemBERTa-77M-MLM
9,026
1
transformers
717
Entry not found
zenham/khemx_m_e4_16h
08ed457ad68559c2c845dbb6112e84e6cdb00e6f
2022-03-08T02:50:45.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
zenham
null
zenham/khemx_m_e4_16h
9,015
null
transformers
718
--- tags: - conversational --- # khemx m e4 16h 0k DialoGPT Model
kha-white/manga-ocr-base
aa6573bd10b0d446cbf622e29c3e084914df9741
2022-06-22T15:34:05.000Z
[ "pytorch", "vision-encoder-decoder", "ja", "dataset:manga109s", "transformers", "image-to-text", "license:apache-2.0" ]
image-to-text
false
kha-white
null
kha-white/manga-ocr-base
8,969
5
transformers
719
--- language: ja tags: - image-to-text license: apache-2.0 datasets: - manga109s --- # Manga OCR Optical character recognition for Japanese text, with the main focus being Japanese manga. It uses the [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) framework. Manga OCR can be used as a general-purpose printed Japanese OCR, but its main goal is to provide high-quality text recognition that is robust against various scenarios specific to manga: - both vertical and horizontal text - text with furigana - text overlaid on images - wide variety of fonts and font styles - low quality images Code is available [here](https://github.com/kha-white/manga_ocr).
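The card links the project code but does not show a call. A minimal sketch with the `transformers` image-to-text pipeline (assumes a reasonably recent `transformers`; `page.png` is a placeholder for a manga page or text crop, and the linked repository also ships a dedicated `manga-ocr` Python package that wraps this model):

```python
from transformers import pipeline

ocr = pipeline("image-to-text", model="kha-white/manga-ocr-base")
print(ocr("page.png"))  # e.g. [{'generated_text': '...recognized Japanese text...'}]
```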
Zixtrauce/BDBot4Epoch
77357067c689ccb8c19220a32137eb8646bf87e5
2022-01-01T23:46:44.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Zixtrauce
null
Zixtrauce/BDBot4Epoch
8,905
null
transformers
720
--- tags: - conversational --- # BrandonBot4Epochs
google/t5-base-lm-adapt
82aa560c46d415609fa3403f4e94d2c1a90923af
2021-11-01T14:01:15.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "t5-lm-adapt", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/t5-base-lm-adapt
8,874
6
transformers
721
--- language: en datasets: - c4 tags: - t5-lm-adapt license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted ## Version 1.1 - LM-Adapted [T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-base): - GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202). - Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning. - Pre-trained on C4 only without mixing in the downstream tasks. - no parameter sharing between embedding and classifier layer - "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`. In addition, this checkpoint is pretrained on both the denoising and the language modeling objectives. More specifically, it is initialized from [T5 Version 1.1 - Base](https://huggingface.co/google/t5-v1_1-base) and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). This adaptation improves the ability of the model to be used for prompt tuning. **Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp). Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Abstract Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
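As a quick sanity check of the LM adaptation, a minimal sketch of prompting the checkpoint directly (assumes a recent `transformers`; the prompt is arbitrary, and since the model is LM-adapted but not instruction-tuned, raw completions can be rough; the checkpoint is normally used as a starting point for prompt tuning or fine-tuning):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-base-lm-adapt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

prompt = "The best thing about transfer learning is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```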
princeton-nlp/unsup-simcse-roberta-base
db28710348cf9f33a2be25c505f98f0fbbbfe768
2021-06-16T12:12:10.000Z
[ "pytorch", "jax", "roberta", "feature-extraction", "transformers" ]
feature-extraction
false
princeton-nlp
null
princeton-nlp/unsup-simcse-roberta-base
8,866
null
transformers
722
Entry not found
sberbank-ai/mGPT
9f49a85776d5ec166120ea81719987fe0f643574
2022-04-21T18:06:50.000Z
[ "pytorch", "gpt2", "text-generation", "en", "az", "sw", "af", "ar", "ba", "be", "bxr", "bg", "bn", "cv", "hy", "da", "de", "el", "es", "eu", "fa", "fi", "fr", "he", "hi", "hu", "kk", "id", "it", "ja", "ka", "ky", "ko", "lt", "lv", "mn", "ml", "os", "mr", "ms", "my", "nl", "ro", "pl", "pt", "sah", "ru", "tg", "sv", "ta", "te", "tk", "th", "tr", "tl", "tt", "tyv", "uk", "ur", "vi", "uz", "yo", "zh", "xal", "dataset:mc4", "dataset:wikipedia", "arxiv:2112.10668", "arxiv:2204.07580", "transformers", "multilingual", "PyTorch", "Transformers", "gpt3", "Deepspeed", "Megatron", "license:apache-2.0" ]
text-generation
false
sberbank-ai
null
sberbank-ai/mGPT
8,865
56
transformers
723
--- license: apache-2.0 language: - en - az - sw - af - ar - ba - be - bxr - bg - bn - cv - hy - da - de - el - es - eu - fa - fi - fr - he - hi - hu - kk - id - it - ja - ka - ky - ko - lt - lv - mn - ml - os - mr - ms - my - nl - ro - pl - pt - sah - ru - tg - sv - ta - te - tk - th - tr - tl - tt - tyv - uk - en - ur - vi - uz - yo - zh - xal pipeline_tag: text-generation tags: - multilingual - PyTorch - Transformers - gpt3 - gpt2 - Deepspeed - Megatron datasets: - mc4 - wikipedia thumbnail: "https://github.com/sberbank-ai/mgpt" --- # Multilingual GPT model We introduce a family of autoregressive GPT-like models with 1.3 billion parameters trained on 60 languages from 25 language families using Wikipedia and Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism, [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allows us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models at the same time covering more languages and enhancing NLP possibilities for low resource languages. ## Code The source code for the mGPT XL model is available on [Github](https://github.com/sberbank-ai/mgpt) ## Paper mGPT: Few-Shot Learners Go Multilingual [Abstract](https://arxiv.org/abs/2204.07580) [PDF](https://arxiv.org/pdf/2204.07580.pdf) ![](https://habrastorage.org/webt/1q/ru/yt/1qruytul6m2m-upyk9frq3pgrds.png) ``` @misc{https://doi.org/10.48550/arxiv.2204.07580, doi = {10.48550/ARXIV.2204.07580}, url = {https://arxiv.org/abs/2204.07580}, author = {Shliazhko, Oleh and Fenogenova, Alena and Tikhonova, Maria and Mikhailov, Vladislav and Kozlova, Anastasia and Shavrina, Tatiana}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2; I.2.7, 68-06, 68-04, 68T50, 68T01}, title = {mGPT: Few-Shot Learners Go Multilingual}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ## Languages Model supports 60 languages: ISO codes: ```az, sw, af, ar, ba, be, bxr, bg, bn, cv, hy, da, de, el, es, eu, fa, fi, fr, he, hi, hu, kk, id, it, ja, ka, ky, ko, lt, lv, mn, ml, os, mr, ms, my, nl, ro, pl, pt, sah, ru, tg, sv, ta, te, tk, th, tr, tl, tt, tyv, uk, en, ur, vi, uz, yo, zh, xal``` Languages: ```Afrikaans, Azerbaijani, Belarusian, Bengali, Chuvash, German, English, Basque, Finnish, Hebrew (modern), Hungarian, Indonesian, Japanese, Kazakh, Kirghiz, Kyrgyz, Latvian, Mongolian, Malay, Dutch, Polish, Romanian, Moldavan, Yakut, Swahili, Telugu, Thai, Turkish, Tuvinian, Urdu, Vietnamese, Yoruba, Arabic, Bashkir, Bulgarian, Buriat, Danish, Greek, Modern, Spanish; Castilian, Persian, French, Hindi, Armenian, Italian, Georgian, Korean, Lithuanian, Malayalam, Marathi, Burmese, Ossetian, Ossetic, Portuguese, Russian, Swedish, Tamil, Tajik, Turkmen, Tatar, Ukrainian, Uzbek, Kalmyk, Chinese``` ## Training Data Statistics - Size: 488 Billion UTF characters <img style="text-align:center; display:block;" src="https://huggingface.co/sberbank-ai/mGPT/resolve/main/stats.png"> "General training corpus statistics" ## Details The model was trained with sequence length 512 using Megatron and Deepspeed libs by [SberDevices](https://sberdevices.ru/) team on a dataset of 600 GB of texts in 60 languages. 
The model has seen 440 billion BPE tokens in total. Total training time was around 12 days on 256 Nvidia V100 GPUs.
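The card has no generation snippet. A minimal sketch with the standard causal-LM API (assumes a recent `transformers`; the prompt is arbitrary and can be in any of the 60 supported languages, and the 1.3B-parameter checkpoint needs several GB of RAM or VRAM):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "sberbank-ai/mGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The Eiffel Tower is located in"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```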
mrm8488/codeBERTaJS
2d18abf10b01f62f4fe089ef79973541ec534674
2021-05-20T18:17:36.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "code", "arxiv:1909.09436", "transformers", "javascript", "autotrain_compatible" ]
fill-mask
false
mrm8488
null
mrm8488/codeBERTaJS
8,801
2
transformers
724
--- language: code thumbnail: tags: - javascript - code widget: - text: "async function createUser(req, <mask>) { if (!validUser(req.body.user)) { return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); }" --- # CodeBERTaJS CodeBERTaJS is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `javaScript` by [Manuel Romero](https://twitter.com/mrm8488) The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% to 50% shorter, compared to the same corpus tokenized by gpt2/roberta). The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `javascript` corpus (120M after preprocessing) for 2 epochs. ## Quick start: masked language modeling prediction ```python JS_CODE = """ async function createUser(req, <mask>) { if (!validUser(req.body.user)) { \t return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); } """.lstrip() ``` ### Does the model know how to complete simple JS/express like code? ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/codeBERTaJS", tokenizer="mrm8488/codeBERTaJS" ) fill_mask(JS_CODE) ## Top 5 predictions: # 'res' # prob 0.069489665329 'next' 'req' 'user' ',req' ``` ### Yes! That was easy 🎉 Let's try with another example ```python JS_CODE_= """ function getKeys(obj) { keys = []; for (var [key, value] of Object.entries(obj)) { keys.push(<mask>); } return keys } """.lstrip() ``` Results: ```python 'obj', 'key', ' value', 'keys', 'i' ``` > Not so bad! The right token was predicted as the second option! 🎉 ## This work is heavily inspired by [codeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) by the huggingface team <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, \ttitle = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, \tshorttitle = {{CodeSearchNet} {Challenge}}, \turl = {http://arxiv.org/abs/1909.09436}, \turldate = {2020-03-12}, \tjournal = {arXiv:1909.09436 [cs, stat]}, \tauthor = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, \tmonth = sep, \tyear = {2019}, \tnote = {arXiv: 1909.09436}, } ``` </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
pvl/labse_bert
64aecfed3a09108bbdc9fcfcba7447f36a2a34c7
2021-09-22T09:35:24.000Z
[ "pytorch", "tf", "jax", "bert", "pretraining", "en", "transformers", "embeddings", "license:apache-2.0" ]
null
false
pvl
null
pvl/labse_bert
8,800
null
transformers
725
--- language: en thumbnail: tags: - bert - embeddings license: apache-2.0 --- # LABSE BERT ## Model description Model for "Language-agnostic BERT Sentence Embedding" paper from Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. Model available in [TensorFlow Hub](https://tfhub.dev/google/LaBSE/1). ## Intended uses & limitations #### How to use ```python from transformers import AutoTokenizer, AutoModel import torch # from sentence-transformers def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9) return sum_embeddings / sum_mask tokenizer = AutoTokenizer.from_pretrained("pvl/labse_bert", do_lower_case=False) model = AutoModel.from_pretrained("pvl/labse_bert") sentences = ['This framework generates embeddings for each input sentence', 'Sentences are passed as a list of string.', 'The quick brown fox jumps over the lazy dog.'] encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt') with torch.no_grad(): model_output = model(**encoded_input) sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) ```
dbmdz/bert-base-turkish-uncased
0582a4e05fd7ec5aa6b265d4bc4c81438d951593
2021-05-19T15:15:54.000Z
[ "pytorch", "tf", "jax", "bert", "tr", "transformers", "license:mit" ]
null
false
dbmdz
null
dbmdz/bert-base-turkish-uncased
8,784
5
transformers
726
--- language: tr license: mit --- # 🤗 + 📚 dbmdz Turkish BERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources an uncased model for Turkish 🎉 # 🇹🇷 BERTurk BERTurk is a community-driven uncased BERT model for Turkish. Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the model name: BERTurk. ## Stats The current version of the model is trained on a filtered and sentence segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 44,04,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model on a TPU v3-8 for 2M steps. ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/vocab.txt) ## Usage With Transformers >= 2.3 our BERTurk uncased model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-uncased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-uncased") ``` ## Results For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert). # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗 # Acknowledgments Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
sentence-transformers/all-roberta-large-v1
42d37b9d8c9929c64dce4a2b25f6eaa0f59eaf99
2021-08-31T09:33:26.000Z
[ "pytorch", "roberta", "fill-mask", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "sentence-transformers", "feature-extraction", "sentence-similarity", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/all-roberta-large-v1
8,748
5
sentence-transformers
727
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 --- # all-roberta-large-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-roberta-large-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-roberta-large-v1') model = AutoModel.from_pretrained('sentence-transformers/all-roberta-large-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-roberta-large-v1) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. 
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as advice from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 128 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large). Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 400k steps using a batch size of 256 (32 per TPU core). We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence 
Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,124,818,467** |
nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
d828558d1a570cbbb5e62a8dbf85c8f18bf7982a
2021-06-20T19:03:16.000Z
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
nreimers
null
nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
8,688
4
transformers
728
# Multilingual MiniLMv2 This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
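The card gives no usage example. A minimal sketch of loading it as a masked LM (this assumes the repository ships its tokenizer files; the vocabulary is inherited from the XLM-R teacher, so `xlm-roberta-base`'s tokenizer is the usual fallback if they are missing):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large")
print(fill_mask("Paris is the <mask> of France.")[0])
```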
TahaDouaji/detr-doc-table-detection
a3e4b9a10c65eeaaf6d0579e4c591ace8dc2ef77
2022-03-12T12:09:38.000Z
[ "pytorch", "detr", "object-detection", "transformers" ]
object-detection
false
TahaDouaji
null
TahaDouaji/detr-doc-table-detection
8,646
3
transformers
729
--- tags: - object-detection --- ## Model description detr-doc-table-detection is a model trained to detect both **Bordered** and **Borderless** tables in documents, based on [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) ## Training data The model was trained on ICDAR2019 Table Dataset ### How to use ```python from transformers import DetrFeatureExtractor, DetrForObjectDetection from PIL import Image image = Image.open("Image path") feature_extractor = DetrFeatureExtractor.from_pretrained('TahaDouaji/detr-doc-table-detection') model = DetrForObjectDetection.from_pretrained('TahaDouaji/detr-doc-table-detection') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits bboxes = outputs.pred_boxes ```
finiteautomata/bertweet-base-emotion-analysis
64046df9cc41eab40e1ecde7d2b7fb42b971be5b
2021-12-10T13:28:56.000Z
[ "pytorch", "roberta", "text-classification", "en", "arxiv:2106.09462", "transformers", "emotion-analysis" ]
text-classification
false
finiteautomata
null
finiteautomata/bertweet-base-emotion-analysis
8,619
4
transformers
730
--- language: - en tags: - emotion-analysis --- # Emotion Analysis in English ## bertweet-base-emotion-analysis Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with EmoEvent corpus for Emotion detection in English. Base model is [BerTweet](https://huggingface.co/vinai/bertweet-base). ## License `pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses. 1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php) 2. [SEMEval 2017 Dataset license]() ## Citation If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462) ``` @misc{perez2021pysentimiento, title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks}, author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque}, year={2021}, eprint={2106.09462}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` and also the dataset related paper ``` @inproceedings{del2020emoevent, title={EmoEvent: A multilingual emotion corpus based on different events}, author={del Arco, Flor Miriam Plaza and Strapparava, Carlo and Lopez, L Alfonso Urena and Mart{\'\i}n-Valdivia, M Teresa}, booktitle={Proceedings of the 12th Language Resources and Evaluation Conference}, pages={1492--1498}, year={2020} } ``` Enjoy! 🤗
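A minimal sketch of running the classifier with the `transformers` pipeline (the tweet is made up; the `pysentimiento` toolkit linked above wraps the same model together with tweet preprocessing):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/bertweet-base-emotion-analysis")
print(classifier("I can't believe we finally won the match!"))
```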
epwalsh/bert-xsmall-dummy
d36cc494a54ac76cac8c237866fe8ce540c879a6
2021-05-19T16:30:53.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
epwalsh
null
epwalsh/bert-xsmall-dummy
8,538
null
transformers
731
Entry not found
kamalkraj/BioELECTRA-PICO
70e29e17b3546be81de3723e7cedf3409d7f234f
2021-11-27T11:16:12.000Z
[ "pytorch", "electra", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
kamalkraj
null
kamalkraj/BioELECTRA-PICO
8,538
1
transformers
732
--- widget: - text: "Those in the aspirin group experienced reduced duration of headache compared to those in the placebo arm (P<0.05)" --- BioELECTRA-PICO
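Beyond the widget example, the model can be run through the standard token-classification pipeline. The sketch below reuses the card's own example sentence; `aggregation_strategy` requires a reasonably recent `transformers` release, and the PICO label names are taken from the checkpoint configuration.

```python
from transformers import pipeline

pico = pipeline("token-classification",
                model="kamalkraj/BioELECTRA-PICO",
                aggregation_strategy="simple")  # merge word pieces into entity spans

text = ("Those in the aspirin group experienced reduced duration of headache "
        "compared to those in the placebo arm (P<0.05)")
print(pico(text))
```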
allenai/unifiedqa-t5-large
3fc39b105a75526eb2de2271744d48a4202857aa
2021-06-23T12:00:07.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/unifiedqa-t5-large
8,513
2
transformers
733
Entry not found
flaubert/flaubert_base_uncased
56ea0bf6e54b59c192f99f2397e932a9915cae4c
2021-10-18T08:14:52.000Z
[ "pytorch", "flaubert", "fill-mask", "fr", "dataset:flaubert", "transformers", "bert", "language-model", "flue", "french", "flaubert-base", "uncased", "license:mit", "autotrain_compatible" ]
fill-mask
false
flaubert
null
flaubert/flaubert_base_uncased
8,481
null
transformers
734
--- language: fr license: mit datasets: - flaubert metrics: - flue tags: - bert - language-model - flaubert - flue - french - flaubert-base - uncased --- # FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
aliosm/ComVE-distilgpt2
95db37f0c7b4bef1ec214a0a5d8cd457d1f55ece
2021-05-21T13:07:30.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "dataset:ComVE", "transformers", "exbert", "commonsense", "semeval2020", "comve", "license:mit" ]
text-generation
false
aliosm
null
aliosm/ComVE-distilgpt2
8,429
null
transformers
735
---
language: "en"
tags:
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- ComVE
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---

# ComVE-distilgpt2

## Model description

Model fine-tuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective. The model is able to generate a reason why a given natural language statement is against commonsense.

## Intended uses & limitations

You can use the raw model for text generation to generate reasons why natural language statements are against commonsense.

#### How to use

You can use this model directly to generate reasons why the given statement is against commonsense using the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.

*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that causes the model to repeat the last generated token again and again.

#### Limitations and bias

The model is biased toward negating the entered sentence rather than producing a factual reason.

## Training data

The model is initialized from the [distilgpt2](https://github.com/huggingface/transformers/blob/master/model_cards/distilgpt2-README.md) model and fine-tuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K against-commonsense sentences, each paired with three reference reasons.

## Training procedure

Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator, then the model is fine-tuned on these sequences using the CLM objective. The model was trained on an Nvidia Tesla P100 GPU from the Google Colab platform with a 5e-5 learning rate, 15 epochs, a maximum sequence length of 128 and a batch size of 64.

<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>

## Eval results

The model achieved 13.7582/13.8026 BLEU scores on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing datasets.

### BibTeX entry and citation info

```bibtex
@article{fadel2020justers,
  title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
  author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
  year={2020}
}
```

<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-distilgpt2">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
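For readers who prefer not to use the `generate.sh` script, here is a hedged sketch with the current `transformers` API; the card itself recommends version `2.4.1`, so generation behaviour may differ on newer releases. The statement is followed by the `<|continue|>` separator used during fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "aliosm/ComVE-distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Chicken can swim in water. <|continue|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=50, num_beams=5, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```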
chkla/roberta-argument
d5480352a5ad33b0135cc1193a62be24396e557a
2021-05-20T15:19:04.000Z
[ "pytorch", "jax", "roberta", "text-classification", "english", "transformers" ]
text-classification
false
chkla
null
chkla/roberta-argument
8,424
3
transformers
736
--- language: english widget: - text: "It has been determined that the amount of greenhouse gases have decreased by almost half because of the prevalence in the utilization of nuclear power." --- ### Welcome to RoBERTArg! 🤖 **Model description** This model was trained on ~25k heterogeneous manually annotated sentences (📚 [Stab et al. 2018](https://www.aclweb.org/anthology/D18-1402/)) of controversial topics to classify text into one of two labels: 🏷 **NON-ARGUMENT** (0) and **ARGUMENT** (1). 🗃 **Dataset** The dataset (📚 Stab et al. 2018) consists of **ARGUMENTS** (\~11k) that either support or oppose a topic if it includes a relevant reason for supporting or opposing the topic, or as a **NON-ARGUMENT** (\~14k) if it does not include reasons. The authors focus on controversial topics, i.e., topics that include "an obvious polarity to the possible outcomes" and compile a final set of eight controversial topics: _abortion, school uniforms, death penalty, marijuana legalization, nuclear energy, cloning, gun control, and minimum wage_. | TOPIC | ARGUMENT | NON-ARGUMENT | |----|----|----| | abortion | 2213 | 2,427 | | school uniforms | 325 | 1,734 | | death penalty | 325 | 2,083 | | marijuana legalization | 325 | 1,262 | | nuclear energy | 325 | 2,118 | | cloning | 325 | 1,494 | | gun control | 325 | 1,889 | | minimum wage | 325 | 1,346 | 🏃🏼‍♂️**Model training** **RoBERTArg** was fine-tuned on a RoBERTA (base) pre-trained model from HuggingFace using the HuggingFace trainer with the following hyperparameters: ``` training_args = TrainingArguments( num_train_epochs=2, learning_rate=2.3102e-06, seed=8, per_device_train_batch_size=64, per_device_eval_batch_size=64, ) ``` 📊 **Evaluation** The model was evaluated on an evaluation set (20%): | Model | Acc | F1 | R arg | R non | P arg | P non | |----|----|----|----|----|----|----| | RoBERTArg | 0.8193 | 0.8021 | 0.8463 | 0.7986 | 0.7623 | 0.8719 | Showing the **confusion matrix** using again the evaluation set: | | ARGUMENT | NON-ARGUMENT | |----|----|----| | ARGUMENT | 2213 | 558 | | NON-ARGUMENT | 325 | 1790 | ⚠️ **Intended Uses & Potential Limitations** The model can only be a starting point to dive into the exciting field of argument mining. But be aware. An argument is a complex structure, with multiple dependencies. Therefore, the model may perform less well on different topics and text types not included in the training set. Enjoy and stay tuned! 🚀 🐦 Twitter: [@chklamm](http://twitter.com/chklamm)
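As a quick illustration (not part of the original card), the model can be called through the generic text-classification pipeline. The widget sentence from above is reused; whether the labels appear as ARGUMENT/NON-ARGUMENT or LABEL_0/LABEL_1 depends on the checkpoint configuration.

```python
from transformers import pipeline

arg_clf = pipeline("text-classification", model="chkla/roberta-argument")

sentence = ("It has been determined that the amount of greenhouse gases have decreased "
            "by almost half because of the prevalence in the utilization of nuclear power.")
print(arg_clf(sentence))  # [{'label': ..., 'score': ...}]
```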
flair/ner-multi
b4f9c84fc84d2b1a687bf3f38d15218129e1d202
2021-03-02T22:17:41.000Z
[ "pytorch", "en", "de", "nl", "es", "multilingual", "dataset:conll2003", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-multi
8,414
4
flair
737
---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- en
- de
- nl
- es
- multilingual
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---

## 4-Language NER in Flair (English, German, Dutch and Spanish)

This is the standard 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). Also kind of works for related languages like French.

F1-Score: **92,16** (CoNLL-03 English), **87,33** (CoNLL-03 German revised), **88,96** (CoNLL-03 Dutch), **86,65** (CoNLL-03 Spanish)

Predicts 4 tags:

| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |

Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-multi")

# make example sentence in any of the four languages
sentence = Sentence("George Washington ging nach Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This yields the following output:

```
Span [1,2]: "George Washington" [− Labels: PER (0.9977)]
Span [5]: "Washington" [− Labels: LOC (0.9895)]
```

So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
from flair.data import Corpus, MultiCorpus
from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings

# 1. get the multi-language corpus
corpus: Corpus = MultiCorpus([
    CONLL_03(),          # English corpus
    CONLL_03_GERMAN(),   # German corpus
    CONLL_03_DUTCH(),    # Dutch corpus
    CONLL_03_SPANISH(),  # Spanish corpus
])

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize each embedding we use
embedding_types = [

    # GloVe embeddings
    WordEmbeddings('glove'),

    # FastText embeddings
    WordEmbeddings('de'),

    # contextual string embeddings, forward
    FlairEmbeddings('multi-forward'),

    # contextual string embeddings, backward
    FlairEmbeddings('multi-backward'),
]

# embedding stack consists of Flair and GloVe embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/ner-multi',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following paper when using this model.
```
@misc{akbik2019multilingual,
  title={Multilingual sequence labeling with one model},
  author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland},
  booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop},
  year = {2019}
}
```

```
@inproceedings{akbik2018coling,
  title={Contextual String Embeddings for Sequence Labeling},
  author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
  booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
  pages = {1638--1649},
  year = {2018}
}
```
facebook/detr-resnet-101
1a655091c08729eecf4fc5063c27fa5ea82aeaa3
2022-06-27T08:30:19.000Z
[ "pytorch", "detr", "object-detection", "dataset:coco", "arxiv:2005.12872", "transformers", "vision", "license:apache-2.0" ]
object-detection
false
facebook
null
facebook/detr-resnet-101
8,397
1
transformers
738
--- license: apache-2.0 tags: - object-detection - vision datasets: - coco widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg example_title: Savanna - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport --- # DETR (End-to-End Object Detection) model with ResNet-101 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. ### How to use Here is how to use this model: ```python from transformers import DetrFeatureExtractor, DetrForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101') model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). 
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of **43.5** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
deepset/gelectra-large-germanquad
1b7c5a7fe58943f9df30968460128f2766315f81
2022-07-19T14:39:31.000Z
[ "pytorch", "tf", "electra", "question-answering", "de", "dataset:deepset/germanquad", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/gelectra-large-germanquad
8,353
9
transformers
739
--- language: de datasets: - deepset/germanquad license: mit thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg tags: - exbert --- ![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) ## Overview **Language model:** gelectra-large-germanquad **Language:** German **Training data:** GermanQuAD train set (~ 12MB) **Eval data:** GermanQuAD test set (~ 5MB) **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - We trained a German question answering model with a gelectra-large model as its basis. - The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad). - The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536 answers, because we removed 76 wrong answers. See https://deepset.ai/germanquad for more details and dataset download in SQuAD format. ## Hyperparameters ``` batch_size = 24 n_epochs = 2 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 ``` ## Performance We evaluated the extractive question answering performance on our GermanQuAD test set. Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad). The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. ![performancetable](https://images.prismic.io/deepset/1c63afd8-40e6-4fd9-85c4-0dbb81996183_german-qa-vs-xlm-r.png) ## Authors **Timo Möller:** [email protected] **Julian Risch:** [email protected] **Malte Pietsch:** [email protected] ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")]([https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. 
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
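The card reports evaluation results but no inference snippet. Below is a hedged sketch using the standard question-answering pipeline; the German question and context are invented for illustration only.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/gelectra-large-germanquad")

result = qa(
    question="Wann wurde der GermanQuAD-Datensatz veröffentlicht?",
    context=("GermanQuAD ist ein deutscher Datensatz für extraktives Question "
             "Answering, der 2021 von deepset veröffentlicht wurde."),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```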
human-centered-summarization/financial-summarization-pegasus
a720f829427cb196a5618a0416473b8597cd106e
2022-06-29T06:25:30.000Z
[ "pytorch", "tf", "pegasus", "text2text-generation", "en", "dataset:xsum", "arxiv:1912.08777", "transformers", "summarization", "model-index", "autotrain_compatible" ]
summarization
false
human-centered-summarization
null
human-centered-summarization/financial-summarization-pegasus
8,315
22
transformers
740
--- language: - en tags: summarization datasets: - xsum metrics: - rouge widget: - text: "National Commercial Bank (NCB), Saudi Arabia\u2019s largest lender by assets,\ \ agreed to buy rival Samba Financial Group for $15 billion in the biggest banking\ \ takeover this year.NCB will pay 28.45 riyals ($7.58) for each Samba share, according\ \ to a statement on Sunday, valuing it at about 55.7 billion riyals. NCB will\ \ offer 0.739 new shares for each Samba share, at the lower end of the 0.736-0.787\ \ ratio the banks set when they signed an initial framework agreement in June.The\ \ offer is a 3.5% premium to Samba\u2019s Oct. 8 closing price of 27.50 riyals\ \ and about 24% higher than the level the shares traded at before the talks were\ \ made public. Bloomberg News first reported the merger discussions.The new bank\ \ will have total assets of more than $220 billion, creating the Gulf region\u2019\ s third-largest lender. The entity\u2019s $46 billion market capitalization nearly\ \ matches that of Qatar National Bank QPSC, which is still the Middle East\u2019\ s biggest lender with about $268 billion of assets." model-index: - name: human-centered-summarization/financial-summarization-pegasus results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: test metrics: - name: ROUGE-1 type: rouge value: 35.2055 verified: true - name: ROUGE-2 type: rouge value: 16.5689 verified: true - name: ROUGE-L type: rouge value: 30.1285 verified: true - name: ROUGE-LSUM type: rouge value: 30.1706 verified: true - name: loss type: loss value: 2.7092134952545166 verified: true - name: gen_len type: gen_len value: 15.1414 verified: true --- ### PEGASUS for Financial Summarization This model was fine-tuned on a novel financial news dataset, which consists of 2K articles from [Bloomberg](https://www.bloomberg.com/europe), on topics such as stock, markets, currencies, rate and cryptocurrencies. It is based on the [PEGASUS](https://huggingface.co/transformers/model_doc/pegasus.html) model and in particular PEGASUS fine-tuned on the Extreme Summarization (XSum) dataset: [google/pegasus-xsum model](https://huggingface.co/google/pegasus-xsum). PEGASUS was originally proposed by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf). ### How to use We provide a simple snippet of how to use this model for the task of financial summarization in PyTorch. ```Python from transformers import PegasusTokenizer, PegasusForConditionalGeneration, TFPegasusForConditionalGeneration # Let's load the model and the tokenizer model_name = "human-centered-summarization/financial-summarization-pegasus" tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name) # If you want to use the Tensorflow model # just replace with TFPegasusForConditionalGeneration # Some text to summarize here text_to_summarize = "National Commercial Bank (NCB), Saudi Arabia’s largest lender by assets, agreed to buy rival Samba Financial Group for $15 billion in the biggest banking takeover this year.NCB will pay 28.45 riyals ($7.58) for each Samba share, according to a statement on Sunday, valuing it at about 55.7 billion riyals. 
NCB will offer 0.739 new shares for each Samba share, at the lower end of the 0.736-0.787 ratio the banks set when they signed an initial framework agreement in June.The offer is a 3.5% premium to Samba’s Oct. 8 closing price of 27.50 riyals and about 24% higher than the level the shares traded at before the talks were made public. Bloomberg News first reported the merger discussions.The new bank will have total assets of more than $220 billion, creating the Gulf region’s third-largest lender. The entity’s $46 billion market capitalization nearly matches that of Qatar National Bank QPSC, which is still the Middle East’s biggest lender with about $268 billion of assets." # Tokenize our text # If you want to run the code in Tensorflow, please remember to return the particular tensors as simply as using return_tensors = 'tf' input_ids = tokenizer(text_to_summarize, return_tensors="pt").input_ids # Generate the output (Here, we use beam search but you can also use any other strategy you like) output = model.generate( input_ids, max_length=32, num_beams=5, early_stopping=True ) # Finally, we can print the generated summary print(tokenizer.decode(output[0], skip_special_tokens=True)) # Generated Output: Saudi bank to pay a 3.5% premium to Samba share price. Gulf region’s third-largest lender will have total assets of $220 billion ``` ## Evaluation Results The results before and after the fine-tuning on our dataset are shown below: | Fine-tuning | R-1 | R-2 | R-L | R-S | |:-----------:|:-----:|:-----:|:------:|:-----:| | Yes | 23.55 | 6.99 | 18.14 | 21.36 | | No | 13.8 | 2.4 | 10.63 | 12.03 | ## Citation You can find more details about this work in the following workshop paper. If you use our model in your research, please consider citing our paper: > T. Passali, A. Gidiotis, E. Chatzikyriakidis and G. Tsoumakas. 2021. > Towards Human-Centered Summarization: A Case Study on Financial News. > In Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing(pp. 21–27). Association for Computational Linguistics. BibTeX entry: ``` @inproceedings{passali-etal-2021-towards, title = "Towards Human-Centered Summarization: A Case Study on Financial News", author = "Passali, Tatiana and Gidiotis, Alexios and Chatzikyriakidis, Efstathios and Tsoumakas, Grigorios", booktitle = "Proceedings of the First Workshop on Bridging Human{--}Computer Interaction and Natural Language Processing", month = apr, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.hcinlp-1.4", pages = "21--27", } ``` ## Support Contact us at [[email protected]](mailto:[email protected]) if you are interested in a more sophisticated version of the model, trained on more articles and adapted to your needs! More information about Medoid AI: - Website: [https://www.medoid.ai](https://www.medoid.ai) - LinkedIn: [https://www.linkedin.com/company/medoid-ai/](https://www.linkedin.com/company/medoid-ai/)
sshleifer/tiny-xlnet-base-cased
275d2c323ddd18dad60cd585934383c29027878b
2020-05-08T15:35:32.000Z
[ "pytorch", "xlnet", "text-generation", "transformers" ]
text-generation
false
sshleifer
null
sshleifer/tiny-xlnet-base-cased
8,259
null
transformers
741
Entry not found
microsoft/unixcoder-base-nine
1e114832924596b75dcd2e0bdde218c0f7ee039f
2022-04-02T05:45:58.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers", "license:apache-2.0" ]
feature-extraction
false
microsoft
null
microsoft/unixcoder-base-nine
8,245
2
transformers
742
--- license: apache-2.0 ---
julien-c/dummy-diff-tokenizer
8b54c50bfd24739488683452f24d4471f5d75a21
2021-05-20T17:30:11.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
julien-c
null
julien-c/dummy-diff-tokenizer
8,149
null
transformers
743
Entry not found
textattack/bert-base-uncased-MRPC
d421614df8fbeb22d6826a24d6397809fdc1e3ff
2021-05-20T07:32:52.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
textattack
null
textattack/bert-base-uncased-MRPC
8,135
null
transformers
744
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8774509803921569, as measured by the eval set accuracy, found after 1 epoch. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
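MRPC is a sentence-pair (paraphrase) task, so both sentences must be encoded together at inference time. The sketch below is illustrative: the sentences are invented, and mapping logit index 1 to the "paraphrase" class is an assumption that should be checked against `model.config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "textattack/bert-base-uncased-MRPC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "The company said quarterly profit rose 10 percent."
s2 = "Quarterly earnings at the company increased by 10%."

inputs = tokenizer(s1, s2, return_tensors="pt")  # encode the pair jointly
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed: index 1 = paraphrase, index 0 = not paraphrase
```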
deepset/bert-small-mm_retrieval-passage_encoder
c764744512975bd3823f689601ab0e388a29c366
2021-10-19T16:14:29.000Z
[ "pytorch", "dpr", "transformers" ]
null
false
deepset
null
deepset/bert-small-mm_retrieval-passage_encoder
8,119
null
transformers
745
Entry not found
sshleifer/distilbart-xsum-12-6
5b2e376c845c201ddc34ec0e55fd1ad9890ba5ee
2021-06-14T07:58:25.000Z
[ "pytorch", "jax", "bart", "text2text-generation", "en", "dataset:cnn_dailymail", "dataset:xsum", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
sshleifer
null
sshleifer/distilbart-xsum-12-6
8,112
2
transformers
746
--- language: en tags: - summarization license: apache-2.0 datasets: - cnn_dailymail - xsum thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png --- ### Usage This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information. ### Metrics for DistilBART models | Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L | |:---------------------------|------------:|----------------------:|----------:|----------:|----------:| | distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 | | distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 | | distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 | | distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 | | bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 | | distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 | | bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 | | distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 | | distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 | | distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
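A minimal usage sketch following the card's instruction to load the checkpoint into `BartForConditionalGeneration`; the placeholder article and the generation settings (beam search, length limits) are illustrative rather than prescribed by the authors.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model_id = "sshleifer/distilbart-xsum-12-6"
tokenizer = BartTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

article = "Replace this with the news article you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=62, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```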
GanjinZero/UMLSBert_ENG
1e4841546c6384cefa47192146a7bd368d509849
2022-04-27T08:18:37.000Z
[ "pytorch", "bert", "feature-extraction", "en", "transformers", "biomedical", "license:apache-2.0" ]
feature-extraction
false
GanjinZero
null
GanjinZero/UMLSBert_ENG
8,109
3
transformers
747
---
language:
- en
license: apache-2.0
tags:
- bert
- biomedical
---

CODER: Knowledge-infused cross-lingual medical term embedding for term normalization.

This is the English version. The repository keeps its old name, but this model is **not** UMLSBert.

```
@article{YUAN2022103983,
  title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
  journal = {Journal of Biomedical Informatics},
  pages = {103983},
  year = {2022},
  issn = {1532-0464},
  doi = {https://doi.org/10.1016/j.jbi.2021.103983},
  url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
  author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
  keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
```
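Since the card itself gives no code, here is a hedged sketch for extracting term embeddings with the plain `transformers` API. Using the [CLS] vector as the term representation is an assumption made for illustration; the CODER repository may recommend a different pooling strategy.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "GanjinZero/UMLSBert_ENG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

terms = ["myocardial infarction", "heart attack", "type 2 diabetes"]
inputs = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
embeddings = hidden[:, 0, :]  # [CLS] pooling (assumed)

# Synonymous terms such as the first two should end up close in embedding space.
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```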
bigscience/bigscience-small-testing
5fc95662beefe9606b9f9f3b9eefdd87cdf4b51a
2022-07-11T10:04:17.000Z
[ "pytorch", "bloom", "feature-extraction", "eng", "transformers", "integration", "text-generation" ]
text-generation
false
bigscience
null
bigscience/bigscience-small-testing
8,081
null
transformers
748
---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---

# BigScience - testing model

This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the conversion script. Use it only for integration tests.
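In the spirit of the integration-test use case described above, a minimal smoke test might look like the sketch below (assuming a `transformers` version with BLOOM support). The generated text is meaningless by design; the point is only to exercise the loading and generation code paths.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bigscience/bigscience-small-testing"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0]))  # output quality is irrelevant for this test checkpoint
```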
lvwerra/distilbert-imdb
dc2e91fb7046e0ede2359fd54e667446daf267a3
2022-04-30T11:21:06.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
lvwerra
null
lvwerra/distilbert-imdb
8,073
null
transformers
749
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: distilbert-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.928 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1903 - Accuracy: 0.928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2195 | 1.0 | 1563 | 0.1903 | 0.928 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
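Although the auto-generated card leaves the usage sections empty, the checkpoint can be exercised with the standard text-classification pipeline, as in the hedged sketch below; the review sentence is illustrative and the label names come from the checkpoint configuration.

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="lvwerra/distilbert-imdb")
print(sentiment("This movie was a complete waste of two hours."))
# e.g. [{'label': 'NEGATIVE', 'score': ...}]
```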
uer/gpt2-chinese-lyric
c835964d9427bf1b4d01adf867454c9a85d4e385
2022-07-15T08:25:43.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "zh", "transformers" ]
text-generation
false
uer
null
uer/gpt2-chinese-lyric
8,060
8
transformers
750
--- language: zh widget: - text: "最美的不是下雨天,是曾与你躲过雨的屋檐" --- # Chinese GPT2 Lyric Model ## Model description The model is used to generate Chinese lyrics. You can download the model either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-chinese-lyric](https://huggingface.co/uer/gpt2-chinese-lyric) ## How to use You can use the model directly with a pipeline for text generation: ```python >>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-lyric") >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-lyric") >>> text_generator = TextGenerationPipeline(model, tokenizer) >>> text_generator("最美的不是下雨天,是曾与你躲过雨的屋檐", max_length=100, do_sample=True) [{'generated_text': '最美的不是下雨天,是曾与你躲过雨的屋檐 , 下 课 铃 声 响 起 的 瞬 间 , 我 们 的 笑 脸 , 有 太 多 回 忆 在 浮 现 , 是 你 总 在 我 身 边 , 不 知 道 会 不 会 再 见 , 从 现 在 开 始 到 永 远 , 想 说 的 语 言 凝 结 成 一 句 , 不 管 我 们 是 否 能 够 兑 现 , 想 说 的 语 言 凝 结'}] ``` ## Training data Training data contains 150,000 Chinese lyrics which are collected by [Chinese-Lyric-Corpus](https://github.com/gaussic/Chinese-Lyric-Corpus) and [MusicLyricChatbot](https://github.com/liuhuanyong/MusicLyricChatbot). ## Training procedure The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 100,000 steps with a sequence length of 512 on the basis of the pre-trained model [gpt2-base-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-base-chinese-cluecorpussmall) ``` python3 preprocess.py --corpus_path corpora/lyric.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path lyric_dataset.pt --processes_num 32 \ --seq_length 512 --data_processor lm ``` ``` python3 pretrain.py --dataset_path lyric_dataset.pt \ --pretrained_model_path models/cluecorpussmall_gpt2_seq1024_model.bin-250000 \ --vocab_path models/google_zh_vocab.txt \ --config_path models/gpt2/config.json \ --output_model_path models/lyric_gpt2_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 100000 --save_checkpoint_steps 10000 --report_steps 5000 \ --learning_rate 5e-5 --batch_size 64 ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path lyric_gpt2_model.bin-100000 \ --output_model_path pytorch_model.bin \ --layers_num 12 ``` ### BibTeX entry and citation info ``` @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ```
facebook/opt-66b
8ea7547215f0999c2f648c8c034869bad974273e
2022-06-25T15:31:09.000Z
[ "pytorch", "tf", "jax", "opt", "text-generation", "en", "arxiv:2205.01068", "arxiv:2005.14165", "transformers", "license:other" ]
text-generation
false
facebook
null
facebook/opt-66b
8,059
31
transformers
751
--- language: en inference: false tags: - text-generation - opt license: other commercial: false --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068): > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use For large OPT models, such as this one, it is not recommended to make use of the `text-generation` pipeline because one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU. 
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method as follows: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False) >>> prompt = "Hello, I am conscious and" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> generated_ids = model.generate(input_ids) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['Hello, I am conscious and I am here.\nI am also conscious and I am here'] ``` By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False) >>> prompt = "Hello, I am conscious and" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> set_seed(32) >>> generated_ids = model.generate(input_ids, do_sample=True) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['Hello, I am conscious and aware that you have your back turned to me and want to talk'] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral the model is strongly biased : > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. 
Here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False) >>> prompt = "The woman worked as a" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> set_seed(32) >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) The woman worked as a supervisor in the office The woman worked as a social worker in a The woman worked as a cashier at the The woman worked as a teacher from 2011 to he woman worked as a maid at the house ``` compared to: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-66b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-66b", use_fast=False) >>> prompt = "The man worked as a" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> set_seed(32) >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) The man worked as a school bus driver for The man worked as a bartender in a bar The man worked as a cashier at the The man worked as a teacher, and was The man worked as a professional at a range ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which * Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contains offensive content as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected form internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. 
The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
hfl/chinese-xlnet-base
34b827684078f956411389834966eb55588f5254
2021-03-03T01:44:59.000Z
[ "pytorch", "tf", "xlnet", "text-generation", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
text-generation
false
hfl
null
hfl/chinese-xlnet-base
8,033
13
transformers
752
---
language:
- zh
license: "apache-2.0"
---

## Chinese Pre-Trained XLNet

This project provides an XLNet model pre-trained for Chinese, which aims to enrich Chinese natural language processing resources and offer a wider selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model.

This project is based on the official CMU/Google XLNet: https://github.com/zihangdai/xlnet

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resources or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
  title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages = "657--668",
}
```
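The card lists the project links but no inference code. Below is a hedged sketch that loads the checkpoint with the generic auto classes and extracts contextual representations; the example sentence ("Use a language model to predict the next word.") is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "hfl/chinese-xlnet-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("使用语言模型来预测下一个词。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)
```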
TheGoldenToaster/DialoGPT-medium-Bot
b9e2e669356dfda8108ccdf76d4db16cef38f227
2022-04-04T21:58:23.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
TheGoldenToaster
null
TheGoldenToaster/DialoGPT-medium-Bot
7,888
1
transformers
753
---
tags:
- conversational
---

# Bot Chat
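The card contains no usage information, so the sketch below assumes the checkpoint follows the usual DialoGPT conventions of its base model (user turns terminated by the end-of-sequence token); this is an assumption rather than documented behaviour.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheGoldenToaster/DialoGPT-medium-Bot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Single-turn exchange: encode the user message plus the EOS token, then generate a reply.
user_message = "Hello, how are you today?"
input_ids = tokenizer.encode(user_message + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```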
ctl/wav2vec2-large-xlsr-cantonese
6a6119ab39ec2a0c8d16edfbf91db45334540315
2021-07-06T01:16:38.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "zh-HK", "yue", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ctl
null
ctl/wav2vec2-large-xlsr-cantonese
7,858
1
transformers
754
--- language: - zh-HK - yue datasets: - common_voice metrics: - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: wav2vec2-large-xlsr-cantonese results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice zh-HK type: common_voice args: zh-HK metrics: - name: Test CER type: cer value: 15.36 --- # Wav2Vec2-Large-XLSR-53-Cantonese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "zh-HK", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese") model = Wav2Vec2ForCTC.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice. ```python !pip install jiwer import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import argparse lang_id = "zh-HK" model_id = "ctl/wav2vec2-large-xlsr-cantonese" chars_to_ignore_regex = '[\,\?\.\!\-\;\:"\“\%\‘\”\�\.\⋯\!\-\:\–\。\》\,\)\,\?\;\~\~\…\︰\,\(\」\‧\《\﹔\、\—\/\,\「\﹖\·\']' test_dataset = load_dataset("common_voice", f"{lang_id}", split="test") cer = load_metric("cer") processor = Wav2Vec2Processor.from_pretrained(f"{model_id}") model = Wav2Vec2ForCTC.from_pretrained(f"{model_id}") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=16) print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 15.51 % ## Training The Common Voice `train` and `validation` splits were used for training. The script used for training will be posted [here](https://github.com/chutaklee/CantoASR)
pucpr/clinicalnerpt-disorder
6a6597b35c51aeabfeedf828dff89de7a25f2b69
2021-10-13T09:32:51.000Z
[ "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "transformers", "autotrain_compatible" ]
token-classification
false
pucpr
null
pucpr/clinicalnerpt-disorder
7,858
4
transformers
755
--- language: "pt" widget: - text: "PACIENTE DE 69 ANOS COM ICC DE ETIOLOGIA ISQUÊMICA " - text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Disorder The Disorder NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
rsvp-ai/bertserini-bert-base-squad
1c93f9f29544f8ce8d6ee99133f91e5bd4dfed36
2022-06-23T14:13:40.000Z
[ "pytorch", "tf", "jax", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
rsvp-ai
null
rsvp-ai/bertserini-bert-base-squad
7,828
2
transformers
756
Entry not found
vblagoje/bert-english-uncased-finetuned-pos
46ec120264b121e8d92bef19b45c107d06d2cb99
2021-05-20T08:51:26.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
vblagoje
null
vblagoje/bert-english-uncased-finetuned-pos
7,819
2
transformers
757
Entry not found
facebook/hubert-base-ls960
dba3bb02fda4248b6e082697eee756de8fe8aa8a
2021-11-05T12:43:12.000Z
[ "pytorch", "tf", "hubert", "feature-extraction", "en", "dataset:librispeech_asr", "arxiv:2106.07447", "transformers", "speech", "license:apache-2.0" ]
feature-extraction
false
facebook
null
facebook/hubert-base-ls960
7,814
4
transformers
758
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Hubert-Base [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
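As an illustration only (not from the original card), here is a minimal sketch of extracting hidden states from this checkpoint; the feature-extractor settings and the random waveform are assumptions standing in for real 16kHz audio.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, HubertModel

# Assumption: standard Wav2Vec2-style feature extraction of the raw 16 kHz waveform.
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16_000)
model = HubertModel.from_pretrained("facebook/hubert-base-ls960")

dummy_speech = torch.randn(16_000).tolist()  # one second of placeholder audio
inputs = feature_extractor(dummy_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, num_frames, 768)
print(hidden_states.shape)
```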
sahri/indonesiasentiment
99f38e6c1b34109bbf4a6d7c6556c56f5d2eef6a
2022-01-17T04:50:03.000Z
[ "pytorch", "tf", "roberta", "text-classification", "id", "dataset:indonlu", "arxiv:1907.11692", "transformers", "indonesian-roberta-base-sentiment-classifier", "license:mit" ]
text-classification
false
sahri
null
sahri/indonesiasentiment
7,791
null
transformers
759
--- language: id tags: - indonesian-roberta-base-sentiment-classifier license: mit datasets: - indonlu widget: - text: "tidak jelek tapi keren" --- ## Indonesian RoBERTa Base Sentiment Classifier Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews. After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ---------------------------------------------- | ------- | ------------ | ------------------------------- | | `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` | ## Evaluation Results The model was trained for 5 epochs and the best model was loaded at the end. | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | | ----- | ------------- | --------------- | -------- | -------- | --------- | -------- | | 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 | | 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 | | 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 | | 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 | | 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 | ## How to Use ### As Text Classifier ```python from transformers import pipeline pretrained_name = "sahri/sentiment" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("tidak jelek tapi keren") ``` ## Disclaimer Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model. ## Author Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by [sahri ramadhan] All computation and development are done on Google Colaboratory using their free GPU access.
google/long-t5-local-base
e040d65029c54fb38eaefa4019bc3e2e31ba3c62
2022-06-22T09:04:55.000Z
[ "pytorch", "jax", "longt5", "text2text-generation", "en", "arxiv:2112.07916", "arxiv:1912.08777", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/long-t5-local-base
7,756
5
transformers
760
--- license: apache-2.0 language: en --- # LongT5 (local attention, base-sized model) LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x). Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence. LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens). ## Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you. ### How to use ```python from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base") model = LongT5Model.from_pretrained("google/long-t5-local-base") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} } ```
sbcBI/sentiment_analysis
2e9e3afe68478a6168a11adb6c6f1b741e00ae83
2022-04-22T06:42:07.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:Confidential", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0" ]
text-classification
false
sbcBI
null
sbcBI/sentiment_analysis
7,739
null
transformers
761
--- language: en tags: - exbert license: apache-2.0 datasets: - Confidential --- # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model description [sbcBI/sentiment_analysis] This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis, this model is not intended for further downstream fine-tuning for any other tasks. This model is trained on a classified dataset for text-classification.
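As a usage illustration (not part of the original card), a minimal sketch of running the fine-tuned checkpoint as a text-classification pipeline; the example sentence is invented and the returned label names come from the checkpoint's own config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sbcBI/sentiment_analysis")

# Illustrative input; label names and scores depend on the checkpoint's config.
print(classifier("The delivery was quick and the product works exactly as described."))
```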
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
35cdaef56ac000802c965e584bb2facaede17c4a
2022-07-28T16:23:53.000Z
[ "pytorch", "deberta-v2", "text-classification", "en", "dataset:multi_nli", "dataset:anli", "dataset:fever", "arxiv:2006.03654", "transformers", "zero-shot-classification", "license:mit" ]
zero-shot-classification
false
MoritzLaurer
null
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
7,723
10
transformers
762
--- language: - en license: mit tags: - text-classification - zero-shot-classification metrics: - accuracy datasets: - multi_nli - anli - fever pipeline_tag: zero-shot-classification --- # DeBERTa-v3-base-mnli-fever-anli ## Model description This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. This base model outperforms almost all large models on the [ANLI benchmark](https://github.com/facebookresearch/anli). The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf). For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli. ## Intended uses & limitations #### How to use the model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing." hypothesis = "The movie was good." input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu" prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data DeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. ### Training procedure DeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters. ``` training_args = TrainingArguments( num_train_epochs=3, # total number of training epochs learning_rate=2e-05, per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_ratio=0.1, # number of warmup steps for learning rate scheduler weight_decay=0.06, # strength of weight decay fp16=True # mixed precision training ) ``` ### Eval results The model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy. mnli-m | mnli-mm | fever-nli | anli-all | anli-r3 ---------|----------|---------|----------|---------- 0.903 | 0.903 | 0.777 | 0.579 | 0.495 ## Limitations and bias Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases. ## Citation If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k. ### Ideas for cooperation or questions? 
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
google/muril-base-cased
afd9f36c7923d54e97903922ff1b260d091d202f
2022-06-10T13:33:04.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "arxiv:2103.10730", "arxiv:1810.04805", "arxiv:1911.02116", "arxiv:2003.11080", "arxiv:2009.05166", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
google
null
google/muril-base-cased
7,640
9
transformers
763
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- MuRIL: Multilingual Representations for Indian Languages === MuRIL is a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. We have released the pre-trained model (with the MLM layer intact, enabling masked word predictions) in this repository. We have also released the encoder on [TFHub](https://tfhub.dev/google/MuRIL/1) with an additional pre-processing module, that processes raw text into the expected input format for the encoder. You can find more details on MuRIL in this [paper](http://arxiv.org/abs/2103.10730). ## Overview This model uses a BERT base architecture [1] pretrained from scratch using the Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6] Indian languages. We use a training paradigm similar to multilingual bert, with a few modifications as listed: * We include translation and transliteration segment pairs in training as well. * We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to enhance low-resource performance. [7] See the Training section for more details. ## Training The MuRIL model is pre-trained on monolingual segments as well as parallel segments as detailed below : * Monolingual Data : We make use of publicly available corpora from Wikipedia and Common Crawl for 17 Indian languages. * Parallel Data : We have two types of parallel data : * Translated Data : We obtain translations of the above monolingual corpora using the Google NMT pipeline. We feed translated segment pairs as input. We also make use of the publicly available PMINDIA corpus. * Transliterated Data : We obtain transliterations of Wikipedia using the IndicTrans [8] library. We feed transliterated segment pairs as input. We also make use of the publicly available Dakshina dataset. We keep an exponent value of 0.3 to calculate duplication multiplier values for upsampling of lower resourced languages and set dupe factors accordingly. Note, we limit transliterated pairs to Wikipedia only. The model was trained using a self-supervised masked language modeling task. We do whole word masking with a maximum of 80 predictions. The model was trained for 1000K steps, with a batch size of 4096, and a max sequence length of 512. ### Trainable parameters All parameters in the module are trainable, and fine-tuning all parameters is the recommended practice. ## Uses & Limitations This model is intended to be used for a variety of downstream NLP tasks for Indian languages. This model is trained on transliterated data as well, a phenomomenon commonly observed in the Indian context. This model is not expected to perform well on languages other than the ones used in pretraining, i.e. 17 Indian languages. ## Evaluation We provide the results of fine-tuning this model on a set of downstream tasks.<br/> We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/> We also transliterate the test-sets and evaluate on the same.<br/> We use the same fine-tuning setting as is used by [9], except for TyDiQA, where we use additional SQuAD v1.1 English training data, similar to [10].<br/> For Tatoeba, we do not fine-tune the model, and use the pooled_output of the last layer as the sentence embedding.<br/> All results are computed in a zero-shot setting, with English being the high resource training set language. 
* Shown below are results on datasets from the XTREME benchmark (in %) <br/> PANX (F1) | ml | ta | te | en | bn | hi | mr | ur | Average :-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 54.77 | 51.24 | 50.16 | 84.40 | 68.59 | 65.13 | 58.44 | 31.36 | 58.01 MuRIL | 75.74 | 71.86 | 64.99 | 84.43 | 85.97 | 78.09 | 74.63 | 85.07 | 77.60 <br/> UDPOS (F1) | en | hi | mr | ta | te | ur | Average :--------- | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 95.35 | 66.09 | 71.27 | 59.58 | 76.98 | 57.85 | 71.19 MuRIL | 95.55 | 64.47 | 82.95 | 62.57 | 85.63 | 58.93 | 75.02 <br/> XNLI (Accuracy) | en | hi | ur | Average :-------------- | ----: | ----: | ----: | ------: mBERT | 81.72 | 60.52 | 58.20 | 66.81 MuRIL | 83.85 | 70.66 | 67.70 | 74.07 <br/> Tatoeba (Accuracy) | ml | ta | te | bn | hi | mr | ur | Average :----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 20.23 | 12.38 | 14.96 | 12.80 | 27.80 | 18.00 | 22.70 | 18.41 MuRIL | 26.35 | 36.81 | 17.52 | 20.20 | 31.50 | 26.60 | 17.10 | 25.15 <br/> XQUAD (F1/EM) | en | hi | Average :------------ | ----------: | ----------: | ----------: mBERT | 83.85/72.86 | 58.46/43.53 | 71.15/58.19 MuRIL | 84.31/72.94 | 73.93/58.32 | 79.12/65.63 <br/> MLQA (F1/EM) | en | hi | Average :----------- | ----------: | ----------: | ----------: mBERT | 80.39/67.30 | 50.28/35.18 | 65.34/51.24 MuRIL | 80.28/67.37 | 67.34/50.22 | 73.81/58.80 <br/> TyDiQA (F1/EM) | en | bn | te | Average :---------------- | ----------: | ----------: | ----------: | ----------: mBERT | 75.21/65.00 | 60.62/45.13 | 53.55/44.54 | 63.13/51.66 MuRIL | 74.10/64.55 | 78.03/66.37 | 73.95/46.94 | 75.36/59.28 * Shown below are results on the transliterated versions of the above test-sets. PANX (F1) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average :-------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 7.53 | 1.04 | 8.24 | 41.77 | 25.46 | 8.34 | 7.30 | 14.24 MuRIL | 63.39 | 7.00 | 53.62 | 72.94 | 69.75 | 68.77 | 68.41 | 57.70 <br/> UDPOS (F1) | hi_tr | mr_tr | ta_tr | te_tr | ur_tr | Average :--------- | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 25.00 | 33.67 | 24.02 | 36.21 | 22.07 | 28.20 MuRIL | 63.09 | 67.19 | 58.40 | 65.30 | 56.49 | 62.09 <br/> XNLI (Accuracy) | hi_tr | ur_tr | Average :-------------- | ----: | ----: | ------: mBERT | 39.6 | 38.86 | 39.23 MuRIL | 68.24 | 61.16 | 64.70 <br/> Tatoeba (Accuracy) | ml_tr | ta_tr | te_tr | bn_tr | hi_tr | mr_tr | ur_tr | Average :----------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ------: mBERT | 2.18 | 1.95 | 5.13 | 1.80 | 3.00 | 2.40 | 2.30 | 2.68 MuRIL | 10.33 | 11.07 | 11.54 | 8.10 | 14.90 | 7.20 | 13.70 | 10.98 <br/> ## References \[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint arXiv:1810.04805, 2018. \[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia) \[3]: [Common Crawl](http://commoncrawl.org/the-data/) \[4]: [PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html) \[5]: [Dakshina](https://github.com/google-research-datasets/dakshina) \[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi), Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya (or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu (ur). 
\[7]: Conneau, Alexis, et al. [Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf). arXiv preprint arXiv:1911.02116 (2019). \[8]: [IndicTrans](https://github.com/libindic/indic-trans) \[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M. (2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv preprint arXiv:2003.11080. \[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020). [FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf) arXiv preprint arXiv:2009.05166. ## Citation If you find MuRIL useful in your applications, please cite the following paper: ``` @misc{khanuja2021muril, title={MuRIL: Multilingual Representations for Indian Languages}, author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar}, year={2021}, eprint={2103.10730}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contact Please mail your queries/feedback to [email protected].
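As a usage illustration (not part of the original card), here is a minimal masked-word prediction sketch with this checkpoint; the Hindi example sentence is purely illustrative.

```python
from transformers import pipeline

# MuRIL keeps the MLM head, so the standard fill-mask pipeline applies.
fill_mask = pipeline("fill-mask", model="google/muril-base-cased")

for prediction in fill_mask("नमस्ते, आप कैसे [MASK]?"):  # illustrative Hindi sentence
    print(prediction["token_str"], round(prediction["score"], 3))
```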
r3dhummingbird/DialoGPT-medium-joshua
ff22e98bcb70ae1e082f54640c5c3bafd3950125
2021-07-19T23:18:30.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "license:mit" ]
conversational
false
r3dhummingbird
null
r3dhummingbird/DialoGPT-medium-joshua
7,633
12
transformers
764
--- thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png tags: - conversational license: mit --- # DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
valhalla/distilbart-mnli-12-9
66a037d826920a2f84a9d83edcbeb23a0951ed2e
2021-06-14T10:34:58.000Z
[ "pytorch", "jax", "bart", "text-classification", "dataset:mnli", "transformers", "distilbart", "distilbart-mnli", "zero-shot-classification" ]
zero-shot-classification
false
valhalla
null
valhalla/distilbart-mnli-12-9
7,612
null
transformers
765
--- datasets: - mnli tags: - distilbart - distilbart-mnli pipeline_tag: zero-shot-classification --- # DistilBart-MNLI distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart). We just copy alternating layers from `bart-large-mnli` and finetune more on the same data. | | matched acc | mismatched acc | | ------------------------------------------------------------------------------------ | ----------- | -------------- | | [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) (baseline, 12-12) | 89.9 | 90.01 | | [distilbart-mnli-12-1](https://huggingface.co/valhalla/distilbart-mnli-12-1) | 87.08 | 87.5 | | [distilbart-mnli-12-3](https://huggingface.co/valhalla/distilbart-mnli-12-3) | 88.1 | 88.19 | | [distilbart-mnli-12-6](https://huggingface.co/valhalla/distilbart-mnli-12-6) | 89.19 | 89.01 | | [distilbart-mnli-12-9](https://huggingface.co/valhalla/distilbart-mnli-12-9) | 89.56 | 89.52 | This is a very simple and effective technique, as we can see the performance drop is very little. Detailed performace trade-offs will be posted in this [sheet](https://docs.google.com/spreadsheets/d/1dQeUvAKpScLuhDV1afaPJRRAE55s2LpIzDVA5xfqxvk/edit?usp=sharing). ## Fine-tuning If you want to train these models yourself, clone the [distillbart-mnli repo](https://github.com/patil-suraj/distillbart-mnli) and follow the steps below Clone and install transformers from source ```bash git clone https://github.com/huggingface/transformers.git pip install -qqq -U ./transformers ``` Download MNLI data ```bash python transformers/utils/download_glue_data.py --data_dir glue_data --tasks MNLI ``` Create student model ```bash python create_student.py \ --teacher_model_name_or_path facebook/bart-large-mnli \ --student_encoder_layers 12 \ --student_decoder_layers 6 \ --save_path student-bart-mnli-12-6 \ ``` Start fine-tuning ```bash python run_glue.py args.json ``` You can find the logs of these trained models in this [wandb project](https://wandb.ai/psuraj/distilbart-mnli).
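For inference (as opposed to the fine-tuning recipe above), here is a minimal sketch of using this checkpoint through the zero-shot-classification pipeline; the input text and candidate labels are illustrative.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-9")

result = classifier(
    "The new GPU driver fixes a memory leak in the compositor.",  # illustrative text
    candidate_labels=["software", "sports", "cooking"],           # illustrative labels
)
print(result["labels"], result["scores"])
```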
sentence-transformers/roberta-large-nli-stsb-mean-tokens
768fca01ac32ae924414f7128af28ea1d9dfcada
2022-06-15T20:56:01.000Z
[ "pytorch", "tf", "jax", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/roberta-large-nli-stsb-mean-tokens
7,575
1
sentence-transformers
766
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/roberta-large-nli-stsb-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/roberta-large-nli-stsb-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/roberta-large-nli-stsb-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/roberta-large-nli-stsb-mean-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/roberta-large-nli-stsb-mean-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
charsiu/g2p_multilingual_byT5_small
834df67c125a811e1a60fbf9f0f39503115437ea
2022-05-19T05:02:14.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
charsiu
null
charsiu/g2p_multilingual_byT5_small
7,545
null
transformers
767
Entry not found
microsoft/unixcoder-base
02583b53b9290e674a43b6b74e89f98a71b2d22a
2022-03-23T06:05:18.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers", "license:apache-2.0" ]
feature-extraction
false
microsoft
null
microsoft/unixcoder-base
7,437
4
transformers
768
--- license: apache-2.0 ---
allenai/macaw-large
57fd83e05c764b04c36650fac1458e9816f2d355
2021-09-21T15:59:44.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "en", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/macaw-large
7,429
8
transformers
769
--- language: en widget: - text: $answer$ ; $mcoptions$ ; $question$ = What is the color of a cloudy sky? license: apache-2.0 --- # macaw-large ## Model description Macaw (<b>M</b>ulti-<b>a</b>ngle <b>c</b>(q)uestion <b>a</b>ns<b>w</b>ering) is a ready-to-use model capable of general question answering, showing robustness outside the domains it was trained on. It has been trained in "multi-angle" fashion, which means it can handle a flexible set of input and output "slots" (question, answer, multiple-choice options, context, and explanation). Macaw was built on top of [T5](https://github.com/google-research/text-to-text-transfer-transformer) and comes in three sizes: [macaw-11b](https://huggingface.co/allenai/macaw-11b), [macaw-3b](https://huggingface.co/allenai/macaw-3b), and [macaw-large](https://huggingface.co/allenai/macaw-large), as well as an answer-focused version featured on various leaderboards [macaw-answer-11b](https://huggingface.co/allenai/macaw-answer-11b). See https://github.com/allenai/macaw for more details.
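As a usage illustration (not part of the original card), here is a minimal generation sketch; the input string reuses the slot format from the widget example above, and the exact output formatting is determined by the model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

# Ask for the "answer" slot, following the widget example's input format.
input_string = "$answer$ ; $question$ = What is the color of a cloudy sky?"
input_ids = tokenizer(input_string, return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```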
microsoft/wavlm-large
c1423ed94bb01d80a3f5ce5bc39f6026a0f4828c
2022-02-02T21:21:50.000Z
[ "pytorch", "wavlm", "feature-extraction", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.13900", "transformers", "speech" ]
feature-extraction
false
microsoft
null
microsoft/wavlm-large
7,408
6
transformers
770
--- language: - en tags: - speech inference: false --- # WavLM-Large [Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm) The large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. The model was pre-trained on: - 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875) - 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909) - 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390) [Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei **Abstract** *Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.* The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm. # Usage This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on the [SUPERB benchmark](https://superbbenchmark.org/). **Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence of phonemes before fine-tuning. ## Speech Recognition To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition). ## Speech Classification To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification). 
## Speaker Verification TODO ## Speaker Diarization TODO # Contribution The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten). # License The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE) ![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
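As a quick usage illustration (not part of the original card), here is a minimal sketch that extracts hidden states before any fine-tuning; the feature-extractor settings and the random waveform are assumptions standing in for real 16kHz audio.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, WavLMModel

# Assumption: standard Wav2Vec2-style feature extraction of the raw 16 kHz waveform.
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16_000)
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

dummy_speech = torch.randn(16_000).tolist()  # one second of placeholder audio
inputs = feature_extractor(dummy_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, num_frames, 1024)
print(hidden_states.shape)
```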
cross-encoder/stsb-distilroberta-base
2a387f03597b030ff3dadcef7d73456ce23e3bb7
2021-08-05T08:41:53.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/stsb-distilroberta-base
7,400
null
transformers
771
--- license: apache-2.0 --- # Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/stsb-distilroberta-base') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, by loading it directly with the Transformers ``AutoModel`` classes, as sketched below.
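A hedged sketch of the plain-Transformers route mentioned above: it loads the checkpoint with `AutoModelForSequenceClassification` (an assumption about the intended head class) so the single logit per sentence pair can be read off as the similarity score; whether that logit still needs a sigmoid depends on how the head was trained.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/stsb-distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative sentence pairs; each pair is tokenized together for the cross-encoder.
features = tokenizer(
    ["A man is eating food.", "Two dogs run through a field."],
    ["A man is eating a piece of bread.", "Two dogs chase each other across a meadow."],
    padding=True, truncation=True, return_tensors="pt",
)

with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # one raw score per pair
print(scores)
```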
microsoft/BiomedVLP-CXR-BERT-general
93af83cefc6d3f7d0ef9a0b78b0d579452c6a546
2022-07-11T14:52:52.000Z
[ "pytorch", "bert", "fill-mask", "en", "arxiv:2204.09817", "arxiv:2103.00020", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
fill-mask
false
microsoft
null
microsoft/BiomedVLP-CXR-BERT-general
7,374
5
transformers
772
--- language: en tags: - exbert license: mit widget: - text: "Left pleural effusion with adjacent [MASK]." example_title: "Radiology 1" - text: "Heart size normal and lungs are [MASK]." example_title: "Radiology 2" - text: "[MASK] is a tumor suppressor gene." example_title: "Biomedical" - text: "The patient was on [MASK] for chronic atrial fibrillation" example_title: "Medication" --- # CXR-BERT-general [CXR-BERT](https://arxiv.org/abs/2204.09817) is a chest X-ray (CXR) domain-specific language model that makes use of an improved vocabulary, novel pretraining procedure, weight regularization, and text augmentations. The resulting model demonstrates improved performance on radiology natural language inference, radiology masked language model token prediction, and downstream vision-language processing tasks such as zero-shot phrase grounding and image classification. First, we pretrain **CXR-BERT-general** from a randomly initialized BERT model via Masked Language Modeling (MLM) on abstracts [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and clinical notes from the publicly-available [MIMIC-III](https://physionet.org/content/mimiciii/1.4/) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/). In that regard, the general model is expected be applicable for research in clinical domains other than the chest radiology through domain specific fine-tuning. **CXR-BERT-specialized** is continually pretrained from CXR-BERT-general to further specialize in the chest X-ray domain. At the final stage, CXR-BERT is trained in a multi-modal contrastive learning framework, similar to the [CLIP](https://arxiv.org/abs/2103.00020) framework. The latent representation of [CLS] token is utilized to align text/image embeddings. ## Model variations | Model | Model identifier on HuggingFace | Vocabulary | Note | | ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -------------- | --------------------------------------------------------- | | CXR-BERT-general | [microsoft/BiomedVLP-CXR-BERT-general](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) | PubMed & MIMIC | Pretrained for biomedical literature and clinical domains | | CXR-BERT-specialized (after multi-modal training) | [microsoft/BiomedVLP-CXR-BERT-specialized](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized) | PubMed & MIMIC | Pretrained for chest X-ray domain | ## Citation The corresponding manuscript is accepted to be presented at the [**European Conference on Computer Vision (ECCV) 2022**](https://eccv2022.ecva.net/) ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.09817, doi = {10.48550/ARXIV.2204.09817}, url = {https://arxiv.org/abs/2204.09817}, author = {Boecking, Benedikt and Usuyama, Naoto and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Hyland, Stephanie and Wetscherek, Maria and Naumann, Tristan and Nori, Aditya and Alvarez-Valle, Javier and Poon, Hoifung and Oktay, Ozan}, title = {Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing}, publisher = {arXiv}, year = {2022}, } ``` ## Model Use ### Intended Use This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper. #### Primary Intended Use The primary intended use is to support AI researchers building on top of this work. 
CXR-BERT and its associated models should be helpful for exploring various clinical NLP & VLP research questions, especially in the radiology domain. #### Out-of-Scope Use **Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://arxiv.org/abs/2204.09817) for more details. ## Data This model builds upon existing publicly-available datasets: - [PubMed](https://pubmed.ncbi.nlm.nih.gov/) - [MIMIC-III](https://physionet.org/content/mimiciii/) - [MIMIC-CXR](https://physionet.org/content/mimic-cxr/) These datasets reflect a broad variety of sources ranging from biomedical abstracts to intensive care unit notes to chest X-ray radiology notes. The radiology notes are accompanied with their associated chest x-ray DICOM images in MIMIC-CXR dataset. ## Performance We demonstrate that this language model achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and novel language pretraining objective leveraging semantics and discourse characteristics in radiology reports. A highlight of comparison to other common models, including [ClinicalBERT](https://aka.ms/clinicalbert) and [PubMedBERT](https://aka.ms/pubmedbert): | | RadNLI accuracy (MedNLI transfer) | Mask prediction accuracy | Avg. # tokens after tokenization | Vocabulary size | | ----------------------------------------------- | :-------------------------------: | :----------------------: | :------------------------------: | :-------------: | | RadNLI baseline | 53.30 | - | - | - | | ClinicalBERT | 47.67 | 39.84 | 78.98 (+38.15%) | 28,996 | | PubMedBERT | 57.71 | 35.24 | 63.55 (+11.16%) | 28,895 | | CXR-BERT (after Phase-III) | 60.46 | 77.72 | 58.07 (+1.59%) | 30,522 | | **CXR-BERT (after Phase-III + Joint Training)** | **65.21** | **81.58** | **58.07 (+1.59%)** | 30,522 | CXR-BERT also contributes to better vision-language representation learning through its improved text encoding capability. Below is the zero-shot phrase grounding performance on the **MS-CXR** dataset, which evaluates the quality of image-text latent representations. | Vision–Language Pretraining Method | Text Encoder | MS-CXR Phrase Grounding (Avg. CNR Score) | | ---------------------------------- | ------------ | :--------------------------------------: | | Baseline | ClinicalBERT | 0.769 | | Baseline | PubMedBERT | 0.773 | | ConVIRT | ClinicalBERT | 0.818 | | GLoRIA | ClinicalBERT | 0.930 | | **BioViL** | **CXR-BERT** | **1.027** | | **BioViL-L** | **CXR-BERT** | **1.142** | Additional details about performance can be found in the corresponding paper, [Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing](https://arxiv.org/abs/2204.09817). ## Limitations This model was developed using English corpora, and thus can be considered English-only. ## Further information Please refer to the corresponding paper, ["Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing", ECCV'22](https://arxiv.org/abs/2204.09817) for additional details on the model training and evaluation. For additional inference pipelines with CXR-BERT, please refer to the [HI-ML GitHub](https://aka.ms/biovil-code) repository. The associated source files will soon be accessible through this link.
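As a usage illustration (not part of the original card), here is a minimal fill-mask sketch using one of the widget sentences above; this assumes standard pipeline usage, and depending on any custom tokenizer/model code shipped with the checkpoint you may additionally need to pass `trust_remote_code=True` when loading.

```python
from transformers import pipeline

# Widget sentence from the card above; loading may require trust_remote_code=True
# if the checkpoint ships custom tokenizer/model code.
fill_mask = pipeline("fill-mask", model="microsoft/BiomedVLP-CXR-BERT-general")

for prediction in fill_mask("Heart size normal and lungs are [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```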
kykim/electra-kor-base
8599418d72f5dcb21ae3972ba2405f88c819b195
2021-01-22T00:28:50.000Z
[ "pytorch", "tf", "electra", "pretraining", "ko", "transformers" ]
null
false
kykim
null
kykim/electra-kor-base
7,372
1
transformers
773
--- language: ko --- # Electra base model for Korean * A 70GB Korean text dataset and 42,000 lower-cased subwords are used * Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor) ```python from transformers import ElectraTokenizerFast, ElectraModel tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base") model = ElectraModel.from_pretrained("kykim/electra-kor-base") ```
google/bert_uncased_L-6_H-768_A-12
c132ecc85d3d73b460b741cc50aa9ed18446c335
2021-05-19T17:34:36.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-6_H-768_A-12
7,350
null
transformers
774
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
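As a loading illustration (not part of the original card), the miniatures load through the standard BERT/Auto classes; the example sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-6_H-768_A-12")
model = AutoModel.from_pretrained("google/bert_uncased_L-6_H-768_A-12")

inputs = tokenizer("Compact BERT models are handy for distillation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for the L-6/H-768 miniature
```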
allenai/unifiedqa-t5-base
85413ec7c7b86263cade67192224aa5fc95838ac
2021-06-23T11:17:21.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/unifiedqa-t5-base
7,312
2
transformers
775
Entry not found
facebook/wmt19-en-de
b33976783993b11baabc19313275865ee87931e3
2020-12-11T21:39:55.000Z
[ "pytorch", "fsmt", "text2text-generation", "en", "de", "dataset:wmt19", "arxiv:1907.06616", "transformers", "translation", "wmt19", "facebook", "license:apache-2.0", "autotrain_compatible" ]
translation
false
facebook
null
facebook/wmt19-en-de
7,310
null
transformers
776
--- language: - en - de tags: - translation - wmt19 - facebook license: apache-2.0 datasets: - wmt19 metrics: - bleu thumbnail: https://huggingface.co/front/thumbnails/facebook.png --- # FSMT ## Model description This is a ported version of [fairseq wmt19 transformer](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) for en-de. For more details, please see, [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616). The abbreviation FSMT stands for FairSeqMachineTranslation All four models are available: * [wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) * [wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) * [wmt19-en-de](https://huggingface.co/facebook/wmt19-en-de) * [wmt19-de-en](https://huggingface.co/facebook/wmt19-de-en) ## Intended uses & limitations #### How to use ```python from transformers import FSMTForConditionalGeneration, FSMTTokenizer mname = "facebook/wmt19-en-de" tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) input = "Machine learning is great, isn't it?" input_ids = tokenizer.encode(input, return_tensors="pt") outputs = model.generate(input_ids) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # Maschinelles Lernen ist großartig, oder? ``` #### Limitations and bias - The original (and this ported model) doesn't seem to handle well inputs with repeated sub-phrases, [content gets truncated](https://discuss.huggingface.co/t/issues-with-translating-inputs-containing-repeated-phrases/981) ## Training data Pretrained weights were left identical to the original model released by fairseq. For more details, please, see the [paper](https://arxiv.org/abs/1907.06616). ## Eval results pair | fairseq | transformers -------|---------|---------- en-de | [43.1](http://matrix.statmt.org/matrix/output/1909?run_id=6862) | 42.83 The score is slightly below the score reported by `fairseq`, since `transformers`` currently doesn't support: - model ensemble, therefore the best performing checkpoint was ported (``model4.pt``). - re-ranking The score was calculated using this code: ```bash git clone https://github.com/huggingface/transformers cd transformers export PAIR=en-de export DATA_DIR=data/$PAIR export SAVE_DIR=data/$PAIR export BS=8 export NUM_BEAMS=15 mkdir -p $DATA_DIR sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target echo $PAIR PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS ``` note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`. ## Data Sources - [training, etc.](http://www.statmt.org/wmt19/) - [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561) ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020}, title={Facebook FAIR's WMT19 News Translation Task Submission}, author={Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey}, booktitle={Proc. of WMT}, } ``` ## TODO - port model ensemble (fairseq uses 4 model checkpoints)
google/bigbird-base-trivia-itc
29c5c29e0297ad7eb9b90ef69fecba71508f5ca4
2021-06-02T14:53:34.000Z
[ "pytorch", "jax", "big_bird", "question-answering", "en", "dataset:trivia_qa", "arxiv:2007.14062", "transformers", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
google
null
google/bigbird-base-trivia-itc
7,286
1
transformers
777
---
language: en
license: apache-2.0
datasets:
- trivia_qa
---

# BigBird base trivia-itc

This model is a fine-tuned checkpoint of `bigbird-roberta-base`, fine-tuned on `trivia_qa` with a `BigBirdForQuestionAnsweringHead` on top.

Check out [this](https://colab.research.google.com/drive/1DVOm1VHjW0eKCayFq1N2GpY6GR9M4tJP?usp=sharing) to see how well `google/bigbird-base-trivia-itc` performs on question answering.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, BigBirdForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-base-trivia-itc")

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc")

# you can change `attention_type` to full attention like this:
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", block_size=16, num_random_blocks=2)

question = "Replace me by any text you'd like."
context = "Put some context for answering"
encoded_input = tokenizer(question, context, return_tensors='pt')
output = model(**encoded_input)
```

# Fine-tuning config & hyper-parameters

- No. of global token = 128
- Window length = 192
- No. of random token = 192
- Max. sequence length = 4096
- No. of heads = 12
- No. of hidden layers = 12
- Hidden layer size = 768
- Batch size = 32
- Loss = cross-entropy noisy spans

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
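As an unofficial addendum to the How to use snippet above, the raw output can be turned into an answer string in the usual extractive-QA way; `tokenizer`, `encoded_input` and `output` are the variables defined there:

```python
import torch

# pick the most likely start and end positions of the answer span
start_idx = int(torch.argmax(output.start_logits, dim=-1))
end_idx = int(torch.argmax(output.end_logits, dim=-1))

answer_ids = encoded_input["input_ids"][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```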
Harveenchadha/vakyansh-wav2vec2-hindi-him-4200
e2568c3f7868d8aa3aaabcf28fa100d10d54c170
2022-01-29T06:03:43.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hi", "arxiv:2107.07402", "transformers", "audio", "speech", "license:mit", "model-index" ]
automatic-speech-recognition
false
Harveenchadha
null
Harveenchadha/vakyansh-wav2vec2-hindi-him-4200
7,235
0
transformers
778
--- language: hi #datasets: #- Interspeech 2021 metrics: - wer tags: - audio - automatic-speech-recognition - speech license: mit model-index: - name: Wav2Vec2 Vakyansh Hindi Model by Harveen Chadha results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice hi type: common_voice args: hi metrics: - name: Test WER type: wer value: 33.17 --- ## Spaces Demo Check the spaces demo [here](https://huggingface.co/spaces/Harveenchadha/wav2vec2-vakyansh-hindi/tree/main) ## Pretrained Model Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz. **Note: The result from this model is without a language model so you may witness a higher WER in some cases.** ## Dataset This model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now. ## Training Script Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation). In case you want to explore training logs on wandb they are [here](https://wandb.ai/harveenchadha/hindi_finetuning_multilingual?workspace=user-harveenchadha). ## [Colab Demo](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_hindi_him_4200_demo.ipynb) ## Usage The model can be used directly (without a language model) as follows: ```python import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import argparse def parse_transcription(wav_file): # load pretrained model processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") # load audio audio_input, sample_rate = sf.read(wav_file) # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor.decode(predicted_ids[0], skip_special_tokens=True) print(transcription) ``` ## Evaluation The model can be evaluated as follows on the hindi test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "hi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 33.17 %

[**Colab Evaluation**](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_vakyansh_hindi_him_4200_evaluation_common_voice.ipynb)

## Credits
Thanks to the Ekstep Foundation for making this possible. The Vakyansh team will be open-sourcing speech models in all the Indic languages.
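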
moussaKam/frugalscore_tiny_bert-base_bert-score
a487e5a875e63ef1f9cf6015a3a11be2d80aa550
2022-02-01T10:50:21.000Z
[ "pytorch", "bert", "text-classification", "arxiv:2110.08559", "transformers" ]
text-classification
false
moussaKam
null
moussaKam/frugalscore_tiny_bert-base_bert-score
7,234
null
transformers
779
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
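The card above gives no usage snippet. A minimal sketch, under the assumption that each checkpoint is a standard `transformers` sentence-pair regression model emitting one logit that approximates the teacher metric (see the project GitHub linked above for the reference scoring code), might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "moussaKam/frugalscore_tiny_bert-base_bert-score"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# assumption: (reference, candidate) pairs are encoded jointly and scored with a single logit
references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]

inputs = tokenizer(references, candidates, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)
print(scores)  # cheap approximation of the score the teacher metric would assign
```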
digitalepidemiologylab/covid-twitter-bert-v2
b113bc3c2590d7b32ed62603fe1ebe32e1e5beee
2021-09-22T08:20:06.000Z
[ "pytorch", "tf", "jax", "bert", "en", "transformers", "Twitter", "COVID-19", "license:mit" ]
null
false
digitalepidemiologylab
null
digitalepidemiologylab/covid-twitter-bert-v2
7,203
2
transformers
780
--- language: en thumbnail: https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png tags: - Twitter - COVID-19 license: mit --- # COVID-Twitter-BERT v2 ## Model description BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. This model is identical to [covid-twitter-bert](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert) - but trained on more data, resulting in higher downstream performance. Find more info on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert). ## Intended uses & limitations The model can e.g. be used in the `fill-mask` task (see below). You can also use the model without the MLM/NSP heads and train a classifier with it. #### How to use ```python from transformers import pipeline import json pipe = pipeline(task='fill-mask', model='digitalepidemiologylab/covid-twitter-bert-v2') out = pipe(f"In places with a lot of people, it's a good idea to wear a {pipe.tokenizer.mask_token}") print(json.dumps(out, indent=4)) [ { "sequence": "[CLS] in places with a lot of people, it's a good idea to wear a mask [SEP]", "score": 0.9998226761817932, "token": 7308, "token_str": "mask" }, ... ] ``` ## Training procedure This model was trained on 97M unique tweets (1.2B training examples) collected between January 12 and July 5, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training. ## Eval results The model was evaluated based on downstream Twitter text classification tasks from previous SemEval challenges. ### BibTeX entry and citation info ```bibtex @article{muller2020covid, title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter}, author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E}, journal={arXiv preprint arXiv:2005.07503}, year={2020} } ``` or ```Martin Müller, Marcel Salathé, and Per E. Kummervold. COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter. arXiv preprint arXiv:2005.07503 (2020). ```
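As an unofficial addendum illustrating the classifier route mentioned under "Intended uses & limitations" (the 3-label setup below is only a placeholder; you still need to fine-tune on your own labels):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "digitalepidemiologylab/covid-twitter-bert-v2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# drops the MLM/NSP heads and adds a randomly initialised classification head
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

batch = tokenizer(["Masks are required on public transport."],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(**batch).logits  # fine-tune before relying on these predictions
```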
vinai/bertweet-large
67477168d449ccc8abb725e2123a0d6e44f27f4b
2022-06-08T04:43:57.000Z
[ "pytorch", "tf", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vinai
null
vinai/bertweet-large
7,183
2
transformers
781
# <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure. The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic. The general architecture and experimental results of BERTweet can be found in our [paper](https://aclanthology.org/2020.emnlp-demos.2/): @inproceedings{bertweet, title = {{BERTweet: A pre-trained language model for English Tweets}}, author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, pages = {9--14}, year = {2020} } **Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software. For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)! ### Main results <p float="left"> <img width="275" alt="postagging" src="https://user-images.githubusercontent.com/2412555/135724590-01d8d435-262d-44fe-a383-cd39324fe190.png" /> <img width="275" alt="ner" src="https://user-images.githubusercontent.com/2412555/135724598-1e3605e7-d8ce-4c5e-be4a-62ae8501fae7.png" /> </p> <p float="left"> <img width="275" alt="sentiment" src="https://user-images.githubusercontent.com/2412555/135724597-f1981f1e-fe73-4c03-b1ff-0cae0cc5f948.png" /> <img width="275" alt="irony" src="https://user-images.githubusercontent.com/2412555/135724595-15f4f2c8-bbb6-4ee6-82a0-034769dec183.png" /> </p>
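The card above contains no code; a minimal, unofficial loading sketch with the standard auto classes (raw tweets may additionally benefit from the normalisation tool described on the BERTweet homepage) is:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-large")
model = AutoModel.from_pretrained("vinai/bertweet-large")

tweet = "BERTweet models work well on noisy, informal Twitter text"
inputs = tokenizer(tweet, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, seq_len, 1024)
```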
ai4bharat/indic-bert
97ae2d6440dbd1a2698540223dc00b43075c69c9
2021-04-12T09:06:47.000Z
[ "pytorch", "albert", "en", "dataset:AI4Bharat IndicNLP Corpora", "transformers", "license:mit" ]
null
false
ai4bharat
null
ai4bharat/indic-bert
7,147
12
transformers
782
--- language: en license: mit datasets: - AI4Bharat IndicNLP Corpora --- # IndicBERT IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on our novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. IndicBERT has much fewer parameters than other multilingual models (mBERT, XLM-R etc.) while it also achieves a performance on-par or better than these models. The 12 languages covered by IndicBERT are: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. The code can be found [here](https://github.com/divkakwani/indic-bert). For more information, checkout our [project page](https://indicnlp.ai4bharat.org/) or our [paper](https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf). ## Pretraining Corpus We pre-trained indic-bert on AI4Bharat's monolingual corpus. The corpus has the following distribution of languages: | Language | as | bn | en | gu | hi | kn | | | ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | **No. of Tokens** | 36.9M | 815M | 1.34B | 724M | 1.84B | 712M | | | **Language** | **ml** | **mr** | **or** | **pa** | **ta** | **te** | **all** | | **No. of Tokens** | 767M | 560M | 104M | 814M | 549M | 671M | 8.9B | ## Evaluation Results IndicBERT is evaluated on IndicGLUE and some additional tasks. The results are summarized below. For more details about the tasks, refer our [official repo](https://github.com/divkakwani/indic-bert) #### IndicGLUE Task | mBERT | XLM-R | IndicBERT -----| ----- | ----- | ------ News Article Headline Prediction | 89.58 | 95.52 | **95.87** Wikipedia Section Title Prediction| **73.66** | 66.33 | 73.31 Cloze-style multiple-choice QA | 39.16 | 27.98 | **41.87** Article Genre Classification | 90.63 | 97.03 | **97.34** Named Entity Recognition (F1-score) | **73.24** | 65.93 | 64.47 Cross-Lingual Sentence Retrieval Task | 21.46 | 13.74 | **27.12** Average | 64.62 | 61.09 | **66.66** #### Additional Tasks Task | Task Type | mBERT | XLM-R | IndicBERT -----| ----- | ----- | ------ | ----- BBC News Classification | Genre Classification | 60.55 | **75.52** | 74.60 IIT Product Reviews | Sentiment Analysis | 74.57 | **78.97** | 71.32 IITP Movie Reviews | Sentiment Analaysis | 56.77 | **61.61** | 59.03 Soham News Article | Genre Classification | 80.23 | **87.6** | 78.45 Midas Discourse | Discourse Analysis | 71.20 | **79.94** | 78.44 iNLTK Headlines Classification | Genre Classification | 87.95 | 93.38 | **94.52** ACTSA Sentiment Analysis | Sentiment Analysis | 48.53 | 59.33 | **61.18** Winograd NLI | Natural Language Inference | 56.34 | 55.87 | **56.34** Choice of Plausible Alternative (COPA) | Natural Language Inference | 54.92 | 51.13 | **58.33** Amrita Exact Paraphrase | Paraphrase Detection | **93.81** | 93.02 | 93.75 Amrita Rough Paraphrase | Paraphrase Detection | 83.38 | 82.20 | **84.33** Average | | 69.84 | **74.42** | 73.66 \* Note: all models have been restricted to a max_seq_length of 128. ## Downloads The model can be downloaded [here](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/models/indic-bert-v1.tar.gz). Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from [Huggingface](https://huggingface.co/ai4bharat/indic-bert). 
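A minimal, unofficial loading sketch with the `transformers` auto classes is shown below; the Hindi sentence is an arbitrary example, and on some `transformers` versions you may want to pass `keep_accents=True` to the ALBERT tokenizer so that Indic diacritics are not stripped.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
model = AutoModel.from_pretrained("ai4bharat/indic-bert")

text = "भारत एक विशाल देश है।"  # "India is a vast country."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```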
## Citing If you are using any of the resources, please cite the following article: ``` @inproceedings{kakwani2020indicnlpsuite, title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}}, author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar}, year={2020}, booktitle={Findings of EMNLP}, } ``` We would like to hear from you if: - You are using our resources. Please let us know how you are putting these resources to use. - You have any feedback on these resources. ## License The IndicBERT code (and models) are released under the MIT License. ## Contributors - Divyanshu Kakwani - Anoop Kunchukuttan - Gokul NC - Satish Golla - Avik Bhattacharyya - Mitesh Khapra - Pratyush Kumar This work is the outcome of a volunteer effort as part of [AI4Bharat initiative](https://ai4bharat.org). ## Contact - Anoop Kunchukuttan ([[email protected]](mailto:[email protected])) - Mitesh Khapra ([[email protected]](mailto:[email protected])) - Pratyush Kumar ([[email protected]](mailto:[email protected]))
svalabs/twitter-xlm-roberta-bitcoin-sentiment
34915a8cf74b0ad061a6f383eded7aecd293f3e5
2022-05-12T09:28:14.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
false
svalabs
null
svalabs/twitter-xlm-roberta-bitcoin-sentiment
7,139
null
transformers
783
This model is mainly focused on extracting the sentiment of tweets about Bitcoin. It was trained on data labeled manually with Rubrix (https://www.rubrix.ml/). The training set contained approximately 500 samples, with a further 500 samples held out for testing. cardiffnlp/twitter-xlm-roberta-base-sentiment (https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) was used as a weak classifier and also as the base model for fine-tuning.
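A minimal, unofficial usage sketch (the label names returned come from the model config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="svalabs/twitter-xlm-roberta-bitcoin-sentiment")
print(classifier("Bitcoin just broke through its previous all-time high!"))
# -> [{'label': ..., 'score': ...}]
```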
jonatasgrosman/wav2vec2-large-xlsr-53-german
934c45f3e6939b6b6d261b4c71ed2755810e7fe6
2022-07-27T23:37:37.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "de", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "transformers", "audio", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-german
7,115
5
transformers
784
--- language: de license: apache-2.0 datasets: - common_voice - mozilla-foundation/common_voice_6_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - de - hf-asr-leaderboard - mozilla-foundation/common_voice_6_0 - robust-speech-event - speech - xlsr-fine-tuning-week model-index: - name: XLSR Wav2Vec2 German by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice de type: common_voice args: de metrics: - name: Test WER type: wer value: 12.06 - name: Test CER type: cer value: 2.92 - name: Test WER (+LM) type: wer value: 8.74 - name: Test CER (+LM) type: cer value: 2.28 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: de metrics: - name: Dev WER type: wer value: 32.75 - name: Dev CER type: cer value: 13.64 - name: Dev WER (+LM) type: wer value: 26.6 - name: Dev CER (+LM) type: cer value: 12.58 --- # Fine-tuned XLSR-53 large model for speech recognition in German Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-german") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "de" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-german" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS. | ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS | | ES KOMMT ZUM SHOWDOWN IN GSTAAD. 
| ES KOMMT ZUG STUNDEDAUTENESTERKT | | IHRE FOTOSTRECKEN ERSCHIENEN IN MODEMAGAZINEN WIE DER VOGUE, HARPER’S BAZAAR UND MARIE CLAIRE. | IHRE FOTELSTRECKEN ERSCHIENEN MIT MODEMAGAZINEN WIE DER VALG AT DAS BASIN MA RIQUAIR | | FELIPE HAT EINE AUCH FÜR MONARCHEN UNGEWÖHNLICH LANGE TITELLISTE. | FELIPPE HAT EINE AUCH FÜR MONACHEN UNGEWÖHNLICH LANGE TITELLISTE | | ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET. | ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET M | | WAS SOLLS, ICH BIN BEREIT. | WAS SOLL'S ICH BIN BEREIT | | DAS INTERNET BESTEHT AUS VIELEN COMPUTERN, DIE MITEINANDER VERBUNDEN SIND. | DAS INTERNET BESTEHT AUS VIELEN COMPUTERN DIE MITEINANDER VERBUNDEN SIND | | DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM. | DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM | | DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND. | DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND | | SIE WAR DIE COUSINE VON CARL MARIA VON WEBER. | SIE WAR DIE COUSINE VON KARL-MARIA VON WEBER | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset mozilla-foundation/common_voice_6_0 --config de --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-german, title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}erman}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german}}, year={2021} } ```
deepset/bert-small-mm_retrieval-question_encoder
a34edf571667cc1ba38cec55c56f2905f13336a2
2021-10-19T15:51:37.000Z
[ "pytorch", "dpr", "feature-extraction", "transformers" ]
feature-extraction
false
deepset
null
deepset/bert-small-mm_retrieval-question_encoder
7,099
null
transformers
785
Entry not found
nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large
160deb78aca30f63754e512a93337ce8013a32ca
2021-06-20T19:03:02.000Z
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
nreimers
null
nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large
7,093
6
transformers
786
# MiniLMv2 This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm)
nvidia/segformer-b0-finetuned-ade-512-512
677af011c308b27a94d3ec6098c86c31c4fb6e7d
2022-07-20T09:52:37.000Z
[ "pytorch", "tf", "segformer", "dataset:scene_parse_150", "arxiv:2105.15203", "transformers", "vision", "image-segmentation", "license:apache-2.0" ]
image-segmentation
false
nvidia
null
nvidia/segformer-b0-finetuned-ade-512-512
7,091
7
transformers
787
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
  example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
  example_title: Castle
---

# SegFormer (b0-sized) model fine-tuned on ADE20k

SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to segment an image of the COCO 2017 dataset into the 150 ADE20k classes:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
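As an unofficial addendum to the How to use snippet above, the low-resolution logits are usually upsampled back to the input size and argmax-ed to obtain a per-pixel class map; `logits` and `image` are the variables defined there:

```python
import torch

# PIL's image.size is (width, height); interpolate expects (height, width)
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]  # (height, width) tensor of ADE20k class ids
print(segmentation_map.shape)
```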
flaubert/flaubert_small_cased
21a2d6f46294ad07a0b692d96af443990c07f790
2021-05-19T16:56:07.000Z
[ "pytorch", "flaubert", "fill-mask", "fr", "dataset:flaubert", "transformers", "bert", "language-model", "flue", "french", "flaubert-small", "cased", "license:mit", "autotrain_compatible" ]
fill-mask
false
flaubert
null
flaubert/flaubert_small_cased
7,078
1
transformers
788
--- language: fr license: mit datasets: - flaubert metrics: - flue tags: - bert - language-model - flaubert - flue - french - flaubert-small - cased --- # FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
esiebomajeremiah/autonlp-email-classification-657119381
484ba1babc3906d77331d95c1587aea7f3683637
2022-03-22T13:57:29.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:esiebomajeremiah/autonlp-data-email-classification", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
esiebomajeremiah
null
esiebomajeremiah/autonlp-email-classification-657119381
7,026
null
transformers
789
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - esiebomajeremiah/autonlp-data-email-classification co2_eq_emissions: 3.516233232503715 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 657119381 - CO2 Emissions (in grams): 3.516233232503715 ## Validation Metrics - Loss: 0.00037395773688331246 - Accuracy: 1.0 - Precision: 1.0 - Recall: 1.0 - AUC: 1.0 - F1: 1.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/esiebomajeremiah/autonlp-email-classification-657119381 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("esiebomajeremiah/autonlp-email-classification-657119381", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("esiebomajeremiah/autonlp-email-classification-657119381", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
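Continuing the Python snippet above, a small unofficial addition turns the raw logits into a predicted label:

```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```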
HooshvareLab/bert-fa-base-uncased
a04aa40c97bcdde570ae11986a534542c2995a62
2021-05-18T21:02:21.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "fa", "arxiv:2005.12515", "transformers", "bert-fa", "bert-persian", "persian-lm", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
HooshvareLab
null
HooshvareLab/bert-fa-base-uncased
7,008
2
transformers
790
--- language: fa tags: - bert-fa - bert-persian - persian-lm license: apache-2.0 --- # ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models. ## Introduction ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than `3.9M` documents, `73M` sentences, and `1.3B` words. Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515) ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=bert-fa) to look for fine-tuned versions on a task that interests you. ### How to use #### TensorFlow 2.0 ```python from transformers import AutoConfig, AutoTokenizer, TFAutoModel config = AutoConfig.from_pretrained("HooshvareLab/bert-fa-base-uncased") tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased") model = TFAutoModel.from_pretrained("HooshvareLab/bert-fa-base-uncased") text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است." tokenizer.tokenize(text) >>> ['ما', 'در', 'هوش', '##واره', 'معتقدیم', 'با', 'انتقال', 'صحیح', 'دانش', 'و', 'اگاهی', '،', 'همه', 'افراد', 'میتوانند', 'از', 'ابزارهای', 'هوشمند', 'استفاده', 'کنند', '.', 'شعار', 'ما', 'هوش', 'مصنوعی', 'برای', 'همه', 'است', '.'] ``` #### Pytorch ```python from transformers import AutoConfig, AutoTokenizer, AutoModel config = AutoConfig.from_pretrained("HooshvareLab/bert-fa-base-uncased") tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased") model = AutoModel.from_pretrained("HooshvareLab/bert-fa-base-uncased") ``` ## Training ParsBERT trained on a massive amount of public corpora ([Persian Wikidumps](https://dumps.wikimedia.org/fawiki/), [MirasText](https://github.com/miras-tech/MirasText)) and six other manually crawled text data from a various type of websites ([BigBang Page](https://bigbangpage.com/) `scientific`, [Chetor](https://www.chetor.com/) `lifestyle`, [Eligasht](https://www.eligasht.com/Blog/) `itinerary`, [Digikala](https://www.digikala.com/mag/) `digital magazine`, [Ted Talks](https://www.ted.com/talks) `general conversational`, Books `novels, storybooks, short stories from old to the contemporary era`). As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpora into a proper format. ## Goals Objective goals during training are as below (after 300k steps). 
``` bash ***** Eval results ***** global_step = 300000 loss = 1.4392426 masked_lm_accuracy = 0.6865794 masked_lm_loss = 1.4469004 next_sentence_accuracy = 1.0 next_sentence_loss = 6.534152e-05 ``` ## Derivative models ### Base Config #### ParsBERT v2.0 Model - [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased) #### ParsBERT v2.0 Sentiment Analysis - [HooshvareLab/bert-fa-base-uncased-sentiment-digikala](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-digikala) - [HooshvareLab/bert-fa-base-uncased-sentiment-snappfood](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-snappfood) - [HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary) - [HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi) #### ParsBERT v2.0 Text Classification - [HooshvareLab/bert-fa-base-uncased-clf-digimag](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-clf-digimag) - [HooshvareLab/bert-fa-base-uncased-clf-persiannews](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-clf-persiannews) #### ParsBERT v2.0 NER - [HooshvareLab/bert-fa-base-uncased-ner-peyma](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-ner-peyma) - [HooshvareLab/bert-fa-base-uncased-ner-arman](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-ner-arman) ## Eval results ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling. ### Sentiment Analysis (SA) Task | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------:|:-----------:|:-----:|:-------------:| | Digikala User Comments | 81.72 | 81.74* | 80.74 | - | | SnappFood User Comments | 87.98 | 88.12* | 87.87 | - | | SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 | ### Text Classification (TC) Task | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | |:-----------------:|:-----------:|:-----------:|:-----:| | Digikala Magazine | 93.65* | 93.59 | 90.72 | | Persian News | 97.44* | 97.19 | 95.79 | ### Named Entity Recognition (NER) Task | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 93.40* | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | | ARMAN | 99.84* | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
cross-encoder/nli-roberta-base
1c9dadfb1d7bcaac49176fd3a5de914f6ae2bd42
2021-08-05T08:41:05.000Z
[ "pytorch", "jax", "roberta", "text-classification", "en", "dataset:multi_nli", "dataset:snli", "transformers", "roberta-base", "license:apache-2.0", "zero-shot-classification" ]
zero-shot-classification
false
cross-encoder
null
cross-encoder/nli-roberta-base
6,989
3
transformers
791
--- language: en pipeline_tag: zero-shot-classification tags: - roberta-base datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-roberta-base') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-roberta-base') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-roberta-base') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-roberta-base') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
klue/roberta-base
67dd433d36ebc66a42c9aaa85abcf8d2620e41d9
2021-10-20T16:10:25.000Z
[ "pytorch", "roberta", "fill-mask", "ko", "arxiv:2105.09680", "transformers", "korean", "klue", "autotrain_compatible" ]
fill-mask
false
klue
null
klue/roberta-base
6,986
null
transformers
792
--- language: ko tags: - korean - klue mask_token: "[MASK]" widget: - text: 대한민국의 수도는 [MASK] 입니다. --- # KLUE RoBERTa base Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details. ## How to use _NOTE:_ Use `BertTokenizer` instead of RobertaTokenizer. (`AutoTokenizer` will load `BertTokenizer`) ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("klue/roberta-base") tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base") ``` ## BibTeX entry and citation info ```bibtex @misc{park2021klue, title={KLUE: Korean Language Understanding Evaluation}, author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho}, year={2021}, eprint={2105.09680}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
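As an unofficial sketch matching the widget example above (the `fill-mask` pipeline picks up the BERT-style tokenizer automatically via `AutoTokenizer`):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="klue/roberta-base")
for prediction in fill("대한민국의 수도는 [MASK] 입니다.")[:3]:
    print(prediction["token_str"], prediction["score"])
```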
facebook/wmt19-de-en
80d366f635721148ffa2a0a58591cb672c9b4982
2020-12-11T21:39:51.000Z
[ "pytorch", "fsmt", "text2text-generation", "de", "en", "dataset:wmt19", "arxiv:1907.06616", "transformers", "translation", "wmt19", "facebook", "license:apache-2.0", "autotrain_compatible" ]
translation
false
facebook
null
facebook/wmt19-de-en
6,979
null
transformers
793
--- language: - de - en tags: - translation - wmt19 - facebook license: apache-2.0 datasets: - wmt19 metrics: - bleu thumbnail: https://huggingface.co/front/thumbnails/facebook.png --- # FSMT ## Model description This is a ported version of [fairseq wmt19 transformer](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) for de-en. For more details, please see, [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616). The abbreviation FSMT stands for FairSeqMachineTranslation All four models are available: * [wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) * [wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) * [wmt19-en-de](https://huggingface.co/facebook/wmt19-en-de) * [wmt19-de-en](https://huggingface.co/facebook/wmt19-de-en) ## Intended uses & limitations #### How to use ```python from transformers import FSMTForConditionalGeneration, FSMTTokenizer mname = "facebook/wmt19-de-en" tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) input = "Maschinelles Lernen ist großartig, oder?" input_ids = tokenizer.encode(input, return_tensors="pt") outputs = model.generate(input_ids) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # Machine learning is great, isn't it? ``` #### Limitations and bias - The original (and this ported model) doesn't seem to handle well inputs with repeated sub-phrases, [content gets truncated](https://discuss.huggingface.co/t/issues-with-translating-inputs-containing-repeated-phrases/981) ## Training data Pretrained weights were left identical to the original model released by fairseq. For more details, please, see the [paper](https://arxiv.org/abs/1907.06616). ## Eval results pair | fairseq | transformers -------|---------|---------- de-en | [42.3](http://matrix.statmt.org/matrix/output/1902?run_id=6750) | 41.35 The score is slightly below the score reported by `fairseq`, since `transformers`` currently doesn't support: - model ensemble, therefore the best performing checkpoint was ported (``model4.pt``). - re-ranking The score was calculated using this code: ```bash git clone https://github.com/huggingface/transformers cd transformers export PAIR=de-en export DATA_DIR=data/$PAIR export SAVE_DIR=data/$PAIR export BS=8 export NUM_BEAMS=15 mkdir -p $DATA_DIR sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target echo $PAIR PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS ``` note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`. ## Data Sources - [training, etc.](http://www.statmt.org/wmt19/) - [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561) ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020}, title={Facebook FAIR's WMT19 News Translation Task Submission}, author={Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey}, booktitle={Proc. of WMT}, } ``` ## TODO - port model ensemble (fairseq uses 4 model checkpoints)
HooshvareLab/bert-fa-zwnj-base
3880fac085e1a338e9564907cba0adeb9e14bc72
2021-05-18T21:05:42.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "fa", "arxiv:2005.12515", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
HooshvareLab
null
HooshvareLab/bert-fa-zwnj-base
6,937
3
transformers
794
---
language: fa
license: apache-2.0
---

# ParsBERT (v3.0)
A Transformer-based Model for Persian Language Understanding

The new version of ParsBERT (v3.0) is available today and can handle the zero-width non-joiner (ZWNJ) character used in Persian writing. The model was also trained on new corpora of multiple types, with a new vocabulary.

## Introduction

ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).

Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)

### BibTeX entry and citation info

Please cite in publications as the following:

```bibtex
@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```

## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
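The card above has no usage snippet; a minimal, unofficial fill-mask sketch (the Persian example sentence, which contains a ZWNJ, is arbitrary) is:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="HooshvareLab/bert-fa-zwnj-base")
# "The [MASK] language is one of the Indo-European languages."
print(fill("زبان [MASK] یکی از زبان‌های هند و اروپایی است."))
```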
gogamza/kobart-base-v2
d9a1f640896cef8dcfd693b1bc57510a2b09a18f
2021-11-11T07:43:35.000Z
[ "pytorch", "bart", "feature-extraction", "ko", "transformers", "license:mit" ]
feature-extraction
false
gogamza
null
gogamza/kobart-base-v2
6,910
3
transformers
795
---
language: ko
tags:
- bart
license: mit
---

## KoBART-base-v2

With the addition of chat data, this model is trained to better handle the semantics of longer sequences than the original KoBART.

```python
from transformers import PreTrainedTokenizerFast, BartModel

tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-base-v2')
model = BartModel.from_pretrained('gogamza/kobart-base-v2')
```

### Performance

NSMC accuracy: 0.901
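### Usage sketch

Continuing the loading snippet above, a minimal feature-extraction sketch (matching this record's `feature-extraction` pipeline tag); the Korean input sentence is illustrative and not from the original card.

```python
import torch
from transformers import PreTrainedTokenizerFast, BartModel

tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-base-v2')
model = BartModel.from_pretrained('gogamza/kobart-base-v2')

# Encode an illustrative Korean sentence ("Hello.").
input_ids = tokenizer("안녕하세요.", return_tensors="pt")["input_ids"]

# Run only the encoder to obtain contextual features for the input tokens.
with torch.no_grad():
    encoder_outputs = model.get_encoder()(input_ids=input_ids)

print(encoder_outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```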
Helsinki-NLP/opus-mt-tr-en
3252b40d8b9dead8012364425fd00db1a26abf85
2021-09-11T10:49:35.000Z
[ "pytorch", "marian", "text2text-generation", "tr", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-tr-en
6,901
9
transformers
796
---
tags:
- translation
license: apache-2.0
---

### opus-mt-tr-en

* source languages: tr
* target languages: en
* OPUS readme: [tr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset                  | BLEU | chr-F |
|--------------------------|------|-------|
| newsdev2016-entr.tr.en   | 27.6 | 0.548 |
| newstest2016-entr.tr.en  | 25.2 | 0.532 |
| newstest2017-entr.tr.en  | 24.7 | 0.530 |
| newstest2018-entr.tr.en  | 27.0 | 0.547 |
| Tatoeba.tr.en            | 63.5 | 0.760 |
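## Usage sketch

The card lists benchmarks but no usage snippet; below is a minimal translation sketch, assuming the standard Marian classes in `transformers`. The checkpoint name comes from this record; the Turkish input sentence is illustrative and not from the original card.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an illustrative Turkish sentence ("Machine translation is very useful.") into English.
src_text = ["Makine çevirisi çok kullanışlıdır."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```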
google/bigbird-pegasus-large-bigpatent
623321f538339e475269fdf79a258a5a7b796f4c
2021-06-03T18:26:21.000Z
[ "pytorch", "bigbird_pegasus", "text2text-generation", "en", "dataset:big_patent", "arxiv:2007.14062", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
google
null
google/bigbird-pegasus-large-bigpatent
6,873
7
transformers
797
---
language: en
license: apache-2.0
datasets:
- big_patent
tags:
- summarization
---

# BigBirdPegasus model (large)

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.

## How to use

Here is how to use this model to summarize a given text in PyTorch:

```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-bigpatent")

# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent")

# decoder attention type can't be changed & will be "original_full"
# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```

## Training Procedure

This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on the [big_patent](https://huggingface.co/datasets/big_patent) dataset.

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
dmis-lab/biobert-large-cased-v1.1-squad
2b17f30cda1efcbe0d6ab3b977856c7898f934b1
2021-05-19T16:01:47.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
dmis-lab
null
dmis-lab/biobert-large-cased-v1.1-squad
6,856
2
transformers
798
Entry not found
naver-clova-ocr/bros-base-uncased
0f0e83a58cde75af72e331e6a018cd5bc7ccab31
2022-04-05T13:56:46.000Z
[ "pytorch", "bros", "arxiv:2108.04539", "transformers" ]
null
false
naver-clova-ocr
null
naver-clova-ocr/bros-base-uncased
6,843
1
transformers
799
# BROS

GitHub: https://github.com/clovaai/bros

## Introduction

BROS (BERT Relying On Spatiality) is a pre-trained language model focusing on text and layout for better key information extraction from documents.<br>
Given the OCR results of the document image, which are text and bounding box pairs, it can perform various key information extraction tasks, such as extracting an ordered item list from receipts.<br>
For more details, please refer to our paper:

BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents<br>
Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park<br>
AAAI 2022 - Main Technical Track

[[arXiv]](https://arxiv.org/abs/2108.04539)

## Pre-trained models

| name                         | # params | Hugging Face - Models                                                                            |
|------------------------------|---------:|--------------------------------------------------------------------------------------------------|
| bros-base-uncased (**this**) |   < 110M | [naver-clova-ocr/bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased)     |
| bros-large-uncased           |   < 340M | [naver-clova-ocr/bros-large-uncased](https://huggingface.co/naver-clova-ocr/bros-large-uncased)   |
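## Loading sketch (assumed API)

The card above stops at the checkpoint table; the snippet below is only a loading sketch. It assumes the `bros` package from the GitHub repository linked above, and the class names `BrosTokenizer`/`BrosModel` are an assumption based on that repository rather than something stated in this card, so consult the repository README for the authoritative usage.

```python
# Assumed API: the `bros` package from https://github.com/clovaai/bros is expected to
# expose BrosTokenizer and BrosModel; these class names are an assumption, not taken
# from this card.
from bros import BrosTokenizer, BrosModel

tokenizer = BrosTokenizer.from_pretrained("naver-clova-ocr/bros-base-uncased")
model = BrosModel.from_pretrained("naver-clova-ocr/bros-base-uncased")

# A forward pass additionally needs per-token OCR bounding boxes; see the repository's
# data-preparation scripts for the expected bbox format.
print(model.config)
```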