Dataset columns (name, dtype, observed length/value statistics):

| Column | Dtype | Length / Values |
|---|---|---|
| `modelId` | string | length 4–112 |
| `sha` | string | length 40 |
| `lastModified` | string | length 24 |
| `tags` | sequence | — |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2–38 |
| `config` | null | — |
| `id` | string | length 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | length 0–186k |
TransQuest/monotransquest-da-multilingual
cd947f301588992a749d22fc867e535bc9cb1703
2021-06-03T19:06:25.000Z
[ "pytorch", "xlm-roberta", "text-classification", "multilingual-multilingual", "transformers", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0" ]
text-classification
false
TransQuest
null
TransQuest/monotransquest-da-multilingual
3,818
null
transformers
1,000
--- language: multilingual-multilingual tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi on all the language pairs we experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-multilingual", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
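The card above notes that QE scores can be used to select the best translation when several engines are available. The sketch below reuses the `MonoTransQuestModel.predict` API shown in the usage section to rank candidate translations; the candidate sentences are illustrative, and it assumes `predict` returns one score per input pair, as the single-pair example suggests.

```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

# Illustrative source sentence and candidate translations from two hypothetical MT engines
source = "Reducerea acestor conflicte este importantă pentru conservare."
candidates = [
    "Reducing these conflicts is important for conservation.",
    "Reducing these conflicts is not important for preservation.",
]

model = MonoTransQuestModel(
    "xlmroberta",
    "TransQuest/monotransquest-da-multilingual",
    num_labels=1,
    use_cuda=torch.cuda.is_available(),
)

# Score every (source, candidate) pair; higher direct-assessment scores indicate better quality
predictions, _ = model.predict([[source, candidate] for candidate in candidates])

best_idx = max(range(len(candidates)), key=lambda i: float(predictions[i]))
print(predictions)
print("Best candidate:", candidates[best_idx])
```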
eugenesiow/bart-paraphrase
561b9d9631d608b8c63c01ecb64b5f030cabdd73
2021-09-13T10:02:50.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:quora", "dataset:paws", "arxiv:1910.13461", "transformers", "paraphrase", "seq2seq", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
eugenesiow
null
eugenesiow/bart-paraphrase
3,805
3
transformers
1,001
--- language: en license: apache-2.0 tags: - transformers - bart - paraphrase - seq2seq datasets: - quora - paws --- # BART Paraphrase Model (Large) A large BART seq2seq (text2text generation) model fine-tuned on 3 paraphrase datasets. ## Model description The BART model was proposed in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. (2019). - Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). - The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. - BART is particularly effective when fine tuned for text generation. This model is fine-tuned on 3 paraphrase datasets (Quora, PAWS and MSR paraphrase corpus). The original BART code is from this [repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). ## Intended uses & limitations You can use the pre-trained model for paraphrasing an input sentence. ### How to use ```python import torch from transformers import BartForConditionalGeneration, BartTokenizer input_sentence = "They were there to enjoy us and they were there to pray for us." model = BartForConditionalGeneration.from_pretrained('eugenesiow/bart-paraphrase') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) tokenizer = BartTokenizer.from_pretrained('eugenesiow/bart-paraphrase') batch = tokenizer(input_sentence, return_tensors='pt') generated_ids = model.generate(batch['input_ids']) generated_sentence = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(generated_sentence) ``` ### Output ``` ['They were there to enjoy us and to pray for us.'] ``` ## Training data The model was fine-tuned on a pretrained [`facebook/bart-large`](https://huggingface.co/facebook/bart-large), using the [Quora](https://huggingface.co/datasets/quora), [PAWS](https://huggingface.co/datasets/paws) and [MSR paraphrase corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398). ## Training procedure We follow the training procedure provided in the [simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers) seq2seq [example](https://github.com/ThilinaRajapakse/simpletransformers/blob/master/examples/seq2seq/paraphrasing/train.py). ## BibTeX entry and citation info ```bibtex @misc{lewis2019bart, title={BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, author={Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Ves Stoyanov and Luke Zettlemoyer}, year={2019}, eprint={1910.13461}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
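The usage snippet above returns a single paraphrase with default decoding. A minimal sketch of getting several alternative paraphrases via beam search follows; the `num_beams`, `num_return_sequences` and `max_length` values are illustrative choices, not settings from the card.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = 'eugenesiow/bart-paraphrase'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name).to(device)

input_sentence = "They were there to enjoy us and they were there to pray for us."
batch = tokenizer(input_sentence, return_tensors='pt').to(device)

# Beam search with several returned sequences yields alternative paraphrases
generated_ids = model.generate(
    batch['input_ids'],
    num_beams=5,
    num_return_sequences=3,
    max_length=64,
)
for paraphrase in tokenizer.batch_decode(generated_ids, skip_special_tokens=True):
    print(paraphrase)
```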
camembert/camembert-large
df7dbf53dd70551faa6b4ec45deb4a566445c7cc
2020-12-11T21:35:25.000Z
[ "pytorch", "camembert", "fr", "arxiv:1911.03894", "transformers" ]
null
false
camembert
null
camembert/camembert-large
3,801
4
transformers
1,002
--- language: fr --- # CamemBERT: a Tasty French Language Model ## Introduction [CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to [Camembert Website](https://camembert-model.fr/) ## Pre-trained models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `camembert-base` | 110M | Base | OSCAR (138 GB of text) | | `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) | | `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) | | `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) | | `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) | | `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) | ## How to use CamemBERT with HuggingFace ##### Load CamemBERT and its sub-word tokenizer : ```python from transformers import CamembertModel, CamembertTokenizer # You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large". tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-large") camembert = CamembertModel.from_pretrained("camembert/camembert-large") camembert.eval() # disable dropout (or leave in train mode to finetune) ``` ##### Filling masks using pipeline ```python from transformers import pipeline camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-large", tokenizer="camembert/camembert-large") results = camembert_fill_mask("Le camembert est <mask> :)") # results #[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.15560828149318695, 'token': 305}, #{'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06821336597204208, 'token': 3497}, #{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.060438305139541626, 'token': 11661}, #{'sequence': '<s> Le camembert est ici :)</s>', 'score': 0.02023460529744625, 'token': 373}, #{'sequence': '<s> Le camembert est meilleur :)</s>', 'score': 0.01778135634958744, 'token': 876}] ``` ##### Extract contextual embedding features from Camembert output ```python import torch # Tokenize in sub-words with SentencePiece tokenized_sentence = tokenizer.tokenize("J'aime le camembert !") # ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!'] # 1-hot encode and add special starting and end tokens encoded_sentence = tokenizer.encode(tokenized_sentence) # [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6] # NB: Can be done in one step : tokenize.encode("J'aime le camembert !") # Feed tokens to Camembert as a torch tensor (batch dim 1) encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0) embeddings, _ = camembert(encoded_sentence) # embeddings.detach() # torch.Size([1, 10, 1024]) #tensor([[[-0.1284, 0.2643, 0.4374, ..., 0.1627, 0.1308, -0.2305], # [ 0.4576, -0.6345, -0.2029, ..., -0.1359, -0.2290, -0.6318], # [ 0.0381, 0.0429, 0.5111, ..., -0.1177, -0.1913, -0.1121], # ..., ``` ##### Extract contextual embedding features from all Camembert layers ```python from transformers import CamembertConfig # (Need to reload the model with new config) config = CamembertConfig.from_pretrained("camembert/camembert-large", output_hidden_states=True) camembert = 
CamembertModel.from_pretrained("camembert/camembert-large", config=config) embeddings, _, all_layer_embeddings = camembert(encoded_sentence) # all_layer_embeddings list of len(all_layer_embeddings) == 25 (input embedding layer + 24 self attention layers) all_layer_embeddings[5] # layer 5 contextual embedding : size torch.Size([1, 10, 1024]) #tensor([[[-0.0600, 0.0742, 0.0332, ..., -0.0525, -0.0637, -0.0287], # [ 0.0950, 0.2840, 0.1985, ..., 0.2073, -0.2172, -0.6321], # [ 0.1381, 0.1872, 0.1614, ..., -0.0339, -0.2530, -0.1182], # ..., ``` ## Authors CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. ## Citation If you use our work, please cite: ```bibtex @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ```
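The snippets above extract token-level embeddings; a sentence-level vector is often obtained by averaging them. Below is a minimal sketch of that heuristic for the large checkpoint; the mean-pooling step and the example sentences are not part of the original card, and indexing the model output with `[0]` assumes the first element is the last hidden state, as in the snippets above.

```python
import torch
from transformers import CamembertModel, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-large")
camembert = CamembertModel.from_pretrained("camembert/camembert-large")
camembert.eval()  # disable dropout

sentences = ["J'aime le camembert !", "Le camembert est délicieux."]

with torch.no_grad():
    vectors = []
    for sentence in sentences:
        # Encode with special tokens and add a batch dimension (batch size 1)
        encoded = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0)
        token_embeddings = camembert(encoded)[0]          # token-level embeddings
        vectors.append(token_embeddings.mean(dim=1))      # average over the token dimension

sentence_embeddings = torch.cat(vectors)  # shape: (2, 1024)
print(sentence_embeddings.shape)
```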
sentence-transformers/msmarco-MiniLM-L-6-v3
195276c0c8647b99dfe128bd8bc4ecd1a66d41f8
2022-06-15T21:52:00.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/msmarco-MiniLM-L-6-v3
3,781
3
sentence-transformers
1,003
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/msmarco-MiniLM-L-6-v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L-6-v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-MiniLM-L-6-v3') model = AutoModel.from_pretrained('sentence-transformers/msmarco-MiniLM-L-6-v3') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-MiniLM-L-6-v3) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
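Since this checkpoint was trained on MS MARCO passage-ranking data, a typical use is scoring candidate passages against a query. A minimal semantic-search sketch with the `sentence_transformers.util` cosine-similarity helper follows; the query and passages are illustrative and not from the card.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L-6-v3')

# Illustrative query and candidate passages
query = "How big is London?"
passages = [
    "London has 9,787,426 inhabitants at the 2011 census.",
    "The United Kingdom consists of four countries.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

# Cosine similarity between the query and each passage; higher means more relevant
scores = util.pytorch_cos_sim(query_emb, passage_emb)[0]
for passage, score in zip(passages, scores):
    print(f"{score:.4f}\t{passage}")
```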
microsoft/wavlm-base-plus
4c66d4806a428f2e922ccfa1a962776e232d487b
2021-12-22T17:23:24.000Z
[ "pytorch", "wavlm", "feature-extraction", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.13900", "transformers", "speech" ]
feature-extraction
false
microsoft
null
microsoft/wavlm-base-plus
3,775
2
transformers
1,004
--- language: - en datasets: tags: - speech inference: false --- # WavLM-Base-Plus [Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm) The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. The model was pre-trained on: - 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875) - 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909) - 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390) [Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei **Abstract** *Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.* The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm. # Usage This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on the [SUPERB benchmark](https://superbbenchmark.org/). **Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence of phonemes before fine-tuning. ## Speech Recognition To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition). ## Speech Classification To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification). 
## Speaker Verification TODO ## Speaker Diarization TODO # Contribution The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten). # License The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE) ![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
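As a sketch of what the pre-trained checkpoint produces before any fine-tuning, the snippet below runs a dummy 16 kHz waveform through the model and inspects the frame-level hidden states. It assumes the checkpoint ships a feature-extractor config loadable via `AutoFeatureExtractor`; the dummy audio and the printed shape are illustrative.

```python
import torch
from transformers import AutoFeatureExtractor, WavLMModel

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")

# One second of silent dummy audio at 16 kHz stands in for a real recording
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Frame-level representations that a fine-tuned head (CTC, classifier, ...) would consume
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 49, 768])
```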
KoboldAI/GPT-J-6B-Adventure
e2c00dc99f986f2430f5d34c0214969cee786755
2021-12-24T19:32:09.000Z
[ "pytorch", "gptj", "text-generation", "transformers" ]
text-generation
false
KoboldAI
null
KoboldAI/GPT-J-6B-Adventure
3,772
2
transformers
1,005
Entry not found
flair/ner-dutch
16f9e2a2e2c6b739c723b81a8d72a923f4e46b0a
2021-03-02T22:03:57.000Z
[ "pytorch", "nl", "dataset:conll2003", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-dutch
3,769
null
flair
1,006
--- tags: - flair - token-classification - sequence-tagger-model language: nl datasets: - conll2003 widget: - text: "George Washington ging naar Washington." --- # Dutch NER in Flair (default model) This is the standard 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,58** (CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on Transformer embeddings and LSTM-CRF. --- # Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-dutch") # make example sentence sentence = Sentence("George Washington ging naar Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.997)] Span [5]: "Washington" [− Labels: LOC (0.9996)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03_DUTCH from flair.embeddings import TransformerWordEmbeddings from flair.models import SequenceTagger from flair.trainers import ModelTrainer # 1. get the corpus corpus: Corpus = CONLL_03_DUTCH() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize embeddings embeddings = TransformerWordEmbeddings('wietsedv/bert-base-dutch-cased') # 5. initialize sequence tagger tagger: SequenceTagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer trainer: ModelTrainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-dutch', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik-etal-2019-flair, title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}", author = "Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)", year = "2019", url = "https://www.aclweb.org/anthology/N19-4010", pages = "54--59", } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
PlanTL-GOB-ES/roberta-large-bne-sqac
49f9afb2bf305084e1c8c61046369123a60bd0c5
2022-04-06T14:43:56.000Z
[ "pytorch", "roberta", "question-answering", "es", "dataset:PlanTL-GOB-ES/SQAC", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "qa", "question answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
PlanTL-GOB-ES
null
PlanTL-GOB-ES/roberta-large-bne-sqac
3,764
2
transformers
1,007
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "qa" - "question answering" datasets: - "PlanTL-GOB-ES/SQAC" metrics: - "f1" --- # Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset. RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne ## Dataset The dataset used is the [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC). ## Evaluation and results F1 Score: 0.7993 (average of 5 runs). For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). ## Citing Check out our paper for all the details: https://arxiv.org/abs/2107.07253 ``` @article{gutierrezfandino2022, author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas}, title = {MarIA: Spanish Language Models}, journal = {Procesamiento del Lenguaje Natural}, volume = {68}, number = {0}, year = {2022}, issn = {1989-7553}, url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405}, pages = {39--60} } ``` ## Funding This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. 
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
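The card above describes the fine-tuned QA model and its evaluation but does not show inference code. A minimal sketch with the generic `transformers` question-answering pipeline follows; the question and context are illustrative, not taken from the SQAC dataset.

```python
from transformers import pipeline

# Generic extractive-QA pipeline loaded with this checkpoint
qa = pipeline(
    "question-answering",
    model="PlanTL-GOB-ES/roberta-large-bne-sqac",
    tokenizer="PlanTL-GOB-ES/roberta-large-bne-sqac",
)

result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Manuel y vivo en Sevilla.",
)
print(result)  # expected form: {'score': ..., 'start': ..., 'end': ..., 'answer': 'Sevilla'}
```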
mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
99818221720ac345078458b0b0489d61b21fe137
2021-05-20T00:22:53.000Z
[ "pytorch", "jax", "bert", "question-answering", "es", "transformers", "autotrain_compatible" ]
question-answering
false
mrm8488
null
mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
3,764
1
transformers
1,008
--- language: es thumbnail: https://i.imgur.com/jgBdimh.png --- # BETO (Spanish BERT) + Spanish SQuAD2.0 This model is provided by [BETO team](https://github.com/dccuchile/beto) and fine-tuned on [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve) for the **Q&A** downstream task. ## Details of the language model ('dccuchile/bert-base-spanish-wwm-cased') Language model ([**'dccuchile/bert-base-spanish-wwm-cased'**](https://github.com/dccuchile/beto/blob/master/README.md)): BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models. ## Details of the downstream task (Q&A) - Dataset [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve) | Dataset | # Q&A | | ---------------------- | ----- | | SQuAD2.0 Train | 130 K | | SQuAD2.0-es-v2.0 | 111 K | | SQuAD2.0 Dev | 12 K | | SQuAD-es-v2.0-small Dev| 69 K | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash export SQUAD_DIR=path/to/nl_squad python transformers/examples/question-answering/run_squad.py \ --model_type bert \ --model_name_or_path dccuchile/bert-base-spanish-wwm-cased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train_nl-v2.0.json \ --predict_file $SQUAD_DIR/dev_nl-v2.0.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/model_output \ --save_steps 5000 \ --threads 4 \ --version_2_with_negative ``` ## Results: | Metric | # Value | | ---------------------- | ----- | | **Exact** | **76.5050** | | **F1** | **86.0781** | ```json { "exact": 76.50501430594491, "f1": 86.07818773108252, "total": 69202, "HasAns_exact": 67.93020719738277, "HasAns_f1": 82.37912207996466, "HasAns_total": 45850, "NoAns_exact": 93.34104145255225, "NoAns_f1": 93.34104145255225, "NoAns_total": 23352, "best_exact": 76.51223953064941, "best_exact_thresh": 0.0, "best_f1": 86.08541295578848, "best_f1_thresh": 0.0 } ``` ### Model in action (in a Colab Notebook) <details> 1. Set the context and ask some questions: ![Set context and questions](https://media.giphy.com/media/mCIaBpfN0LQcuzkA2F/giphy.gif) 2. Run predictions: ![Run the model](https://media.giphy.com/media/WT453aptcbCP7hxWTZ/giphy.gif) </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
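The "model in action" section above only links animated screenshots. A minimal sketch of the same flow with the `transformers` question-answering pipeline follows; the context and question are illustrative, not the ones shown in the GIFs.

```python
from transformers import pipeline

# Extractive QA in Spanish with this fine-tuned BETO checkpoint
qa = pipeline(
    "question-answering",
    model="mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
    tokenizer="mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
)

print(qa(
    question="¿Quién creó el modelo?",
    context="El modelo fue creado por Manuel Romero y está alojado en Hugging Face.",
))
```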
Salesforce/codet5-base-multi-sum
4c34d0047a64ff95973d49d2cc0e61ae37fc2cd0
2021-11-23T09:54:43.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:code_search_net", "arxiv:2109.00859", "arxiv:1909.09436", "arxiv:1907.11692", "arxiv:2002.08155", "transformers", "codet5", "license:bsd-3", "autotrain_compatible" ]
text2text-generation
false
Salesforce
null
Salesforce/codet5-base-multi-sum
3,753
6
transformers
1,009
--- license: BSD-3 tags: - codet5 datasets: - code_search_net inference: true --- # CodeT5-base for Code Summarization [CodeT5-base](https://huggingface.co/Salesforce/codet5-base) model fine-tuned on CodeSearchNet data in a multi-lingual training setting ( Ruby/JavaScript/Go/Python/Java/PHP) for code summarization. It was introduced in this EMNLP 2021 paper [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/abs/2109.00859) by Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi. Please check out more at [this repository](https://github.com/salesforce/CodeT5). ## How to use Here is how to use this model: ```python from transformers import RobertaTokenizer, T5ForConditionalGeneration if __name__ == '__main__': tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base-multi-sum') model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base-multi-sum') text = """def svg_to_image(string, size=None): if isinstance(string, unicode): string = string.encode('utf-8') renderer = QtSvg.QSvgRenderer(QtCore.QByteArray(string)) if not renderer.isValid(): raise ValueError('Invalid SVG data.') if size is None: size = renderer.defaultSize() image = QtGui.QImage(size, QtGui.QImage.Format_ARGB32) painter = QtGui.QPainter(image) renderer.render(painter) return image""" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=20) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) # this prints: "Convert a SVG string to a QImage." ``` ## Fine-tuning data We employ the filtered version of CodeSearchNet data [[Husain et al., 2019](https://arxiv.org/abs/1909.09436)] from [CodeXGLUE](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text) benchmark for fine-tuning on code summarization. The data is tokenized with our pre-trained code-specific BPE (Byte-Pair Encoding) tokenizer. One can prepare text (or code) for the model using RobertaTokenizer with the vocab files from [codet5-base](https://huggingface.co/Salesforce/codet5-base). ### Data statistic | Programming Language | Training | Dev | Test | | :------------------- | :------: | :----: | :----: | | Python | 251,820 | 13,914 | 14,918 | | PHP | 241,241 | 12,982 | 14,014 | | Go | 167,288 | 7,325 | 8,122 | | Java | 164,923 | 5,183 | 10,955 | | JavaScript | 58,025 | 3,885 | 3,291 | | Ruby | 24,927 | 1,400 | 1,261 | ## Training procedure We fine-tune codet5-base on these six programming languages (Ruby/JavaScript/Go/Python/Java/PHP) in the multi-task learning setting. We employ the balanced sampling to avoid biasing towards high-resource tasks. Please refer to the [paper](https://arxiv.org/abs/2109.00859) for more details. ## Evaluation results Unlike the paper allowing to select different best checkpoints for different programming languages (PLs), here we employ one checkpoint for all PLs. Besides, we remove the task control prefix to specify the PL in training and inference. 
The results on the test set are shown as below: | Model | Ruby | Javascript | Go | Python | Java | PHP | Overall | | ----------- | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: | :-------: | | Seq2Seq | 9.64 | 10.21 | 13.98 | 15.93 | 15.09 | 21.08 | 14.32 | | Transformer | 11.18 | 11.59 | 16.38 | 15.81 | 16.26 | 22.12 | 15.56 | | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02 | 16.57 | | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 12.16 | 14.90 | 18.07 | 19.06 | 17.65 | 25.16 | 17.83 | | [PLBART](https://aclanthology.org/2021.naacl-main.211.pdf) | 14.11 |15.56 | 18.91 | 19.30 | 18.45 | 23.58 | 18.32 | | [CodeT5-small](https://arxiv.org/abs/2109.00859) |14.87 | 15.32 | 19.25 | 20.04 | 19.92 | 25.46 | 19.14 | | [CodeT5-base](https://arxiv.org/abs/2109.00859) | **15.24** | 16.16 | 19.56 | 20.01 | **20.31** | 26.03 | 19.55 | | [CodeT5-base-multi-sum](https://arxiv.org/abs/2109.00859) | **15.24** | **16.18** | **19.95** | **20.42** | 20.26 | **26.10** | **19.69** | ## Citation ```bibtex @inproceedings{ wang2021codet5, title={CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation}, author={Yue Wang, Weishi Wang, Shafiq Joty, Steven C.H. Hoi}, booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021}, year={2021}, } ```
UBC-NLP/MARBERT
ef5bf8d54e104731fc045d5c76e72af8a23988cf
2022-01-19T20:37:55.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "transformers", "Arabic BERT", "MSA", "Twitter", "Masked Langauge Model", "autotrain_compatible" ]
fill-mask
false
UBC-NLP
null
UBC-NLP/MARBERT
3,747
6
transformers
1,010
--- language: - ar tags: - Arabic BERT - MSA - Twitter - Masked Langauge Model widget: - text: "اللغة العربية هي لغة [MASK]." --- <img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="200" height="200" align="right"/> **MARBERT** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://aclanthology.org/2021.acl-long.551.pdf)**. MARBERT is a large-scale pre-trained masked language model focused on both Dialectal Arabic (DA) and MSA. Arabic has multiple varieties. To train MARBERT, we randomly sample 1B Arabic tweets from a large in-house dataset of about 6B tweets. We only include tweets with at least 3 Arabic words, based on character string matching, regardless whether the tweet has non-Arabic string or not. That is, we do not remove non-Arabic so long as the tweet meets the 3 Arabic word criterion. The dataset makes up **128GB of text** (**15.6B tokens**). We use the same network architecture as ARBERT (BERT-base), but without the next sentence prediction (NSP) objective since tweets are short. See our [repo](https://github.com/UBC-NLP/LMBERT) for modifying BERT code to remove NSP. For more information about MARBERT, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert). # BibTex If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ```bibtex @inproceedings{abdul-mageed-etal-2021-arbert, title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic", author = "Abdul-Mageed, Muhammad and Elmadany, AbdelRahim and Nagoudi, El Moatez Billah", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.551", doi = "10.18653/v1/2021.acl-long.551", pages = "7088--7105", abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). 
Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.", } ``` ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
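MARBERT is a masked language model, so the widget text from the card's metadata can be reproduced with the generic `transformers` fill-mask pipeline. A minimal sketch follows; the pipeline call itself is not part of the original card.

```python
from transformers import pipeline

# Fill-mask with the widget example from the card's metadata
fill_mask = pipeline("fill-mask", model="UBC-NLP/MARBERT")

for prediction in fill_mask("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```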
unicamp-dl/translation-en-pt-t5
8418d7e9b1837687137af06624cb3596b45c9343
2021-10-11T03:47:21.000Z
[ "pytorch", "t5", "text2text-generation", "en", "pt", "dataset:EMEA", "dataset:ParaCrawl 99k", "dataset:CAPES", "dataset:Scielo", "dataset:JRC-Acquis", "dataset:Biomedical Domain Corpora", "transformers", "translation", "autotrain_compatible" ]
translation
false
unicamp-dl
null
unicamp-dl/translation-en-pt-t5
3,743
5
transformers
1,011
--- language: - en - pt datasets: - EMEA - ParaCrawl 99k - CAPES - Scielo - JRC-Acquis - Biomedical Domain Corpora tags: - translation metrics: - bleu --- # Introduction This repository brings an implementation of T5 for translation in EN-PT tasks using a modest hardware setup. We propose some changes to the tokenizer and post-processing that improve the results, and we used a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)! # Usage Just follow "Use in Transformers" instructions. It is necessary to prepend a few words to define the task for T5. You can also create a pipeline for it. An example with the phrase "I like to eat rice" is: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-en-pt-t5") model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-en-pt-t5") enpt_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer) enpt_pipeline("translate English to Portuguese: I like to eat rice.") ``` # Citation ```bibtex @inproceedings{lopes-etal-2020-lite, title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation", author = "Lopes, Alexandre and Nogueira, Rodrigo and Lotufo, Roberto and Pedrini, Helio", booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.wmt-1.90", pages = "833--840", } ```
londogard/flair-swe-ner
f7ec252c72488deafa3cec6e27d9d1e18a3376ca
2021-03-29T08:06:38.000Z
[ "pytorch", "sv", "dataset:SUC 3.0", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
londogard
null
londogard/flair-swe-ner
3,742
null
flair
1,012
--- tags: - flair - token-classification - sequence-tagger-model language: sv datasets: - SUC 3.0 widget: - text: "Hampus bor i Skåne och har levererat denna model idag." --- Published with ❤️ from [londogard](https://londogard.com). ## Swedish NER in Flair (SUC 3.0) F1-Score: **85.6** (SUC 3.0) Predicts 8 tags: |**Tag**|**Meaning**| |---|---| | PRS| person name | | ORG | organisation name| | TME | time unit | | WRK | building name | | LOC | location name | | EVN | event name | | MSR | measurement unit | | OBJ | object (like "Rolls-Royce" is an object in the form of a special car) | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("londogard/flair-swe-ner") # make example sentence sentence = Sentence("Hampus bor i Skåne och har levererat denna model idag.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [0]: "Hampus" [− Labels: PRS (1.0)] Span [3]: "Skåne" [− Labels: LOC (1.0)] Span [9]: "idag" [− Labels: TME(1.0)] ``` So, the entities "_Hampus_" (labeled as a **PRS**), "_Skåne_" (labeled as a **LOC**) and "_idag_" (labeled as a **TME**) are found in the sentence "_Hampus bor i Skåne och har levererat denna model idag._". --- **Please mention londogard if using this model.**
flair/ner-english-ontonotes
4e50d09d85d60fd36e2c78175d4e405b1e3caa8c
2021-03-02T22:07:31.000Z
[ "pytorch", "en", "dataset:ontonotes", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-english-ontonotes
3,728
1
flair
1,013
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - ontonotes widget: - text: "On September 1st George Washington won 1 dollar." --- ## English NER in Flair (Ontonotes default model) This is the 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **89.27** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes") # make example sentence sentence = Sentence("On September 1st George Washington won 1 dollar.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (0.8824)] Span [4,5]: "George Washington" [− Labels: PERSON (0.9604)] Span [7,8]: "1 dollar" [− Labels: MONEY (0.9837)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George Washington*" (labeled as a **person**) and "*1 dollar*" (labeled as a **money**) are found in the sentence "*On September 1st George Washington won 1 dollar*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('en-crawl'), # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. 
run training trainer.train('resources/taggers/ner-english-ontonotes', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
flair/upos-multi
236615d6d0770325a1870c2659899e098cf71953
2021-03-02T22:16:39.000Z
[ "pytorch", "en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", "no", "fi", "cs", "dataset:ontonotes", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/upos-multi
3,707
3
flair
1,014
--- tags: - flair - token-classification - sequence-tagger-model language: - en - de - fr - it - nl - pl - es - sv - da - no - fi - cs datasets: - ontonotes widget: - text: "Ich liebe Berlin, as they say" --- ## Multilingual Universal Part-of-Speech Tagging in Flair (default model) This is the default multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **98,47** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-multi") # make example sentence sentence = Sentence("Ich liebe Berlin, as they say. ") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "Ich" [− Labels: PRON (0.9999)] Span [2]: "liebe" [− Labels: VERB (0.9999)] Span [3]: "Berlin" [− Labels: PROPN (0.9997)] Span [4]: "," [− Labels: PUNCT (1.0)] Span [5]: "as" [− Labels: SCONJ (0.9991)] Span [6]: "they" [− Labels: PRON (0.9998)] Span [7]: "say" [− Labels: VERB (0.9998)] Span [8]: "." [− Labels: PUNCT (1.0)] ``` So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import MultiCorpus from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \ UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH from flair.embeddings import StackedEmbeddings, FlairEmbeddings # 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large) corpus = MultiCorpus([ UD_ENGLISH(in_memory=False), UD_GERMAN(in_memory=False), UD_DUTCH(in_memory=False), UD_FRENCH(in_memory=False), UD_ITALIAN(in_memory=False), UD_SPANISH(in_memory=False), UD_POLISH(in_memory=False), UD_CZECH(in_memory=False), UD_DANISH(in_memory=False), UD_SWEDISH(in_memory=False), UD_NORWEGIAN(in_memory=False), UD_FINNISH(in_memory=False), ]) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. 
initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('multi-forward'), # contextual string embeddings, backward FlairEmbeddings('multi-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=False) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-multi', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
dumitrescustefan/bert-base-romanian-cased-v1
9718c77b8a4f402f3d2a9202e9c918f7fdcdcceb
2021-11-02T15:25:55.000Z
[ "pytorch", "jax", "bert", "ro", "transformers" ]
null
false
dumitrescustefan
null
dumitrescustefan/bert-base-romanian-cased-v1
3,685
4
transformers
1,015
--- language: ro --- # bert-base-romanian-cased-v1 The BERT **base**, **cased** model for Romanian, trained on a 15GB corpus, version ![v1.0](https://img.shields.io/badge/v1.0-21%20Apr%202020-ff6666) ### How to use ```python from transformers import AutoTokenizer, AutoModel import torch # load tokenizer and model tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1") model = AutoModel.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1") # tokenize a sentence and run through the model input_ids = torch.tensor(tokenizer.encode("Acesta este un test.", add_special_tokens=True)).unsqueeze(0) # Batch size 1 outputs = model(input_ids) # get encoding last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple ``` Remember to always sanitize your text! Replace ``s`` and ``t`` cedilla-letters to comma-letters with : ``` text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș") ``` because the model was **NOT** trained on cedilla ``s`` and ``t``s. If you don't, you will have decreased performance due to <UNK>s and increased number of tokens per word. ### Evaluation Evaluation is performed on Universal Dependencies [Romanian RRT](https://universaldependencies.org/treebanks/ro_rrt/index.html) UPOS, XPOS and LAS, and on a NER task based on [RONEC](https://github.com/dumitrescustefan/ronec). Details, as well as more in-depth tests not shown here, are given in the dedicated [evaluation page](https://github.com/dumitrescustefan/Romanian-Transformers/tree/master/evaluation/README.md). The baseline is the [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) model ``bert-base-multilingual-(un)cased``, as at the time of writing it was the only available BERT model that works on Romanian. | Model | UPOS | XPOS | NER | LAS | |--------------------------------|:-----:|:------:|:-----:|:-----:| | bert-base-multilingual-cased | 97.87 | 96.16 | 84.13 | 88.04 | | bert-base-romanian-cased-v1 | **98.00** | **96.46** | **85.88** | **89.69** | ### Corpus The model is trained on the following corpora (stats in the table below are after cleaning): | Corpus | Lines(M) | Words(M) | Chars(B) | Size(GB) | |----------- |:--------: |:--------: |:--------: |:--------: | | OPUS | 55.05 | 635.04 | 4.045 | 3.8 | | OSCAR | 33.56 | 1725.82 | 11.411 | 11 | | Wikipedia | 1.54 | 60.47 | 0.411 | 0.4 | | **Total** | **90.15** | **2421.33** | **15.867** | **15.2** | #### Acknowledgements - We'd like to thank [Sampo Pyysalo](https://github.com/spyysalo) from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
Norod78/hebrew-bad_wiki-gpt_neo-tiny
a71dae1355352449475f8cb3066e85533197603e
2022-07-19T18:11:08.000Z
[ "pytorch", "gpt_neo", "text-generation", "he", "arxiv:1910.09700", "arxiv:2105.09680", "transformers", "license:mit" ]
text-generation
false
Norod78
null
Norod78/hebrew-bad_wiki-gpt_neo-tiny
3,683
null
transformers
1,016
--- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "מתמטיקה:" - text: "עליית המכונות" - text: "ויקיפדיה העברית" - text: "האירוויזיון הוא" - text: "דוד בן-גוריון היה" license: mit --- # hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - **Developed by:** [Doron Adler](https://github.com/Norod) - **Model Type:** Text Generation - **Language(s):** Hebrew - **License:** MIT - **Resources for more information:** - [GitHub Repo](https://github.com/Norod/hebrew-gpt_neo) - [HuggingFace Space](https://huggingface.co/spaces/Norod78/Hebrew-GPT-Neo-Small) ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data [Hebrew Wikipedia Dump](https://dumps.wikimedia.org/hewiki/latest/) (hewiki abstract) from May 2020 #### Training Procedure This model was fined tuned upon [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny) which was previously trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning on the wiki-absract text was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen). ## Evaluation #### Configs Model configs for the hebrew-gpt_neo-tiny is available on the [hebrew-gpt_neo model github](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) * **Activation Function:** gelu * **Number_Head:** 12 * **Number_Vocab:** 50257 * **Train batch size:** 250 * **Eval batch size:** 64 * **Predict batch size:** 1 ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). - **Hardware Type:** [More information needed] - **Hours used:** Unknown - **Cloud Provider:** GCP tpu-v8s - **Compute Region:** europe-west4 - **Carbon Emitted:** [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) ​​ ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") ```
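The snippet above only loads the tokenizer and model. As a rough illustration of how they could be used, here is a minimal generation sketch: the prompt is taken from the card's widget examples, while the sampling settings (`max_length`, `top_k`, `top_p`) are illustrative assumptions rather than values recommended by the model author.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")

# prompt taken from the card's widget examples
prompt = "מתמטיקה:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# sampling settings below are illustrative assumptions
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```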
rabindralamsal/BERTsent
9514b1314be823ab18e320b361247ffcd94e8d83
2022-07-01T03:51:37.000Z
[ "pytorch", "tf", "roberta", "text-classification", "arxiv:2206.10471", "transformers" ]
text-classification
false
rabindralamsal
null
rabindralamsal/BERTsent
3,681
2
transformers
1,017
# Sentiment Analysis of English Tweets (including COVID-19-specific tweets) with BERTsent **BERTsent**: A finetuned **BERT** based **sent**iment classifier for English language tweets. BERTsent is trained with SemEval 2017 corpus (39k plus tweets) and is based on [bertweet-base](https://github.com/VinAIResearch/BERTweet) that was trained on 850M English Tweets (cased) and additional 23M COVID-19 English Tweets (cased). The base model used [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure. Output labels: - 0 represents "negative" sentiment - 1 represents "neutral" sentiment - 2 represents "positive" sentiment ## COVID-19 tweets specific task Eg., The model distinguishes: "covid" -> neutral sentiment, "I have covid" -> negative sentiment ## Cite If you use BERTsent in your project/research, please cite the following article: Lamsal, R., Harwood, A., & Read, M. R. (2022). [Twitter conversations predict the daily confirmed COVID-19 cases](https://arxiv.org/abs/2206.10471). arXiv preprint arXiv:2206.10471. @article{lamsal2022twitter, &nbsp;&nbsp;title={Twitter conversations predict the daily confirmed COVID-19 cases}, &nbsp;&nbsp;author={Lamsal, Rabindra and Harwood, Aaron and Read, Maria Rodriguez}, &nbsp;&nbsp;journal={arXiv preprint arXiv:2206.10471}, &nbsp;&nbsp;year={2022} } ## Using the model Install transformers and emoji, if already not installed: terminal: pip install transformers pip install emoji (for converting emoticons or emojis into text) notebooks (Colab, Kaggle): !pip install transformers !pip install emoji Import BERTsent from the transformers library: from transformers import AutoTokenizer, TFAutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("rabindralamsal/finetuned-bertweet-sentiment-analysis") model = TFAutoModelForSequenceClassification.from_pretrained("rabindralamsal/finetuned-bertweet-sentiment-analysis") Import TensorFlow and numpy: import tensorflow as tf import numpy as np We have installed and imported everything that's needed for the sentiment analysis. Let's predict sentiment of an example tweet: example_tweet = "The NEET exams show our Govt in a poor light: unresponsiveness to genuine concerns; admit cards not delivered to aspirants in time; failure to provide centres in towns they reside, thus requiring unnecessary & risky travels. What a disgrace to treat our #Covid warriors like this!" #this tweet resides on Twitter with an identifier-1435793872588738560 input = tokenizer.encode(example_tweet, return_tensors="tf") output = model.predict(input)[0] prediction = tf.nn.softmax(output, axis=1).numpy() sentiment = np.argmax(prediction) print(prediction) print(sentiment) Output: [[0.972672164440155 0.023684727028012276 0.003643065458163619]] 0
Langboat/mengzi-bert-base
a685cb1101fb1ea116e8432b2e14042194e4738b
2021-10-14T09:01:34.000Z
[ "pytorch", "bert", "fill-mask", "zh", "arxiv:2110.06696", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Langboat
null
Langboat/mengzi-bert-base
3,670
15
transformers
1,018
--- language: - zh license: apache-2.0 widget: - text: "生活的真谛是[MASK]。" --- # Mengzi-BERT base model (Chinese) Pretrained model on 300G Chinese corpus. Masked language modeling(MLM), part-of-speech(POS) tagging and sentence order prediction(SOP) are used as training task. [Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](https://arxiv.org/abs/2110.06696) ## Usage ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base") model = BertModel.from_pretrained("Langboat/mengzi-bert-base") ``` ## Scores on nine chinese tasks (without any data augmentation) | Model | AFQMC | TNEWS | IFLYTEK | CMNLI | WSC | CSL | CMRC2018 | C3 | CHID | |-|-|-|-|-|-|-|-|-|-| |RoBERTa-wwm-ext| 74.30 | 57.51 | 60.80 | 80.70 | 67.20 | 80.67 | 77.59 | 67.06 | 83.78 | |Mengzi-BERT-base| 74.58 | 57.97 | 60.68 | 82.12 | 87.50 | 85.40 | 78.54 | 71.70 | 84.16 | RoBERTa-wwm-ext scores are from CLUE baseline ## Citation If you find the technical report or resource is useful, please cite the following technical report in your paper. ``` @misc{zhang2021mengzi, title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese}, author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou}, year={2021}, eprint={2110.06696}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
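For masked-word prediction (the task shown in the widget above), a minimal sketch using the `fill-mask` pipeline could look like this; the example sentence is the one from the card's widget.

```python
from transformers import pipeline

# fill-mask pipeline built on Langboat/mengzi-bert-base
fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base")

# example sentence from the card's widget
print(fill_mask("生活的真谛是[MASK]。"))
```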
valhalla/t5-base-qg-hl
6b9bc6f65b1df793cd1d08674b149263b0b88515
2021-06-23T14:40:47.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:squad", "arxiv:1910.10683", "transformers", "question-generation", "license:mit", "autotrain_compatible" ]
text2text-generation
false
valhalla
null
valhalla/t5-base-qg-hl
3,665
1
transformers
1,019
--- datasets: - squad tags: - question-generation widget: - text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>" - text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>" - text: "Although <hl> practicality <hl> beats purity </s>" license: mit --- ## T5 for question-generation This is [t5-base](https://arxiv.org/abs/1910.10683) model trained for answer aware question generation task. The answer spans are highlighted within the text with special highlight tokens. You can play with the model using the inference API, just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example `<hl> 42 <hl> is the answer to life, the universe and everything. </s>` For more deatils see [this](https://github.com/patil-suraj/question_generation) repo. ### Model in action 🚀 You'll need to clone the [repo](https://github.com/patil-suraj/question_generation). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb) ```python3 from pipelines import pipeline nlp = pipeline("question-generation", model="valhalla/t5-base-qg-hl") nlp("42 is the answer to life, universe and everything.") => [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}] ```
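If you prefer to call the model directly with `transformers` instead of the project's custom pipeline, a minimal sketch could look like the following. The input string follows the highlighted-answer format shown in the widget examples above; `num_beams` and `max_length` are illustrative assumptions, and the linked repo remains the reference for the exact preprocessing.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qg-hl")
model = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-qg-hl")

# highlighted-answer input format from the widget examples
text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
input_ids = tokenizer.encode(text, return_tensors="pt")

# beam-search settings are illustrative assumptions
output_ids = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```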
bert-base-german-dbmdz-cased
1338901726062fab13465d4b37f0f0c55b662a78
2022-07-18T20:03:25.000Z
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-german-dbmdz-cased
3,662
null
transformers
1,020
--- language: de license: mit --- This model is the same as [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). See the [dbmdz/bert-base-german-cased model card](https://huggingface.co/dbmdz/bert-base-german-cased) for details on the model.
gagan3012/k2t-base
1e12a3b7f8393611eba2c3db5f992cf154b9debf
2021-09-22T08:27:23.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:WebNLG", "dataset:Dart", "transformers", "keytotext", "k2t-base", "Keywords to Sentences", "license:mit", "autotrain_compatible" ]
text2text-generation
false
gagan3012
null
gagan3012/k2t-base
3,662
null
transformers
1,021
--- language: en thumbnail: Keywords to Sentences tags: - keytotext - k2t-base - Keywords to Sentences license: mit datasets: - WebNLG - Dart metrics: - NLG --- # keytotext ![keytotext (1)](https://user-images.githubusercontent.com/49101362/116334480-f5e57a00-a7dd-11eb-987c-186477f94b6e.png) Idea is to build a model which will take keywords as inputs and generate sentences as outputs. ### Keytotext is powered by Huggingface 🤗 [![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/) [![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ## Model: Keytotext is based on the Amazing T5 Model: - `k2t`: [Model](https://huggingface.co/gagan3012/k2t) - `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny) - `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base) Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder ## Usage: Example usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder ``` pip install keytotext ``` ![carbon (3)](https://user-images.githubusercontent.com/49101362/116220679-90e64180-a755-11eb-9246-82d93d924a6c.png) ## UI: UI: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ``` pip install streamlit-tags ``` This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags) ![image](https://user-images.githubusercontent.com/49101362/116162205-fc042980-a6fd-11eb-892e-8f6902f193f4.png)
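Since the usage snippet above is only shown as an image, here is a rough sketch of what it could look like in code. It assumes the `keytotext` package exposes the `pipeline` helper and accepts a plain list of keywords as described in the project README; check the linked repo and Colab notebooks for the exact, up-to-date API.

```python
# hypothetical sketch based on the keytotext README; verify against the repo
from keytotext import pipeline

nlp = pipeline("k2t-base")  # assumed to load gagan3012/k2t-base under the hood
print(nlp(["India", "wedding", "Food"]))  # keywords in, sentence out
```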
facebook/wav2vec2-large-xlsr-53-spanish
6efd2b0f2ca644652c1c9e24cdbb0c374126e1c9
2021-07-06T03:09:28.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "es", "dataset:common_voice", "transformers", "speech", "audio", "license:apache-2.0" ]
automatic-speech-recognition
false
facebook
null
facebook/wav2vec2-large-xlsr-53-spanish
3,660
3
transformers
1,022
--- language: es datasets: - common_voice tags: - speech - audio - automatic-speech-recognition license: apache-2.0 --- ## Evaluation on Common Voice ES Test ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "facebook/wav2vec2-large-xlsr-53-spanish" device = "cuda" chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605 model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "es", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Result**: 17.6 %
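The card above only shows the Common Voice evaluation loop. For transcribing a single file, a minimal sketch could look like the following; `audio.wav` is a placeholder path, the file is assumed to be mono, and the audio is resampled to the 16 kHz rate the model expects.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "facebook/wav2vec2-large-xlsr-53-spanish"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# "audio.wav" is a placeholder; a mono file is assumed
speech, sr = torchaudio.load("audio.wav")
speech = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16_000)(speech).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```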
nreimers/TinyBERT_L-4_H-312_v2
d782507ee95c6565fe5924fcd6090999055e8db6
2021-05-28T11:02:32.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
nreimers
null
nreimers/TinyBERT_L-4_H-312_v2
3,657
null
transformers
1,023
This is the [General_TinyBERT_v2(4layer-312dim)](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) ported to Huggingface transformers.
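As a minimal usage sketch for feature extraction, the checkpoint can be loaded with the standard `AutoModel`/`AutoTokenizer` classes; the mean pooling at the end is an illustrative choice, not something prescribed by the original TinyBERT release.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nreimers/TinyBERT_L-4_H-312_v2")
model = AutoModel.from_pretrained("nreimers/TinyBERT_L-4_H-312_v2")

inputs = tokenizer("TinyBERT produces 312-dimensional token embeddings.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# mean pooling over tokens is an illustrative choice
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, 312)
```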
IDEA-CCNL/Erlangshen-MegatronBert-1.3B
cba3245ff39f55f33146868c6872c7600ea24d60
2022-05-10T10:15:32.000Z
[ "pytorch", "megatron-bert", "zh", "transformers", "bert", "NLU", "FewCLUE", "license:apache-2.0" ]
null
false
IDEA-CCNL
null
IDEA-CCNL/Erlangshen-MegatronBert-1.3B
3,627
2
transformers
1,024
--- language: - zh license: apache-2.0 tags: - bert - NLU - FewCLUE inference: true --- # Erlangshen-MegatronBert-1.3B model (Chinese),one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). Encoder structure-based Bidirection language model, focusing on solving various natural language understanding tasks. The 1.3 billion parameter Erlangshen-MegatronBert-1.3B large model, using 280G Chinese data, 32 A100 training for 14 days, is the largest open source Chinese Bert large model. On November 10, 2021, **it reached the top of the [FewCLUE](https://www.cluebenchmarks.com/fewclue.html)** list of the authoritative benchmark for Chinese language understanding. [IDEA研究院中文预训练模型二郎神登顶FewCLUE榜单](https://mp.weixin.qq.com/s/bA_9n_TlBE9P-UzCn7mKoA) Among them, **CHID (Idiom Fill in the Blank) and TNEWS (News Classification) surpass human beings, CHID (Idiom Fill in the Blank), CSLDCP (Subject Document Classification), OCNLI (Natural Language Reasoning) single task first, refreshing few-shot learning records**. The Erlangshen series will continue to be optimized in terms of model scale, knowledge integration, and supervision task assistance. ## Usage ```python from transformers import MegatronBertConfig, MegatronBertModel from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B") config = MegatronBertConfig.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B") model = MegatronBertModel.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B") ``` ## Scores on downstream chinese tasks (without any data augmentation) | Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl | | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: | | roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 | | Erlangshen-MegatronBert-1.3B | 0.7608 | 0.5996 | 0.6234 | 0.7917 | 0.81 | 0.9243 | 0.872 | ## Citation If you find the resource is useful, please cite the following website in your paper. ``` @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
bvanaken/CORe-clinical-diagnosis-prediction
e469bc793a49547eb0cab1c5e129c914af340e19
2022-02-17T09:36:23.000Z
[ "pytorch", "bert", "text-classification", "en", "transformers", "medical", "clinical", "diagnosis" ]
text-classification
false
bvanaken
null
bvanaken/CORe-clinical-diagnosis-prediction
3,603
2
transformers
1,025
--- language: "en" tags: - bert - medical - clinical - diagnosis - text-classification thumbnail: "https://core.app.datexis.com/static/paper.png" widget: - text: "Patient with hypertension presents to ICU." --- # CORe Model - Clinical Diagnosis Prediction ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf). It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. This model checkpoint is **fine-tuned on the task of diagnosis prediction**. The model expects patient admission notes as input and outputs multi-label ICD9-code predictions. #### Model Predictions The model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend to only use the **3-digit code predictions at inference time**, because only those have been evaluated in our work. #### How to use CORe Diagnosis Prediction You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction") model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction") ``` The following code shows an inference example: ``` input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life." tokenized_input = tokenizer(input, return_tensors="pt") output = model(**tokenized_input) import torch predictions = torch.sigmoid(output.logits) predicted_labels = [model.config.id2label[_id] for _id in (predictions > 0.3).nonzero()[:, 1].tolist()] ``` Note: For the best performance, we recommend to determine the thresholds (0.3 in this example) individually per label. ### More Information For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/). ### Cite ```bibtex @inproceedings{vanaken21, author = {Betty van Aken and Jens-Michalis Papaioannou and Manuel Mayrdorfer and Klemens Budde and Felix A. Gers and Alexander Löser}, title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, {EACL} 2021, Online, April 19 - 23, 2021}, publisher = {Association for Computational Linguistics}, year = {2021}, } ```
mrm8488/t5-base-finetuned-e2m-intent
84f655dbb0f40e64e12ad1a61c125a1225fc2917
2020-12-11T21:55:39.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:event2Mind", "arxiv:1910.10683", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-e2m-intent
3,603
3
transformers
1,026
--- language: en datasets: - event2Mind --- # T5-base fine-tuned on event2Mind for **Intent Prediction** 🤔 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [event2Mind](https://huggingface.co/nlp/viewer/?dataset=event2Mind) dataset for **Intent Prediction**. ## Details of T5 📜 ➡️ 📜 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Intent Prediction) - Dataset 📚 Dataset ID: ```event2Mind``` from [Huggingface/NLP](https://github.com/huggingface/nlp) | Dataset | Split | # samples | | -------- | ----- | --------- | | event2Mind | train | 46472 | | event2Mind | valid | 1960 | Events without **intent** were not used! Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). ## Model in Action 🚀 ```python # Tip: By now, install transformers from source from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-e2m-intent") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-e2m-intent") def get_intent(event, max_length=16): input_text = "%s </s>" % event features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) event = "PersonX takes PersonY home" get_intent(event) # output: 'to be helpful' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
transformersbook/pegasus-samsum
f00170164d55821831b9396cc3da176af59f30ec
2022-02-05T17:05:28.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "dataset:samsum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
transformersbook
null
transformersbook/pegasus-samsum
3,574
null
transformers
1,027
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum-test This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. The model is trained in Chapter 6: Summarization in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/06_summarization.ipynb). It achieves the following results on the evaluation set: - Loss: 1.4875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7012 | 0.54 | 500 | 1.4875 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
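As the card has no usage snippet, here is a minimal summarization sketch; the dialogue is a short SAMSum-style exchange used purely for illustration.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="transformersbook/pegasus-samsum")

# short SAMSum-style dialogue used for illustration
dialogue = """Hannah: Hey, do you have Betty's number?
Amanda: Lemme check.
Amanda: Sorry, can't find it. Ask Larry, he called her last time we were at the park together.
Hannah: OK, thanks!"""

print(summarizer(dialogue)[0]["summary_text"])
```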
KETI-AIR/ke-t5-small
3a2efa3a340d88de8aa93be0cad7884c34a64128
2021-06-23T03:13:34.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
KETI-AIR
null
KETI-AIR/ke-t5-small
3,565
null
transformers
1,028
Entry not found
facebook/rag-sequence-base
7c7ae51878178639f47b6d416bef67a35a5a41f9
2020-12-11T21:39:37.000Z
[ "pytorch", "rag", "arxiv:2005.11401", "transformers", "license:apache-2.0" ]
null
false
facebook
null
facebook/rag-sequence-base
3,565
null
transformers
1,029
---
license: apache-2.0
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---

## RAG

This is a non-finetuned version of the RAG-Sequence model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf) by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al.

Rag consists of a *question encoder*, *retriever* and a *generator*. The retriever should be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel` and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`.

This model is a non-finetuned RAG-Sequence model and was created as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration, AutoTokenizer

model = RagSequenceForGeneration.from_pretrained_question_encoder_generator("facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large")

question_encoder_tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")

tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer)
model.config.use_dummy_dataset = True
model.config.index_name = "exact"
retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer)

model.save_pretrained("./")
tokenizer.save_pretrained("./")
retriever.save_pretrained("./")
```

Note that the model is *uncased* so that all capital input letters are converted to lower-case.

## Usage:

*Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`. The model can be fine-tuned as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-base")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-base")
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-base", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", "michael phelps", return_tensors="pt")

outputs = model(input_dict["input_ids"], labels=input_dict["labels"])
loss = outputs.loss

# train on loss
```
princeton-nlp/sup-simcse-bert-large-uncased
6711247726a5d5f78c17babf57d76fa99f7b1fdf
2021-05-20T02:56:23.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
princeton-nlp
null
princeton-nlp/sup-simcse-bert-large-uncased
3,562
null
transformers
1,030
Entry not found
ainize/bart-base-cnn
b90bc9a7c93de6449a8c531ed5f957d84649b99a
2021-06-21T09:52:44.000Z
[ "pytorch", "bart", "feature-extraction", "en", "dataset:cnn_dailymail", "transformers", "summarization", "license:apache-2.0" ]
summarization
false
ainize
null
ainize/bart-base-cnn
3,549
null
transformers
1,031
--- language: en license: apache-2.0 datasets: - cnn_dailymail tags: - summarization - bart --- # BART base model fine-tuned on CNN Dailymail - This model is a [bart-base model](https://huggingface.co/facebook/bart-base) fine-tuned on the [CNN/Dailymail summarization dataset](https://huggingface.co/datasets/cnn_dailymail) using [Ainize Teachable-NLP](https://ainize.ai/teachable-nlp). The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. The Authors’ code can be found here: https://github.com/pytorch/fairseq/tree/master/examples/bart ## Usage ### Python Code ```python from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration # Load Model and Tokenize tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/bart-base-cnn") model = BartForConditionalGeneration.from_pretrained("ainize/bart-base-cnn") # Encode Input Text input_text = '(CNN) -- South Korea launched an investigation Tuesday into reports of toxic chemicals being dumped at a former U.S. military base, the Defense Ministry said. The tests follow allegations of American soldiers burying chemicals on Korean soil. The first tests are being carried out by a joint military, government and civilian task force at the site of what was Camp Mercer, west of Seoul. "Soil and underground water will be taken in the areas where toxic chemicals were allegedly buried," said the statement from the South Korean Defense Ministry. Once testing is finished, the government will decide on how to test more than 80 other sites -- all former bases. The alarm was raised this month when a U.S. veteran alleged barrels of the toxic herbicide Agent Orange were buried at an American base in South Korea in the late 1970s. Two of his fellow soldiers corroborated his story about Camp Carroll, about 185 miles (300 kilometers) southeast of the capital, Seoul. "We\'ve been working very closely with the Korean government since we had the initial claims," said Lt. Gen. John Johnson, who is heading the Camp Carroll Task Force. "If we get evidence that there is a risk to health, we are going to fix it." A joint U.S.- South Korean investigation is being conducted at Camp Carroll to test the validity of allegations. The U.S. military sprayed Agent Orange from planes onto jungles in Vietnam to kill vegetation in an effort to expose guerrilla fighters. Exposure to the chemical has been blamed for a wide variety of ailments, including certain forms of cancer and nerve disorders. It has also been linked to birth defects, according to the Department of Veterans Affairs. Journalist Yoonjung Seo contributed to this report.' 
input_ids = tokenizer.encode(input_text, return_tensors="pt") # Generate Summary Text Ids summary_text_ids = model.generate( input_ids=input_ids, bos_token_id=model.config.bos_token_id, eos_token_id=model.config.eos_token_id, length_penalty=2.0, max_length=142, min_length=56, num_beams=4, ) # Decoding Text print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True)) ``` ### API You can experience this model through [ainize](https://ainize.ai/gkswjdzz/summarize-torchserve?branch=main).
Sahajtomar/GBERTQnA
23294bc03a38a1b8a51fb7bfd78c63f444c84b31
2021-05-18T22:19:34.000Z
[ "pytorch", "tf", "jax", "bert", "question-answering", "de", "dataset:mlqa", "transformers", "autotrain_compatible" ]
question-answering
false
Sahajtomar
null
Sahajtomar/GBERTQnA
3,546
3
transformers
1,032
---
language: de
tags:
- pytorch
- tf
- bert
datasets:
- mlqa
metrics:
- f1
- em
---

### QA model trained on the MLQA dataset for German
The model used for fine-tuning is GBERT Large by deepset.ai.

## MLQA DEV (german)
EM: 63.82
F1: 77.20

## XQUAD TEST (german)
EM: 65.96
F1: 80.85

## Model inferencing:

```python
!pip install -q transformers
from transformers import pipeline
qa_pipeline = pipeline(
    "question-answering",
    model="Sahajtomar/GBERTQnA",
    tokenizer="Sahajtomar/GBERTQnA"
)

qa_pipeline({
    'context': "Vor einigen Jahren haben Wissenschaftler ein wichtiges Mutagen identifiziert, das in unseren eigenen Zellen liegt: APOBEC, ein Protein, das normalerweise als Schutzmittel gegen Virusinfektionen fungiert. Heute hat ein Team von Schweizer und russischen Wissenschaftlern unter der Leitung von Sergey Nikolaev, Genetiker an der Universität Genf (UNIGE) in der Schweiz, entschlüsselt, wie APOBEC eine Schwäche unseres DNA-Replikationsprozesses ausnutzt, um Mutationen in unserem Genom zu induzieren.",
    'question': "Welches Mutagen schützt vor Virusinfektionen?"
})

# output
{'answer': 'APOBEC', 'end': 121, 'score': 0.9815779328346252, 'start': 115}

## Even complex queries can be answered pretty well
qa_pipeline({
    "context": 'Im Juli 1944 befand sich die Rote Armee tief auf polnischem Gebiet und verfolgte die Deutschen in Richtung Warschau. In dem Wissen, dass Stalin der Idee eines unabhängigen Polens feindlich gegenüberstand, gab die polnische Exilregierung in London der unterirdischen Heimatarmee (AK) den Befehl, vor dem Eintreffen der Roten Armee zu versuchen, die Kontrolle über Warschau von den Deutschen zu übernehmen. So begann am 1. August 1944, als sich die Rote Armee der Stadt näherte, der Warschauer Aufstand. Der bewaffnete Kampf, der 48 Stunden dauern sollte, war teilweise erfolgreich, dauerte jedoch 63 Tage. Schließlich mussten die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten kapitulieren. Sie wurden in Kriegsgefangenenlager in Deutschland transportiert, während die gesamte Zivilbevölkerung ausgewiesen wurde. Die Zahl der polnischen Zivilisten wird auf 150.000 bis 200.000 geschätzt.',
    "question": "Wer wurde nach Deutschland transportiert?"
})

# output
{'answer': 'die Kämpfer der Heimatarmee und die ihnen unterstützenden Zivilisten', 'end': 693, 'score': 0.23357819020748138, 'start': 625}
```

Try it on a Colab:
<a href="https://github.com/Sahajtomar/Question-Answering/blob/main/Sahajtomar_GBERTQnA.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
google/long-t5-tglobal-large
31b0467e03f47bac014085e1d4fa0ec37dd43c21
2022-06-22T09:04:33.000Z
[ "pytorch", "jax", "longt5", "text2text-generation", "en", "arxiv:2112.07916", "arxiv:1912.08777", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/long-t5-tglobal-large
3,542
3
transformers
1,033
--- license: apache-2.0 language: en --- # LongT5 (transient-global attention, large-sized model) LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x). Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence. LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens). Results of LongT5 (transient-global attention, large-sized model) fine-tuned on multiple (summarization, QA) tasks. | Dataset | Rouge-1 | Rouge-2 | Rouge-Lsum | | --- | --- | --- | --- | | arXiv (16k input) | 48.28 | 21.63 | 44.11 | | PubMed (16k input) | 49.98 | 24.69 | 46.46 | | BigPatent (16k input) | 70.38 | 56.81 | 62.73 | | MultiNews (8k input) | 47.18 | 18.44 | 24.18 | | MediaSum (4k input) | 35.54 | 19.04 | 32.20 | | CNN / DailyMail (4k input) | 42.49 | 20.51 | 40.18 | | Dataset | EM | F1 | | --- | --- | --- | | Natural Questions (4k input) | 60.77 | 65.38 | | Trivia QA (16k input) | 78.38 | 82.45 | ## Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you. ### How to use ```python from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-large") model = LongT5Model.from_pretrained("google/long-t5-tglobal-large") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} } ```
NonzeroCornet34/DialoGPT-small-philbot
7d9c87dd713e1116f368f94cae92eec4416599ec
2022-04-12T21:29:09.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
NonzeroCornet34
null
NonzeroCornet34/DialoGPT-small-philbot
3,541
null
transformers
1,034
--- tags: - conversational --- # Philip DialoGPT Model
google/mt5-xl
28f55016820aa79b09609598744f950493129012
2022-05-27T15:06:44.000Z
[ "pytorch", "tf", "jax", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/mt5-xl
3,530
2
transformers
1,035
--- language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 license: apache-2.0 --- [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
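Since the checkpoint has to be fine-tuned before use, a minimal loading sketch for fine-tuning could look like the following; note that mT5-XL has roughly 3.7 billion parameters (approximate figure), so training it requires substantial accelerator memory.

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-xl")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-xl")

# fine-tune on a downstream text-to-text task before using the model for inference
```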
AK270802/DialoGPT-small-harrypotter
5e5434fd66c852ebf69cc07279d85f55a645768e
2022-01-16T11:19:05.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
AK270802
null
AK270802/DialoGPT-small-harrypotter
3,524
null
transformers
1,036
--- tags: - conversational --- # Harry Potter DialoGPT Model
yoshitomo-matsubara/bert-large-uncased-sst2
4b108fe9e563ba9dc910985e350f3b48799a1c03
2021-05-29T21:34:13.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:sst2", "transformers", "sst2", "glue", "torchdistill", "license:apache-2.0" ]
text-classification
false
yoshitomo-matsubara
null
yoshitomo-matsubara/bert-large-uncased-sst2
3,516
null
transformers
1,037
--- language: en tags: - bert - sst2 - glue - torchdistill license: apache-2.0 datasets: - sst2 metrics: - accuracy --- `bert-large-uncased` fine-tuned on SST-2 dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/ce/bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
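As a minimal inference sketch, the fine-tuned checkpoint can be used through the `text-classification` pipeline; the example sentence is an SST-2-style review fragment written for illustration, and the exact label strings depend on the mapping stored in the model's config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yoshitomo-matsubara/bert-large-uncased-sst2")

# SST-2-style example written for illustration
print(classifier("a gorgeous, witty, seductive movie."))
```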
nbroad/ESG-BERT
bb721809897061818c997a223cc9ab4789cc8b05
2021-12-16T21:42:26.000Z
[ "pytorch", "bert", "text-classification", "en", "transformers", "license:apache-2.0" ]
text-classification
false
nbroad
null
nbroad/ESG-BERT
3,511
6
transformers
1,038
--- language: - en tags: - text-classification - bert - pytorch license: apache-2.0 widget: - text: "In fiscal year 2019, we reduced our comprehensive carbon footprint for the fourth consecutive year—down 35 percent compared to 2015, when Apple’s carbon emissions peaked, even as net revenue increased by 11 percent over that same period. In the past year, we avoided over 10 million metric tons from our emissions reduction initiatives—like our Supplier Clean Energy Program, which lowered our footprint by 4.4 million metric tons. " example_title: "Carbon Footprint" --- # ESG BERT (Uploaded from https://github.com/mukut03/ESG-BERT) **Domain Specific BERT Model for Text Mining in Sustainable Investing** Read more about this pre-trained model [here.](https://towardsdatascience.com/nlp-meets-sustainable-investing-d0542b3c264b?source=friends_link&sk=1f7e6641c3378aaff319a81decf387bf) **In collaboration with [Charan Pothireddi](https://www.linkedin.com/in/sree-charan-pothireddi-6a0a3587/) and [Parabole.ai](https://www.linkedin.com/in/sree-charan-pothireddi-6a0a3587/)** ### Labels 0: Business_Ethics 1: Data_Security 2: Access_And_Affordability 3: Business_Model_Resilience 4: Competitive_Behavior 5: Critical_Incident_Risk_Management 6: Customer_Welfare 7: Director_Removal 8: Employee_Engagement_Inclusion_And_Diversity 9: Employee_Health_And_Safety 10: Human_Rights_And_Community_Relations 11: Labor_Practices 12: Management_Of_Legal_And_Regulatory_Framework 13: Physical_Impacts_Of_Climate_Change 14: Product_Quality_And_Safety 15: Product_Design_And_Lifecycle_Management 16: Selling_Practices_And_Product_Labeling 17: Supply_Chain_Management 18: Systemic_Risk_Management 19: Waste_And_Hazardous_Materials_Management 20: Water_And_Wastewater_Management 21: Air_Quality 22: Customer_Privacy 23: Ecological_Impacts 24: Energy_Management 25: GHG_Emissions ### References: [1] https://medium.com/analytics-vidhya/deploy-huggingface-s-bert-to-production-with-pytorch-serve-27b068026d18
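A minimal inference sketch using the widget example from this card is shown below. If the checkpoint ships an `id2label` mapping, the printed name corresponds to one of the 26 categories listed above; otherwise the generic `LABEL_n` names are returned.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nbroad/ESG-BERT")
model = AutoModelForSequenceClassification.from_pretrained("nbroad/ESG-BERT")

# shortened version of the widget example from this card
text = ("In fiscal year 2019, we reduced our comprehensive carbon footprint "
        "for the fourth consecutive year—down 35 percent compared to 2015.")

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

label_id = int(probs.argmax())
print(model.config.id2label[label_id], float(probs[label_id]))
```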
jonatasgrosman/wav2vec2-large-xlsr-53-spanish
1e07f6b2a88e191565a1fee030fffc8cae4fec2b
2022-07-27T23:38:03.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "es", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "transformers", "audio", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-spanish
3,505
12
transformers
1,039
--- language: es license: apache-2.0 datasets: - common_voice - mozilla-foundation/common_voice_6_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - es - hf-asr-leaderboard - mozilla-foundation/common_voice_6_0 - robust-speech-event - speech - xlsr-fine-tuning-week model-index: - name: XLSR Wav2Vec2 Spanish by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice es type: common_voice args: es metrics: - name: Test WER type: wer value: 8.82 - name: Test CER type: cer value: 2.58 - name: Test WER (+LM) type: wer value: 6.27 - name: Test CER (+LM) type: cer value: 2.06 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: es metrics: - name: Dev WER type: wer value: 30.19 - name: Dev CER type: cer value: 13.56 - name: Dev WER (+LM) type: wer value: 24.71 - name: Dev CER (+LM) type: cer value: 12.61 --- # Fine-tuned XLSR-53 large model for speech recognition in Spanish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-spanish") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "es" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS | | OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. 
| OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN | | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN | | TRES | TRES | | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA | | EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES | | SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÓ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS | | SÍ | SÍ | | "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ | | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-spanish, title={Fine-tuned {XLSR}-53 large model for speech recognition in {S}panish}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}}, year={2021} } ```
sberbank-ai/ruclip-vit-base-patch32-384
1f7f08e5437de5dd5beba7a448983b7e4135891b
2022-01-10T00:21:50.000Z
[ "pytorch", "transformers" ]
null
false
sberbank-ai
null
sberbank-ai/ruclip-vit-base-patch32-384
3,503
1
transformers
1,040
# ruclip-vit-base-patch32-384 **RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for obtaining images and text similarities and rearranging captions and pictures. RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and multimodal learning. Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams. * Task: `text ranking`; `image ranking`; `zero-shot image classification`; * Type: `encoder` * Num Parameters: `150M` * Training Data Volume: `240 million text-image pairs` * Language: `Russian` * Context Length: `77` * Transformer Layers: `12` * Transformer Width: `512` * Transformer Heads: `8` * Image Size: `384` * Vision Layers: `12` * Vision Width: `768` * Vision Patch Size: `32` ## Usage [Github](https://github.com/sberbank-ai/ru-clip) ``` pip install ruclip ``` ```python clip, processor = ruclip.load("ruclip-vit-base-patch32-384", device="cuda") ``` ## Performance We have evaluated the performance on the following datasets: | Dataset | Metric Name | Metric Result | |:--------------|:---------------|:----------------------------| | Food101 | acc | 0.642 | | CIFAR10 | acc | 0.862 | | CIFAR100 | acc | 0.529 | | Birdsnap | acc | 0.161 | | SUN397 | acc | 0.510 | | Stanford Cars | acc | 0.572 | | DTD | acc | 0.390 | | MNIST | acc | 0.404 | | STL10 | acc | 0.946 | | PCam | acc | 0.506 | | CLEVR | acc | 0.188 | | Rendered SST2 | acc | 0.508 | | ImageNet | acc | 0.451 | | FGVC Aircraft | mean-per-class | 0.053 | | Oxford Pets | mean-per-class | 0.587 | | Caltech101 | mean-per-class | 0.834 | | Flowers102 | mean-per-class | 0.449 | | HatefulMemes | roc-auc | 0.537 | # Authors + Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov) + Daniil Chesakov: [Github](https://github.com/Danyache) + Denis Dimitrov: [Github](https://github.com/denndimitrov) + Igor Pavlov: [Github](https://github.com/boomb0om)
Helsinki-NLP/opus-mt-th-en
90080f69e69c567e2b145fc8723c1e53f4f760e6
2020-08-21T14:42:50.000Z
[ "pytorch", "marian", "text2text-generation", "th", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-th-en
3,484
null
transformers
1,041
--- language: - th - en tags: - translation license: apache-2.0 --- ### tha-eng * source group: Thai * target group: English * OPUS readme: [tha-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md) * model: transformer-align * source language(s): tha * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tha.eng | 48.1 | 0.644 | ### System Info: - hf_name: tha-eng - source_languages: tha - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['th', 'en'] - src_constituents: {'tha'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt - src_alpha3: tha - tgt_alpha3: eng - short_pair: th-en - chrF2_score: 0.644 - bleu: 48.1 - brevity_penalty: 0.9740000000000001 - ref_len: 7407.0 - src_name: Thai - tgt_name: English - train_date: 2020-06-17 - src_alpha2: th - tgt_alpha2: en - prefer_old: False - long_pair: tha-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
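The checkpoint can be loaded with the regular Marian classes in `transformers`; below is a minimal sketch (the Thai example sentence and variable names are ours, not part of the OPUS release):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-th-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a small batch of Thai sentences into English
batch = tokenizer(["ฉันรักแมว"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```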
satvikag/chatbot
5736b2051dfc768c950ddd700d11e9e92ffa6d0e
2021-06-04T20:08:11.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "license:mit" ]
conversational
false
satvikag
null
satvikag/chatbot
3,479
6
transformers
1,042
---
tags:
- conversational
license: mit
---

# DialoGPT Trained on the Speech of a Game Character

This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('satvikag/chatbot')

# Let's chat for 100 lines
for step in range(100):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 500 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=500,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("AI: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
akreal/tiny-random-bert
843b6aea20ebae1c96598b6187b1bc26c105652a
2021-08-18T14:42:20.000Z
[ "pytorch", "tf", "bert", "transformers" ]
null
false
akreal
null
akreal/tiny-random-bert
3,477
null
transformers
1,043
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-bert Changes: use old format for `pytorch_model.bin`.
hivemind/gpt-j-6B-8bit
636b67ff80cb47e083bdfd8074c45857f72cac65
2022-02-10T23:15:54.000Z
[ "pytorch", "gptj", "text-generation", "arxiv:2106.09685", "arxiv:2110.02861", "transformers" ]
text-generation
false
hivemind
null
hivemind/gpt-j-6B-8bit
3,463
63
transformers
1,044
### Quantized EleutherAI/gpt-j-6b with 8-bit weights

This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**.

Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)

__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB of memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.

Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of 30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)

In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).

![img](https://i.imgur.com/n4XXo1x.png)

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant.

Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.

__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpointing (which adds a 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.

### How should I fine-tune the model?

We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger batch size you can fit, the more efficiently you will train.

### Where can I train for free?
Training works fine in Colab, but if you get a K80 GPU, it's probably best to switch to other free GPU providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.

### Can I use this technique with other models?

The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
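To make the "store in 8-bit, de-quantize just in time for the matmul" idea above concrete, here is a simplified, self-contained sketch. It is **not** the code used in this checkpoint (that lives in the conversion notebook and relies on bitsandbytes' block-wise, non-linear quantization); the class name and the simple per-row symmetric scheme are ours, for illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenLinear8bit(nn.Module):
    """Illustrative only: keep a frozen weight matrix in int8, de-quantize at matmul time."""
    def __init__(self, weight_fp: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        # simple symmetric per-row quantization (the real checkpoint uses
        # bitsandbytes' block-wise, non-linear quantization instead)
        scale = weight_fp.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / 127.0
        self.register_buffer("weight_int8", torch.round(weight_fp / scale).to(torch.int8))
        self.register_buffer("scale", scale)
        self.bias = nn.Parameter(bias) if bias is not None else None

    def forward(self, x):
        # de-quantize just in time; compute in the activation dtype (fp16/fp32)
        w = self.weight_int8.to(x.dtype) * self.scale.to(x.dtype)
        return F.linear(x, w, self.bias)

# usage: wrap an existing layer's weights
dense = nn.Linear(16, 32)
frozen = FrozenLinear8bit(dense.weight.data.clone(), dense.bias.data.clone())
print(frozen(torch.randn(2, 16)).shape)  # torch.Size([2, 32])
```

The point of the pattern is that only the int8 buffer and the per-row scales live in memory between forward passes; the float16/float32 copy of each weight matrix exists just long enough to be multiplied.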
Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static
8e70cc9712549ac63d3226143b81e3fadd631b01
2022-06-10T02:43:06.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:sst2", "transformers", "text-classfication", "int8", "Intel® Neural Compressor", "PostTrainingStatic", "license:apache-2.0" ]
text-classification
false
Intel
null
Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static
3,456
null
transformers
1,045
--- language: en license: apache-2.0 tags: - text-classfication - int8 - Intel® Neural Compressor - PostTrainingStatic datasets: - sst2 metrics: - accuracy --- # INT8 DistilBERT base uncased finetuned SST-2 ### Post-training static quantization This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). The calibration dataloader is the train dataloader. The default calibration sampling size 100 isn't divisible exactly by batch size 8, so the real sampling size is 104. ### Test result | |INT8|FP32| |---|:---:|:---:| | **Accuracy (eval-accuracy)** |0.9037|0.9106| | **Model size (MB)** |65|255| ### Load with Intel® Neural Compressor: ```python from neural_compressor.utils.load_huggingface import OptimizedModel int8_model = OptimizedModel.from_pretrained( 'Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static', ) ```
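To run the quantized model end to end, something like the following should work; it assumes the INT8 model keeps the forward signature of the underlying `DistilBertForSequenceClassification` and reuses the original FP32 model's tokenizer (both assumptions worth verifying against the Intel® Neural Compressor documentation):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(dim=-1))  # 1 = positive, 0 = negative for the SST-2 head
```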
jonatasgrosman/wav2vec2-large-xlsr-53-russian
228399bda8b8608cad580f4d71c0461358af07f9
2022-07-27T23:36:55.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "ru", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "transformers", "audio", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-russian
3,444
9
transformers
1,046
--- language: ru license: apache-2.0 datasets: - common_voice - mozilla-foundation/common_voice_6_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - hf-asr-leaderboard - mozilla-foundation/common_voice_6_0 - robust-speech-event - ru - speech - xlsr-fine-tuning-week model-index: - name: XLSR Wav2Vec2 Russian by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ru type: common_voice args: ru metrics: - name: Test WER type: wer value: 13.3 - name: Test CER type: cer value: 2.88 - name: Test WER (+LM) type: wer value: 9.57 - name: Test CER (+LM) type: cer value: 2.24 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: ru metrics: - name: Dev WER type: wer value: 40.22 - name: Dev CER type: cer value: 14.8 - name: Dev WER (+LM) type: wer value: 33.61 - name: Dev CER (+LM) type: cer value: 13.5 --- # Fine-tuned XLSR-53 large model for speech recognition in Russian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Russian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-russian") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "ru" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-russian" SAMPLES = 5 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | ОН РАБОТАТЬ, А ЕЕ НЕ УДЕРЖАТЬ НИКАК — БЕГАЕТ ЗА КЛЁШЕМ КАЖДОГО БУЛЬВАРНИКА. 
| ОН РАБОТАТЬ А ЕЕ НЕ УДЕРЖАТ НИКАК БЕГАЕТ ЗА КЛЕШОМ КАЖДОГО БУЛЬБАРНИКА | | ЕСЛИ НЕ БУДЕТ ВОЗРАЖЕНИЙ, Я БУДУ СЧИТАТЬ, ЧТО АССАМБЛЕЯ СОГЛАСНА С ЭТИМ ПРЕДЛОЖЕНИЕМ. | ЕСЛИ НЕ БУДЕТ ВОЗРАЖЕНИЙ Я БУДУ СЧИТАТЬ ЧТО АССАМБЛЕЯ СОГЛАСНА С ЭТИМ ПРЕДЛОЖЕНИЕМ | | ПАЛЕСТИНЦАМ НЕОБХОДИМО СНАЧАЛА УСТАНОВИТЬ МИР С ИЗРАИЛЕМ, А ЗАТЕМ ДОБИВАТЬСЯ ПРИЗНАНИЯ ГОСУДАРСТВЕННОСТИ. | ПАЛЕСТИНЦАМ НЕОБХОДИМО СНАЧАЛА УСТАНОВИТЬ С НИ МИР ФЕЗРЕЛЕМ А ЗАТЕМ ДОБИВАТЬСЯ ПРИЗНАНИЯ ГОСУДАРСТВЕНСКИ | | У МЕНЯ БЫЛО ТАКОЕ ЧУВСТВО, ЧТО ЧТО-ТО ТАКОЕ ОЧЕНЬ ВАЖНОЕ Я ПРИБАВЛЯЮ. | У МЕНЯ БЫЛО ТАКОЕ ЧУВСТВО ЧТО ЧТО-ТО ТАКОЕ ОЧЕНЬ ВАЖНОЕ Я ПРЕДБАВЛЯЕТ | | ТОЛЬКО ВРЯД ЛИ ПОЙМЕТ. | ТОЛЬКО ВРЯД ЛИ ПОЙМЕТ | | ВРОНСКИЙ, СЛУШАЯ ОДНИМ УХОМ, ПЕРЕВОДИЛ БИНОКЛЬ С БЕНУАРА НА БЕЛЬ-ЭТАЖ И ОГЛЯДЫВАЛ ЛОЖИ. | ЗЛАЗКИ СЛУШАЮ ОТ ОДНИМ УХАМ ТЫ ВОТИ В ВИНОКОТ СПИЛА НА ПЕРЕТАЧ И ОКЛЯДЫВАЛ БОСУ | | К СОЖАЛЕНИЮ, СИТУАЦИЯ ПРОДОЛЖАЕТ УХУДШАТЬСЯ. | К СОЖАЛЕНИЮ СИТУАЦИИ ПРОДОЛЖАЕТ УХУЖАТЬСЯ | | ВСЁ ЖАЛОВАНИЕ УХОДИЛО НА ДОМАШНИЕ РАСХОДЫ И НА УПЛАТУ МЕЛКИХ НЕПЕРЕВОДИВШИХСЯ ДОЛГОВ. | ВСЕ ЖАЛОВАНИЕ УХОДИЛО НА ДОМАШНИЕ РАСХОДЫ И НА УПЛАТУ МЕЛКИХ НЕ ПЕРЕВОДИВШИХСЯ ДОЛГОВ | | ТЕПЕРЬ ДЕЛО, КОНЕЧНО, ЗА ТЕМ, ЧТОБЫ ПРЕВРАТИТЬ СЛОВА В ДЕЛА. | ТЕПЕРЬ ДЕЛАЮ КОНЕЧНО ЗАТЕМ ЧТОБЫ ПРЕВРАТИТЬ СЛОВА В ДЕЛА | | ДЕВЯТЬ | ЛЕВЕТЬ | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset mozilla-foundation/common_voice_6_0 --config ru --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset speech-recognition-community-v2/dev_data --config ru --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-russian, title={Fine-tuned {XLSR}-53 large model for speech recognition in {R}ussian}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-russian}}, year={2021} } ```
Yale-LILY/brio-cnndm-uncased
b3f3618fca366b8d70b460f8c32d65dab8e9322e
2022-03-31T02:44:44.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Yale-LILY
null
Yale-LILY/brio-cnndm-uncased
3,442
3
transformers
1,047
Entry not found
twmkn9/distilroberta-base-squad2
e5db80d50055024330ab14b27a9e7841734becd9
2021-05-20T22:45:57.000Z
[ "pytorch", "jax", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
twmkn9
null
twmkn9/distilroberta-base-squad2
3,427
null
transformers
1,048
This model is [Distilroberta base](https://huggingface.co/distilroberta-base) trained on SQuAD v2 as follows:

```
export SQUAD_DIR=../../squad2
python3 run_squad.py --model_type roberta --model_name_or_path distilroberta-base --do_train --do_eval --overwrite_cache --do_lower_case --version_2_with_negative --save_steps 100000 --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 8 --num_train_epochs 3 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir ./tmp/distilroberta_fine_tuned/
```

Performance on a dev subset is close to the original paper:

```
Results:
{
    'exact': 70.9279368213228,
    'f1': 74.60439802429168,
    'total': 6078,
    'HasAns_exact': 67.62886597938144,
    'HasAns_f1': 75.30774267754136,
    'HasAns_total': 2910,
    'NoAns_exact': 73.95833333333333,
    'NoAns_f1': 73.95833333333333,
    'NoAns_total': 3168,
    'best_exact': 70.94438960184272,
    'best_exact_thresh': 0.0,
    'best_f1': 74.62085080481161,
    'best_f1_thresh': 0.0
}
```

We are hopeful this might save you time, energy, and compute. Cheers!
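For inference, the standard question-answering pipeline should work with this checkpoint; a small sketch (the question and context below are made up for illustration):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/distilroberta-base-squad2",
    tokenizer="twmkn9/distilroberta-base-squad2",
)
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint is DistilRoBERTa base fine-tuned on SQuAD v2 with the run_squad.py script.",
)
print(result["answer"], result["score"])
```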
hackathon-pln-es/jurisbert-finetuning-ner
ec54eac180ee0e89c26b15bf205c51f3125e7de3
2022-04-02T13:08:31.000Z
[ "pytorch", "roberta", "token-classification", "dataset:scjnugacj/scjn_dataset_ner", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
hackathon-pln-es
null
hackathon-pln-es/jurisbert-finetuning-ner
3,420
6
transformers
1,049
--- languages: - es licenses: - cc-by-sa-4.0 tags: - generated_from_trainer datasets: - scjnugacj/scjn_dataset_ner metrics: - precision - recall - f1 - accuracy model-index: - name: jurisbert-finetuning-ner results: - task: name: Token Classification type: token-classification dataset: name: scjn_ner type: scjn_ner args: first_domain metrics: - name: Precision type: precision value: 0.9507186858316222 - name: Recall type: recall value: 0.9726890756302521 - name: F1 type: f1 value: 0.9615784008307373 - name: Accuracy type: accuracy value: 0.9980115816646898 widget: - text: "Lo anterior es así, toda vez que, si bien es cierto, el artículo 1° de la Constitución Federal tiene como finalidad brindar la protección más amplia al gobernado, y que ello se logra garantizando el derecho a un recurso efectivo en términos del artículo 25 de la Convención Americana sobre Derechos Humanos, ello no significa que en cualquier caso el órgano jurisdiccional deba resolver el fondo del asunto sin verificar los requisitos de procedencia previstos en las leyes nacionales, ya que las formalidades procesales son la vía que hace posible arribar a una adecuada resolución." - text: "Al respecto, el artículo 78 de la Ley de Ahorro y Crédito Popular es claro al destacar que cuando a juicio de la Comisión Nacional Bancaria y de Valores existan irregularidades de cualquier género en una Sociedad Financiera Popular y se determine que se encuentran en riesgo los intereses de los ahorradores, o bien, se ponga en peligro su estabilidad o solvencia, su presidente podrá declarar la intervención con carácter de gerencia. " --- # modelo-juridico-mexicano El proyecto esta compuesto por los siguientes modelos: - [hackathon-pln-es/jurisbert-finetuning-ner](https://huggingface.co/hackathon-pln-es/jurisbert-finetuning-ner) - [hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal](https://huggingface.co/hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal) - [hackathon-pln-es/jurisbert-clas-art-convencion-americana-dh](https://huggingface.co/hackathon-pln-es/jurisbert-clas-art-convencion-americana-dh) - [hackathon-pln-es/jurisbert-tsdae-sentence-transformer](https://huggingface.co/hackathon-pln-es/jurisbert-tsdae-sentence-transformer) # jurisbert-finetuning-ner This model is a fine-tuned version of [scjnugacj/jurisbert](https://huggingface.co/scjnugacj/jurisbert) on the [scjnugacj/scjn_dataset_ner](https://huggingface.co/datasets/scjnugacj/scjn_dataset_ner) dataset. ## Description Para el entrenamiento de este modelo fue utilizada la primera versión del scjn_dataset_ner fue creada para su utilización en el Hackathon PLN en Español 2022 y contiene etiquetas para identificar leyes y tratados internacionales de los que el Estado Mexicano es parte. Las etiquetas utilizadas fueron: ```python label_list = ['O', 'B-LEY', 'I-LEY', 'B-TRAT_INTL', 'I-TRAT_INTL'] ``` ## Team El equipo esta conformado por [gpalomeque](https://huggingface.co/GPalomeque), [aureliopvs](https://huggingface.co/aureliopvs), [ceciliamacias](https://huggingface.co/ceciliamacias), [giomadariaga](https://huggingface.co/giomadariaga) y [cattsytabla](https://huggingface.co/cattsytabla) ## Intended uses & limitations ### How to use You can use this model with Transformers *pipeline* for NER. 
```python model = AutoModelForTokenClassification.from_pretrained(model_name) token_classifier = pipeline("token-classification", aggregation_strategy="simple", model=model, tokenizer=tokenizer) example = """Esta Primera Sala de la Suprema Corte de Justicia de la Nación es competente para conocer de la presente Solicitud de Ejercicio de la Facultad de Atracción, en términos de lo dispuesto en los artículos 107, fracción VIII, penúltimo párrafo, de la Constitución Política de los Estados Unidos Mexicanos; 80 Bis de la Ley de Amparo; así como el precepto 21, fracción II, de la Ley Orgánica del Poder Judicial de la Federación, en relación con lo dispuesto en los puntos segundo, fracción IX, y tercero del Acuerdo General 5/2013, del Pleno de este Alto Tribunal, relativo a la determinación de los asuntos que el Tribunal Pleno conservará para su resolución y el envío de los de su competencia originaria a las Salas y a los tribunales colegiados de circuito.""" results = token_classifier(example.lower()) print(results ) ``` ### Training results |Step|Training Loss|Validation Loss|Precision|Recall|F1|Accuracy| |---------|----:|---------:|---:|---:|---:|---:| |500|0.015600|0.027995|0.188612|0.334034|0.241092|0.993709| |1000|0.015800|0.026780|0.185446|0.331933|0.237952|0.993651| |1500|0.016500|0.026958|0.194836|0.348739|0.250000|0.993767| |2000|0.016100|0.028878|0.185360|0.329832|0.237339|0.993860| |2500|0.015900|0.030429|0.191646|0.327731|0.241860|0.994023| |3000|0.001900|0.016721|0.927565|0.968487|0.947585|0.997651| |3500|0.000200|0.016432|0.935354|0.972689|0.953656|0.997814| |4000|0.000600|0.017248|0.919517|0.960084|0.939363|0.997744| |4500|0.000300|0.019329|0.936992|0.968487|0.952479|0.997663| |5000|0.000300|0.020233|0.938900|0.968487|0.953464|0.997605| |5500|0.000400|0.018390|0.919608|0.985294|0.951318|0.997663| |6000|0.000200|0.020439|0.915686|0.981092|0.947262|0.997291| |6500|0.000300|0.018778|0.908382|0.978992|0.942366|0.997733| |7000|0.000000|0.018879|0.913725|0.978992|0.945233|0.998000| |7500|0.000100|0.019876|0.938144|0.955882|0.946930|0.997628| |8000|0.000500|0.022275|0.906433|0.976891|0.940344|0.997430| |8500|0.000000|0.021548|0.911765|0.976891|0.943205|0.997639| |9000|0.000100|0.021217|0.919132|0.978992|0.948118|0.997523| |9500|0.000200|0.020399|0.929860|0.974790|0.951795|0.997546| |10000|0.000100|0.025820|0.931313|0.968487|0.949537|0.997523| |10500|0.000200|0.022933|0.940452|0.962185|0.951194|0.997546| |11000|0.000100|0.022329|0.929577|0.970588|0.949640|0.997616| |11500|0.000000|0.022127|0.937247|0.972689|0.954639|0.997756| |12000|0.000000|0.024676|0.929860|0.974790|0.951795|0.997570| |12500|0.000100|0.022072|0.942857|0.970588|0.956522|0.997779| |13000|0.000000|0.023171|0.942857|0.970588|0.956522|0.997767| |13500|0.000800|0.022957|0.583193|0.728992|0.647993|0.996302| |14000|0.006300|0.024337|0.924453|0.976891|0.949949|0.997349| |14500|0.000000|0.023662|0.932000|0.978992|0.954918|0.997477| |15000|0.000000|0.028193|0.928287|0.978992|0.952965|0.997407| |15500|0.000000|0.027379|0.932136|0.981092|0.955988|0.997442| |16000|0.000000|0.024839|0.930000|0.976891|0.952869|0.997616| |16500|0.000000|0.024059|0.937247|0.972689|0.954639|0.997756| |17000|0.000000|0.022556|0.937247|0.972689|0.954639|0.997826| |17500|0.000100|0.022039|0.944898|0.972689|0.958592|0.997884| |18000|0.000000|0.022018|0.946721|0.970588|0.958506|0.997977| |18500|0.000000|0.023078|0.950719|0.972689|0.961578|0.998012| |19000|0.000000|0.023429|0.944898|0.972689|0.958592|0.997942| 
|19500|0.000000|0.023511|0.944898|0.972689|0.958592|0.997942| |20000|0.000000|0.023393|0.948770|0.972689|0.960581|0.997988| ### Framework versions - Transformers 4.17.0 - Pytorch 1.8.2+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
hf-internal-testing/tiny-random-unispeech-sat
4a779774d9473a62b9436b87cf9ac885b97e6f16
2022-01-26T13:47:21.000Z
[ "pytorch", "unispeech-sat", "audio-classification", "transformers" ]
audio-classification
false
hf-internal-testing
null
hf-internal-testing/tiny-random-unispeech-sat
3,399
null
transformers
1,050
Entry not found
staka/fugumt-en-ja
bf625f6aa260d78f000ab096ab7782ed4acb1770
2022-05-29T08:27:41.000Z
[ "pytorch", "marian", "text2text-generation", "en", "ja", "transformers", "translation", "license:cc-by-sa-4.0", "autotrain_compatible" ]
translation
false
staka
null
staka/fugumt-en-ja
3,398
2
transformers
1,051
--- license: cc-by-sa-4.0 language: - en - ja tags: - translation --- # FuguMT This is a translation model using Marian-NMT. For more details, please see [my repository](https://github.com/s-taka/fugumt). * source language: en * target language: ja ### How to use This model uses transformers and sentencepiece. ```python !pip install transformers sentencepiece ``` You can use this model directly with a pipeline: ```python from transformers import pipeline fugu_translator = pipeline('translation', model='staka/fugumt-en-ja') fugu_translator('This is a cat.') ``` ### Eval results The results of the evaluation using [tatoeba](https://tatoeba.org/ja)(randomly selected 500 sentences) are as follows: |source |target |BLEU(*1)| |-------|-------|--------| |en |ja |32.7 | (*1) sacrebleu --tokenize ja-mecab
facebook/muppet-roberta-large
87df24857474bf92dc6789bf1e5a8d73bc7510cb
2021-06-28T21:44:41.000Z
[ "pytorch", "roberta", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2101.11038", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
fill-mask
false
facebook
null
facebook/muppet-roberta-large
3,387
3
transformers
1,052
---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---

# Muppet: Massive Multi-task Representations with Pre-Finetuning

# RoBERTa large model

This is a Massive Multi-task Pre-finetuned version of Roberta large. It was introduced in [this paper](https://arxiv.org/abs/2101.11038). The model improves over roberta-large on a wide range of GLUE and QA tasks (details can be found in the paper). The gains on smaller datasets are significant.

Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, and hence you might get slightly lower performance than that reported in the paper on some datasets.

## Model description

RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the model as inputs.

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Glue test results:

| Model | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | SQuAD|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:----:|
| Roberta-large | 90.2 | 92.2 | 94.7 | 96.4 | 63.6 | 91.2 | 90.9 | 88.1 | 88.7|
| MUPPET Roberta-large | 90.8 | 92.2 | 94.9 | 97.4 | - | - | 91.4 | 92.8 | 89.4|

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2101-11038,
  author    = {Armen Aghajanyan and Anchit Gupta and Akshat Shrivastava and Xilun Chen and Luke Zettlemoyer and Sonal Gupta},
  title     = {Muppet: Massive Multi-task Representations with Pre-Finetuning},
  journal   = {CoRR},
  volume    = {abs/2101.11038},
  year      = {2021},
  url       = {https://arxiv.org/abs/2101.11038},
  archivePrefix = {arXiv},
  eprint    = {2101.11038},
  timestamp = {Sun, 31 Jan 2021 17:23:50 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2101-11038.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
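As a quick sanity check of the checkpoint itself (before fine-tuning), the fill-mask pipeline can be used; a minimal sketch with an example sentence of our own:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/muppet-roberta-large")
for prediction in unmasker("The goal of life is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```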
uer/chinese_roberta_L-4_H-512
7cfbfa6bc21973661117c736747841c7e51ca79f
2022-07-15T08:12:26.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "dataset:CLUECorpusSmall", "arxiv:1909.05658", "arxiv:1908.08962", "transformers", "autotrain_compatible" ]
fill-mask
false
uer
null
uer/chinese_roberta_L-4_H-512
3,383
2
transformers
1,053
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese RoBERTa Miniatures ## Model description This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details. You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below: | | H=128 | H=256 | H=512 | H=768 | | -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: | | **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] | | **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] | | **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] | | **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] | | **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] | | **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 | | RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 | | RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 | | RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 | | RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium): ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512') >>> unmasker("中国的首都是[MASK]京。") [ {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]', 'score': 0.8701988458633423, 'token': 1266, 'token_str': '北'}, {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]', 'score': 0.1194809079170227, 'token': 1298, 'token_str': '南'}, {'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]', 'score': 0.0037803512532263994, 'token': 691, 'token_str': '东'}, {'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]', 'score': 0.0017127094324678183, 'token': 3249, 'token_str': '普'}, {'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]', 'score': 0.001687526935711503, 'token': 3307, 'token_str': '望'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512') model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = 
model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512') model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. Taking the case of RoBERTa-Medium Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{devlin2018bert, title={Bert: Pre-training of deep bidirectional transformers for language understanding}, author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1810.04805}, year={2018} } @article{liu2019roberta, title={Roberta: A robustly optimized bert pretraining approach}, author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin}, journal={arXiv preprint arXiv:1907.11692}, year={2019} } @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, 
Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128 [2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256 [2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512 [2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768 [4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128 [4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256 [4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512 [4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768 [6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128 [6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256 [6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512 [6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768 [8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128 [8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256 [8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512 [8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768 [10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128 [10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256 [10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512 [10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768 [12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128 [12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256 [12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512 [12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768
Team-PIXEL/pixel-base
303131a01e1c8cdcc158d455df9ab75afe9795f4
2022-07-15T00:24:50.000Z
[ "pytorch", "pixel", "en", "dataset:wikipedia", "dataset:bookcorpusopen", "arxiv:2207.06991", "arxiv:2111.06377", "transformers", "pretraining", "license:apache-2.0" ]
null
false
Team-PIXEL
null
Team-PIXEL/pixel-base
3,381
17
transformers
1,054
---
license: apache-2.0
tags:
- pretraining
- pixel
datasets:
- wikipedia
- bookcorpusopen
language:
- en
---

# PIXEL (Pixel-based Encoder of Language)

PIXEL is a language model trained to reconstruct masked image patches that contain rendered text. PIXEL was pretrained on the *English* Wikipedia and Bookcorpus (in total around 3.2B words) but can theoretically be finetuned on data in any written language that can be typeset on a computer screen, because it operates on rendered text as opposed to using a tokenizer with a fixed vocabulary.

It is not currently possible to use the Hosted Inference API with PIXEL.

Paper: [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)

Codebase: [https://github.com/xplip/pixel](https://github.com/xplip/pixel)

## Model description

PIXEL consists of three major components: a text renderer, which draws text as an image; an encoder, which encodes the unmasked regions of the rendered image; and a decoder, which reconstructs the masked regions at the pixel level. It is built on [ViT-MAE](https://arxiv.org/abs/2111.06377).

During pretraining, the renderer produces images containing the training sentences. Patches of these images are linearly projected to obtain patch embeddings (as opposed to having an embedding matrix like e.g. in BERT), and 25% of the patches are masked out. The encoder, which is a Vision Transformer (ViT), then only processes the unmasked patches. The lightweight decoder with hidden size 512 and 8 transformer layers inserts learnable mask tokens into the encoder's output sequence and learns to reconstruct the raw pixel values at the masked positions.

After pretraining, the decoder can be discarded, leaving an 86M parameter encoder upon which task-specific classification heads can be stacked. Alternatively, the decoder can be retained and PIXEL can be used as a pixel-level generative language model (see Figures 3 and 6 in the paper for examples).

For more details on how PIXEL works, please check the paper and the codebase linked above.

## Intended uses

PIXEL is primarily intended to be finetuned on downstream NLP tasks. See the [model hub](https://huggingface.co/models?search=Team-PIXEL/pixel-base) to look for finetuned versions on a task that interests you. Otherwise, check out the PIXEL codebase on Github [here](https://github.com/xplip/pixel) to find out how to finetune PIXEL for your task.

### How to use

Here is how to load PIXEL:

```python
from pixel import PIXELConfig, PIXELForPreTraining

config = PIXELConfig.from_pretrained("Team-PIXEL/pixel-base")
model = PIXELForPreTraining.from_pretrained("Team-PIXEL/pixel-base", config=config)
```

## Citing and Contact Author

```bibtex
@article{rust-etal-2022-pixel,
  title={Language Modelling with Pixels},
  author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
  journal={arXiv preprint},
  year={2022},
  url={https://arxiv.org/abs/2207.06991}
}
```

Github: [@xplip](https://github.com/xplip)

Twitter: [@rust_phillip](https://twitter.com/rust_phillip)
google/bert_uncased_L-6_H-512_A-8
dd53ec6ca9d05e0a91b309c4e137f31988888071
2021-05-19T17:34:01.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-6_H-512_A-8
3,377
null
transformers
1,055
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
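These miniatures load with the standard BERT classes; a minimal sketch for this particular checkpoint (L=6, H=512), with an example sentence of our own:

```python
from transformers import BertTokenizer, BertModel

model_name = "google/bert_uncased_L-6_H-512_A-8"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

inputs = tokenizer("Compact BERT models are handy students for distillation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 512)
```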
NHStudios/DialoGPT-small-jake
cff628bcf40f6184934668cc5afe9a86021101cd
2022-05-04T15:48:06.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
NHStudios
null
NHStudios/DialoGPT-small-jake
3,373
null
transformers
1,056
--- tags: - conversational --- # Jake Peralta DialoGPT Model
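The card does not include a usage snippet; below is a minimal sketch of the usual DialoGPT single-turn exchange, assuming the repository ships a standard GPT-2 tokenizer (the prompt text is ours):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NHStudios/DialoGPT-small-jake")
model = AutoModelForCausalLM.from_pretrained("NHStudios/DialoGPT-small-jake")

# encode the user message, append the EOS token, and generate a reply
input_ids = tokenizer.encode("Cool, cool, cool." + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```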
flair/ner-spanish-large
9d4671d2f345c1258f37a29ce2321067f2ed296e
2021-05-08T15:36:59.000Z
[ "pytorch", "es", "dataset:conll2003", "arxiv:2011.06993", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-spanish-large
3,364
3
flair
1,057
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: es
datasets:
- conll2003
widget:
- text: "George Washington fue a Washington"
---

## Spanish NER in Flair (large model)

This is the large 4-class NER model for Spanish that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **90,54** (CoNLL-03 Spanish)

Predicts 4 tags:

| **tag** | **meaning** |
|---------|-------------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |

Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-spanish-large")

# make example sentence
sentence = Sentence("George Washington fue a Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This yields the following output:

```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```

So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington fue a Washington*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
import torch

# 1. get the corpus
from flair.datasets import CONLL_03_SPANISH
corpus = CONLL_03_SPANISH()

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
    model='xlm-roberta-large',
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=True,
)

# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type='ner',
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-spanish-large',
              learning_rate=5.0e-6,
              mini_batch_size=4,
              mini_batch_chunk_size=1,
              max_epochs=20,
              scheduler=OneCycleLR,
              embeddings_storage_mode='none',
              weight_decay=0.,
              )
```

---

### Cite

Please cite the following paper when using this model.

```
@misc{schweter2020flert,
    title={FLERT: Document-Level Features for Named Entity Recognition},
    author={Stefan Schweter and Alan Akbik},
    year={2020},
    eprint={2011.06993},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
GroNLP/hateBERT
f56d507e4b6a64413aff29e541e1b2178ee79d67
2021-08-09T16:09:32.000Z
[ "pytorch", "bert", "fill-mask", "en", "transformers", "HateBERT", "text classification", "abusive language", "hate speech", "offensive language", "autotrain_compatible" ]
fill-mask
false
GroNLP
null
GroNLP/hateBERT
3,352
6
transformers
1,058
---
language: en
tags:
- HateBERT
- text classification
- abusive language
- hate speech
- offensive language
---

# [Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) • [Valerio Basile](https://www.semanticscholar.org/author/Valerio-Basile/3101511) • [Jelena Mitrovic](https://www.semanticscholar.org/author/Jelena-Mitrovic/145157863) • [Michael Granitzer](https://www.semanticscholar.org/author/M.-Granitzer/2389675)

## Model description

HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned communities on Reddit. The model has been developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau.

For details, check out the paper presented at [WOAH 2021](https://aclanthology.org/2021.woah-1.3/). The code and the fine-tuned models are available on [OSF](https://osf.io/tbd58/?view_onlycb79b3228d4248ddb875eb1803525ad8).

### BibTeX entry and citation info

```bibtex
@inproceedings{caselli-etal-2021-hatebert,
  title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish",
  author = "Caselli, Tommaso and Basile, Valerio and Mitrovi{\'c}, Jelena and Granitzer, Michael",
  booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
  month = aug,
  year = "2021",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2021.woah-1.3",
  doi = "10.18653/v1/2021.woah-1.3",
  pages = "17--25",
  abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.",
}
```
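HateBERT is a drop-in replacement for `bert-base-uncased`-style masked language models, so it can be probed with the fill-mask pipeline or used as a starting point for fine-tuning; a minimal sketch (the example sentence is ours):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="GroNLP/hateBERT")
for prediction in fill("I can't believe they banned the whole [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```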
asahi417/tner-xlm-roberta-base-ontonotes5
122d8bf1ed931dd9571b2c3f38317e04ba648a3b
2021-02-13T00:07:17.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
asahi417
null
asahi417/tner-xlm-roberta-base-ontonotes5
3,348
3
transformers
1,059
# XLM-RoBERTa for NER XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner). ## Usage ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5") model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5") ```
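The snippet above only loads the weights; for quick tagging, the same checkpoint can be wrapped in the token-classification pipeline (a sketch — the example sentence is ours, and the label set follows the OntoNotes 5 scheme stored in the model config):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="asahi417/tner-xlm-roberta-base-ontonotes5",
    aggregation_strategy="simple",
)
print(ner("Jacob Collier is a Grammy-awarded artist from England."))
```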
shtoshni/longformer_coreference_joint
c0336f7402fa98e4293f2995df587fb19519e5b0
2021-11-12T15:52:54.000Z
[ "pytorch", "longformer", "feature-extraction", "arxiv:2109.09667", "transformers" ]
feature-extraction
false
shtoshni
null
shtoshni/longformer_coreference_joint
3,343
null
transformers
1,060
Longformer-large model finetuned for the coreference resolution task. The model is fine-tuned over a mixture of OntoNotes, LitBank, and PreCo. The model is released as part of [this paper](https://arxiv.org/pdf/2109.09667.pdf). Note that the document encoder is to be used with the rest of the model parameters to perform the coreference resolution task. For demo purposes, please check this [Colab notebook](https://colab.research.google.com/drive/11ejXc1wDqzUxpgRH1nLvqEifAX30Z71_?usp=sharing).
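Since only the document encoder is hosted in this repository, a minimal feature-extraction sketch looks like the following; it assumes tokenizer files are included alongside the weights (if not, the base `allenai/longformer-large-4096` tokenizer is the natural fallback), and the coreference head itself must come from the linked codebase:

```python
from transformers import AutoTokenizer, AutoModel

model_name = "shtoshni/longformer_coreference_joint"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Shon bought a book. He read it in one sitting.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual token embeddings for a coreference head
```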
RarePizzaDog/Apes_Bot
e90a334ea9410ef53f029263e51c441d67313c19
2022-04-09T19:21:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
RarePizzaDog
null
RarePizzaDog/Apes_Bot
3,339
null
transformers
1,061
--- tags: - conversational --- # 9APES DialoGPT Model
csebuetnlp/mT5_m2m_crossSum
cbbbc2408fa8fa65e75bc4e2acce6ea4a7395008
2022-04-22T15:12:26.000Z
[ "pytorch", "mt5", "text2text-generation", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "arxiv:2112.08804", "transformers", "summarization", "mT5", "autotrain_compatible" ]
summarization
false
csebuetnlp
null
csebuetnlp/mT5_m2m_crossSum
3,337
1
transformers
1,062
--- tags: - summarization - mT5 language: - am - ar - az - bn - my - zh - en - fr - gu - ha - hi - ig - id - ja - rn - ko - ky - mr - ne - om - ps - fa - pcm - pt - pa - ru - gd - sr - si - so - es - sw - ta - te - th - ti - tr - uk - ur - uz - vi - cy - yo licenses: - cc-by-nc-sa-4.0 widget: - text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization." --- # mT5-m2m-CrossSum This repository contains the many-to-many (m2m) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset. This model tries to **summarize text written in any language in the provided target language.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. 
"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_m2m_crossSum" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) get_lang_id = lambda lang: tokenizer._convert_token_to_id( model.config.task_specific_params["langid_map"][lang][1] ) target_lang = "english" # for a list of available language names see below input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, decoder_start_token_id=get_lang_id(target_lang), max_length=84, no_repeat_ngram_size=2, num_beams=4, )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ### Available target language names - `amharic` - `arabic` - `azerbaijani` - `bengali` - `burmese` - `chinese_simplified` - `chinese_traditional` - `english` - `french` - `gujarati` - `hausa` - `hindi` - `igbo` - `indonesian` - `japanese` - `kirundi` - `korean` - `kyrgyz` - `marathi` - `nepali` - `oromo` - `pashto` - `persian` - `pidgin` - `portuguese` - `punjabi` - `russian` - `scottish_gaelic` - `serbian_cyrillic` - `serbian_latin` - `sinhala` - `somali` - `spanish` - `swahili` - `tamil` - `telugu` - `thai` - `tigrinya` - `turkish` - `ukrainian` - `urdu` - `uzbek` - `vietnamese` - `welsh` - `yoruba` ## Citation If you use this model, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ```
allegro/herbert-klej-cased-v1
6953ff83476f8e7a4afb4131cb629c0cffde6c9e
2021-05-28T16:18:22.000Z
[ "pytorch", "jax", "roberta", "pl", "arxiv:2005.00630", "transformers" ]
null
false
allegro
null
allegro/herbert-klej-cased-v1
3,321
1
transformers
1,063
---
language: pl
---

# HerBERT

**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based language model trained on Polish corpora using only the MLM objective with dynamic whole-word masking. For more details, please refer to: [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://arxiv.org/abs/2005.00630).

## Dataset

The **HerBERT** training dataset is a combination of several publicly available corpora for the Polish language:

| Corpus | Tokens | Texts |
| :------ | ------: | ------: |
| [OSCAR](https://traces1.inria.fr/oscar/)| 6710M | 145M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1084M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.5M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
| [Allegro Articles](https://allegro.pl/artykuly) | 18M | 33k |

## Tokenizer

The training dataset was tokenized into subwords using the [HerBERT Tokenizer](https://huggingface.co/allegro/herbert-klej-cased-tokenizer-v1), a character-level byte-pair encoding with a vocabulary size of 50k tokens. The tokenizer itself was trained on [Wolne Lektury](https://wolnelektury.pl/) and a publicly available subset of the [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=0) with the [fastBPE](https://github.com/glample/fastBPE) library.

The tokenizer utilizes the `XLMTokenizer` implementation; for that reason, it should be loaded as `allegro/herbert-klej-cased-tokenizer-v1`.

## HerBERT models summary

| Model | WWM | Cased | Tokenizer | Vocab Size | Batch Size | Train Steps |
| :------ | ------: | ------: | ------: | ------: | ------: | ------: |
| herbert-klej-cased-v1 | YES | YES | BPE | 50K | 570 | 180k |

## Model evaluation

HerBERT was evaluated on the [KLEJ](https://klejbenchmark.com/) benchmark, a publicly available set of nine evaluation tasks for Polish language understanding. It had the best average performance and obtained the best results for three of them.

| Model | Average | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
| :------ | ------: | ------: | ------: | ------: | ------: | ------: | ------: | ------: | ------: | ------: |
| herbert-klej-cased-v1 | **80.5** | 92.7 | 92.5 | 91.9 | **50.3** | **89.2** | **76.3** | 52.1 | 95.3 | 84.5 |

The full leaderboard is available [online](https://klejbenchmark.com/leaderboard).

## HerBERT usage

Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.0.
Example code: ```python from transformers import XLMTokenizer, RobertaModel tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1") encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors='pt') outputs = model(encoded_input) ``` HerBERT can also be loaded using `AutoTokenizer` and `AutoModel`: ```python tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1") ``` ## License CC BY-SA 4.0 ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rybak-etal-2020-klej, title = "{KLEJ}: Comprehensive Benchmark for {P}olish Language Understanding", author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.111", doi = "10.18653/v1/2020.acl-main.111", pages = "1191--1201", } ``` ## Authors The model was trained by **Allegro Machine Learning Research** team. You can contact us at: <a href="mailto:[email protected]">[email protected]</a>
huggingface-course/bert-finetuned-squad
cdce6f8f43121716ec99d2d2a28ff06ddbefa2e0
2021-11-11T17:49:56.000Z
[ "pytorch", "tf", "tensorboard", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
huggingface-course
null
huggingface-course/bert-finetuned-squad
3,311
2
transformers
1,064
--- tags: - generated_from_trainer datasets: - squad model-index: - name: test-bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-bert-finetuned-squad This model was trained from scratch on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.8.1+cu111 - Datasets 1.12.2.dev0 - Tokenizers 0.10.3
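A minimal extractive question-answering sketch with the `transformers` pipeline (the question and context below are only illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="huggingface-course/bert-finetuned-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], round(result["score"], 3))
```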
cambridgeltl/BioRedditBERT-uncased
53c71817b807682020273a0fa13aca033dfca292
2021-05-19T13:43:40.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "en", "arxiv:2010.03295", "transformers", "BioNLP", "social_media" ]
feature-extraction
false
cambridgeltl
null
cambridgeltl/BioRedditBERT-uncased
3,299
2
transformers
1,065
--- language: - en tags: - BioNLP - social_media --- # BioRedditBERT ## Model description BioRedditBERT is a BERT model initialised from BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) and further pre-trained on health-related Reddit posts. Please view our paper [COMETA: A Corpus for Medical Entity Linking in the Social Media](https://arxiv.org/pdf/2010.03295.pdf) (EMNLP 2020) for more details. ## Training data We crawled all threads from 68 health themed subreddits such as `r/AskDocs`, `r/health` and etc. starting from the beginning of 2015 to the end of 2018, obtaining a collection of more than 800K discussions. This collection was then pruned by removing deleted posts, comments from bots or moderators, and so on. In the end, we obtained the training corpus with ca. 300 million tokens and a vocabulary size of ca. 780,000 words. ## Training procedure We use the same pre-training script in the original [google-research/bert](https://github.com/google-research/bert) repo. The model is initialised with [`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`](https://github.com/dmis-lab/biobert). We train with a batch size of 64, a max sequence length of 64, a learning rate of `2e-5` for 100k steps on two GeForce GTX 1080Ti (11 GB) GPUs. Other hyper-parameters are the same as default. ## Eval results To show the benefit from further pre-training on the social media domain, we demonstrate results on a medical entity linking dataset also in the social media: [AskAPatient](https://zenodo.org/record/55013#.X4ncRmTYpb8) [(Limsopatham and Collier 2016)](https://www.aclweb.org/anthology/P16-1096.pdf). We follow the same 10-fold cross-validation procedure for all models and report the average result without fine-tuning. `[CLS]` is used as representations for entity mentions (we also tried average of all tokens but found `[CLS]` generally performs better). Model | Accuracy@1 | Accuracy@5 -------|---------|--------- [BERT-base-uncased](https://huggingface.co/bert-base-uncased) | 38.2 | 43.3 [BioBERT v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) | 41.4 | 51.5 [ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) | 43.9 | 54.3 [BlueBERT](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/NCBI_BERT_pubmed_mimic_uncased_L-12_H-768_A-12.zip) | 41.5 | 48.5 [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) | 42.3 | 51.9 [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) | 42.5 | 49.6 BioRedditBERT | **44.3** | **56.2** ### BibTeX entry and citation info ```bibtex @inproceedings{basaldella-2020-cometa, title = "{COMETA}: A Corpus for Medical Entity Linking in the Social Media", author = "Basaldella, Marco and Liu, Fangyu, and Shareghi, Ehsan, and Collier, Nigel", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2020", publisher = "Association for Computational Linguistics" } ```
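Following the evaluation setup described above (using `[CLS]` as the representation for entity mentions), here is a minimal sketch for extracting mention embeddings with `transformers`; the example mentions and the similarity computation are only illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/BioRedditBERT-uncased")
model = AutoModel.from_pretrained("cambridgeltl/BioRedditBERT-uncased")

mentions = ["heart attack", "myocardial infarction"]
inputs = tokenizer(mentions, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embeddings = outputs.last_hidden_state[:, 0]  # [CLS] vector per mention
similarity = torch.nn.functional.cosine_similarity(cls_embeddings[0], cls_embeddings[1], dim=0)
print(similarity.item())
```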
philschmid/BERT-Banking77
e08d5e191921b9e0713327dc7e29293ecb286043
2022-06-24T14:31:58.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:banking77", "transformers", "autotrain", "model-index", "co2_eq_emissions" ]
text-classification
false
philschmid
null
philschmid/BERT-Banking77
3,290
1
transformers
1,066
--- tags: autotrain language: en widget: - text: I am still waiting on my card? datasets: - banking77 model-index: - name: BERT-Banking77 results: - task: name: Text Classification type: text-classification dataset: name: BANKING77 type: banking77 metrics: - name: Accuracy type: accuracy value: 92.64 - name: Macro F1 type: macro-f1 value: 92.64 - name: Weighted F1 type: weighted-f1 value: 92.6 - task: type: text-classification name: Text Classification dataset: name: banking77 type: banking77 config: default split: test metrics: - name: Accuracy type: accuracy value: 0.9275974025974026 verified: true - name: Precision Macro type: precision value: 0.9305185253845069 verified: true - name: Precision Micro type: precision value: 0.9275974025974026 verified: true - name: Precision Weighted type: precision value: 0.9305185253845071 verified: true - name: Recall Macro type: recall value: 0.9275974025974028 verified: true - name: Recall Micro type: recall value: 0.9275974025974026 verified: true - name: Recall Weighted type: recall value: 0.9275974025974026 verified: true - name: F1 Macro type: f1 value: 0.927623314966026 verified: true - name: F1 Micro type: f1 value: 0.9275974025974026 verified: true - name: F1 Weighted type: f1 value: 0.927623314966026 verified: true - name: loss type: loss value: 0.3199225962162018 verified: true co2_eq_emissions: 0.03330651014155927 --- # `BERT-Banking77` Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 940131041 - CO2 Emissions (in grams): 0.03330651014155927 ## Validation Metrics - Loss: 0.3505457043647766 - Accuracy: 0.9263261296660118 - Macro F1: 0.9268371013605569 - Micro F1: 0.9263261296660118 - Weighted F1: 0.9259954221865809 - Macro Precision: 0.9305746406646502 - Micro Precision: 0.9263261296660118 - Weighted Precision: 0.929031563971418 - Macro Recall: 0.9263724620088746 - Micro Recall: 0.9263261296660118 - Weighted Recall: 0.9263261296660118 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/philschmid/autotrain-does-it-work-940131041 ``` Or Python API: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_id = 'philschmid/BERT-Banking77' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSequenceClassification.from_pretrained(model_id) classifier = pipeline('text-classification', tokenizer=tokenizer, model=model) classifier('What is the base of the exchange rates?') ```
deepset/gelectra-base-germanquad-distilled
6a8efb1646c90f306b490a11a14e74cc617264e2
2021-12-07T14:49:28.000Z
[ "pytorch", "electra", "question-answering", "de", "dataset:deepset/germanquad", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/gelectra-base-germanquad-distilled
3,288
1
transformers
1,067
---
language: de
datasets:
- deepset/germanquad
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
---

![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg)

## Overview
**Language model:** gelectra-base-germanquad-distilled
**Language:** German
**Training data:** GermanQuAD train set (~ 12MB)
**Eval data:** GermanQuAD test set (~ 5MB)
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021

## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated, so that there are 2204 questions and 2204 · 3 − 76 = 6536 answers, because we removed 76 wrong answers.
- In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.

See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.

## Hyperparameters
```
batch_size = 24
n_epochs = 6
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 2
distillation_loss_weight = 0.75
```

## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set. Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD. The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
```
"exact": 62.4773139745916
"f1": 80.9488017070188
```

![performancetable](https://lh3.google.com/u/0/d/1IFqkq8OZ7TFnGzxmW6eoxXSYa12f2M7O=w1970-h1546-iv1)

## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`

## About us
![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)

We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis
0392a911472f7fa3db4ebacee570be79b16187f2
2021-09-16T18:43:08.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "dataset:financial_phrasebank", "transformers", "generated_from_trainer", "financial", "stocks", "sentiment", "license:apache-2.0", "model-index" ]
text-classification
false
mrm8488
null
mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis
3,288
19
transformers
1,068
--- license: apache-2.0 tags: - generated_from_trainer - financial - stocks - sentiment widget: - text: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ." datasets: - financial_phrasebank metrics: - accuracy model-index: - name: distilRoberta-financial-sentiment results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_allagree metrics: - name: Accuracy type: accuracy value: 0.9823008849557522 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilRoberta-financial-sentiment This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.1116 - Accuracy: 0.9823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 255 | 0.1670 | 0.9646 | | 0.209 | 2.0 | 510 | 0.2290 | 0.9558 | | 0.209 | 3.0 | 765 | 0.2044 | 0.9558 | | 0.0326 | 4.0 | 1020 | 0.1116 | 0.9823 | | 0.0326 | 5.0 | 1275 | 0.1127 | 0.9779 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
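A minimal inference sketch with the `transformers` pipeline, reusing the widget example from the metadata above; the returned label names depend on the model configuration derived from the financial_phrasebank sentiment classes:

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
)

print(classifier("Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."))
```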
NonzeroCornet34/DialoGPT-small-hansolo
2ee056889d261a42265d5aee73fb4f220d693b40
2022-04-12T02:43:43.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
NonzeroCornet34
null
NonzeroCornet34/DialoGPT-small-hansolo
3,285
null
transformers
1,069
--- tags: - conversational --- # Han Solo DialoGPT Model
hatmimoha/arabic-ner
ebf5b11a9673ff9cd4acf735c55dd45956b1858d
2022-04-14T12:08:17.000Z
[ "pytorch", "tf", "jax", "bert", "token-classification", "ar", "transformers", "autotrain_compatible" ]
token-classification
false
hatmimoha
null
hatmimoha/arabic-ner
3,283
2
transformers
1,070
--- language: ar --- # Arabic Named Entity Recognition Model Pretrained BERT-based ([arabic-bert-base](https://huggingface.co/asafaya/bert-base-arabic)) Named Entity Recognition model for Arabic. The pre-trained model can recognize the following entities: 1. **PERSON** - و هذا ما نفاه المعاون السياسي للرئيس ***نبيه بري*** ، النائب ***علي حسن خليل*** - لكن أوساط ***الحريري*** تعتبر أنه ضحى كثيرا في سبيل البلد - و ستفقد الملكة ***إليزابيث الثانية*** بذلك سيادتها على واحدة من آخر ممالك الكومنولث 2. **ORGANIZATION** - حسب أرقام ***البنك الدولي*** - أعلن ***الجيش العراقي*** - و نقلت وكالة ***رويترز*** عن ثلاثة دبلوماسيين في ***الاتحاد الأوروبي*** ، أن ***بلجيكا*** و ***إيرلندا*** و ***لوكسمبورغ*** تريد أيضاً مناقشة - ***الحكومة الاتحادية*** و ***حكومة إقليم كردستان*** - و هو ما يثير الشكوك حول مشاركة النجم البرتغالي في المباراة المرتقبة أمام ***برشلونة*** الإسباني في 3. ***LOCATION*** - الجديد هو تمكين اللاجئين من “ مغادرة الجزيرة تدريجياً و بهدوء إلى ***أثينا*** ” - ***جزيرة ساكيز*** تبعد 1 كم عن ***إزمير*** 4. **DATE** - ***غدا الجمعة*** - ***06 أكتوبر 2020*** - ***العام السابق*** 5. **PRODUCT** - عبر حسابه ب ***تطبيق “ إنستغرام ”*** - الجيل الثاني من ***نظارة الواقع الافتراضي أوكولوس كويست*** تحت اسم " ***أوكولوس كويست 2*** " 6. **COMPETITION** - عدم المشاركة في ***بطولة فرنسا المفتوحة للتنس*** - في مباراة ***كأس السوبر الأوروبي*** 7. **PRIZE** - ***جائزة نوبل ل لآداب*** - الذي فاز ب ***جائزة “ إيمي ” لأفضل دور مساند*** 8. **EVENT** - تسجّل أغنية جديدة خاصة ب ***العيد الوطني السعودي*** - ***مهرجان المرأة يافوية*** في دورته الرابعة 9. **DISEASE** - في مكافحة فيروس ***كورونا*** و عدد من الأمراض - الأزمات المشابهة مثل “ ***انفلونزا الطيور*** ” و ” ***انفلونزا الخنازير*** ## Example [Find here a complete example to use this model](https://github.com/hatmimoha/arabic-ner) ## Training Corpus The training corpus is made of 378.000 tokens (14.000 sentences) collected from the Web and annotated manually. ## Results The results on a valid corpus made of 30.000 tokens shows an F-measure of ~87%.
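As a complement to the complete example linked above, here is a minimal pipeline sketch; the input sentence is only an illustration:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hatmimoha/arabic-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("أعلن الجيش العراقي عن خطة جديدة في بغداد"))
```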
tscholak/cxmefzzi
2899ad9eafd58585ef3cb8634367c404c2d266e9
2022-01-10T21:49:50.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:spider", "arxiv:2109.05093", "transformers", "text2sql", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
tscholak
null
tscholak/cxmefzzi
3,282
2
transformers
1,071
--- language: - en thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab" tags: - text2sql widget: - "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id" license: "apache-2.0" datasets: - spider metrics: - spider --- ## tscholak/cxmefzzi Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [T5-3B](https://huggingface.co/t5-3b). ### Training Data The model has been fine-tuned on the 7000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves Spider's zero-shot text-to-SQL translation task, and that means that it can generalize to unseen SQL databases. ### Training Objective This model was initialized with [T5-3B](https://huggingface.co/t5-3b) and fine-tuned with the text-to-text generation objective. Questions are always grounded in a database schema, and the model is trained to predict the SQL query that would be used to answer the question. The input to the model is composed of the user's natural language question, the database identifier, and a list of tables and their columns: ``` [question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... ``` The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's question: ``` [db_id] | [sql] ``` ### Performance Out of the box, this model achieves 71.5 % exact-set match accuracy and 74.4 % execution accuracy on the Spider development set. On the test set, the model achieves 68.0 % exact-set match accuracy and 70.1 % execution accuracy. Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **75.5 %** exact-set match accuracy and **79.3 %** execution accuracy on the Spider development set. On the test set and with PICARD, the model achieves **71.9 %** exact-set match accuracy and **75.1 %** execution accuracy. ### Usage Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model. ### References 1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) 2. [Official PICARD code](https://github.com/ElementAI/picard) ### Citation ```bibtex @inproceedings{Scholak2021:PICARD, author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau}, title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.779", pages = "9895--9901", } ```
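A rough sketch of plain (unconstrained) generation with `transformers`, serializing the schema in the format described above; note that PICARD constrained decoding is not applied here, the schema below is abbreviated for illustration, and the underlying T5-3B checkpoint is large, so loading it requires substantial memory:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tscholak/cxmefzzi")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/cxmefzzi")

# Question, database id, and schema serialized as "[question] | [db_id] | [table] : [column], ... | ..."
text = (
    "How many singers do we have? | concert_singer | "
    "stadium : stadium_id, location, name, capacity | "
    "singer : singer_id, name, country, song_name, age"
)

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)

# Expected output format: "[db_id] | [sql]"
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```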
microsoft/resnet-18
2f536bd335677c6b111b3d103af458ef57a6145e
2022-07-01T17:33:48.000Z
[ "pytorch", "tf", "resnet", "image-classification", "dataset:imagenet-1k", "arxiv:1512.03385", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
microsoft
null
microsoft/resnet-18
3,267
null
transformers
1,072
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# ResNet

ResNet model trained on imagenet-1k. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) and first released in [this repository](https://github.com/KaimingHe/deep-residual-networks).

Disclaimer: The team releasing ResNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

ResNet introduced residual connections, which make it possible to train networks with a previously unseen number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/resnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
>>> from transformers import AutoFeatureExtractor, ResNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-18")
>>> model = ResNetForImageClassification.from_pretrained("microsoft/resnet-18")

>>> inputs = feature_extractor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tiger cat
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/resnet).
darthrussel/DialoGPT-small-homerbot-halfdata
dcc0b9cb579623477dc7e5ccfb53d2e0aa2b2f7c
2022-03-30T19:39:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
darthrussel
null
darthrussel/DialoGPT-small-homerbot-halfdata
3,263
null
transformers
1,073
--- tags: - conversational --- # Homer DialoGPT Model half data
Garsic/DialoGPT-medium-jill
9b24790edb159d65d1fc3aea3533f8d2eb4abad0
2022-04-16T20:50:34.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Garsic
null
Garsic/DialoGPT-medium-jill
3,254
null
transformers
1,074
--- tags: - conversational --- # dialog model
dbmdz/bert-base-turkish-128k-uncased
f5287aecee60f0c597c11c34341cb92d31c0e71b
2021-05-19T15:13:16.000Z
[ "pytorch", "tf", "jax", "bert", "tr", "transformers", "license:mit" ]
null
false
dbmdz
null
dbmdz/bert-base-turkish-128k-uncased
3,245
4
transformers
1,075
---
language: tr
license: mit
---

# 🤗 + 📚 dbmdz Turkish BERT model

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources an uncased model for Turkish 🎉

# 🇹🇷 BERTurk

BERTurk is a community-driven uncased BERT model for Turkish.

Some datasets used for pretraining and evaluation were contributed by the awesome Turkish NLP community, as was the decision for the model name: BERTurk.

## Stats

The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).

The final training corpus has a size of 35GB and 4,404,976,662 tokens.

Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model on a TPU v3-8 for 2M steps. For this model we use a vocab size of 128k.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

| Model | Downloads
| -------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-turkish-128k-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/vocab.txt)

## Usage

With Transformers >= 2.3 our BERTurk uncased model can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")
```

## Results

For results on PoS tagging or NER tasks, please refer to [this repository](https://github.com/stefan-it/turkish-bert).

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation.

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
uer/roberta-base-finetuned-dianping-chinese
9498566e5da5b6cdc52f8eea002be9c24aae959a
2022-02-20T07:57:32.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "zh", "arxiv:1909.05658", "arxiv:1708.02657", "transformers" ]
text-classification
false
uer
null
uer/roberta-base-finetuned-dianping-chinese
3,237
7
transformers
1,076
--- language: zh widget: - text: "这本书真的很不错" --- # Chinese RoBERTa-Base Models for Text Classification ## Model description This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) (in UER-py format), or via HuggingFace from the links below: | Dataset | Link | | :-----------: | :-------------------------------------------------------: | | **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] | | **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] | | **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] | | **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] | | **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] | ## How to use You can use this model directly with a pipeline for text classification (take the case of roberta-base-finetuned-chinanews-chinese): ```python >>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline >>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese') >>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese') >>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) >>> text_classification("北京上个月召开了两会") [{'label': 'mainland China politics', 'score': 0.7211663722991943}] ``` ## Training data 5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in corresponding [paper](https://arxiv.org/abs/1708.02657). ## Training procedure Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on development set is achieved. We use the same hyper-parameters on different models. 
Taking the case of roberta-base-finetuned-chinanews-chinese ``` python3 run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \ --vocab_path models/google_zh_vocab.txt \ --train_path datasets/glyph/chinanews/train.tsv \ --dev_path datasets/glyph/chinanews/dev.tsv \ --output_model_path models/chinanews_classifier_model.bin \ --learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512 ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \ --output_model_path pytorch_model.bin \ --layers_num 12 ``` ### BibTeX entry and citation info ``` @article{devlin2018bert, title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding}, author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1810.04805}, year={2018} } @article{liu2019roberta, title={Roberta: A robustly optimized bert pretraining approach}, author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin}, journal={arXiv preprint arXiv:1907.11692}, year={2019} } @article{zhang2017encoding, title={Which encoding is the best for text classification in chinese, english, japanese and korean?}, author={Zhang, Xiang and LeCun, Yann}, journal={arXiv preprint arXiv:1708.02657}, year={2017} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese [jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese [dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese [ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese [chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese
joeddav/distilbert-base-uncased-go-emotions-student
8f145be763be749ae21d1209758c855d5ddf1b9c
2021-02-19T22:15:52.000Z
[ "pytorch", "tf", "distilbert", "text-classification", "en", "dataset:go_emotions", "transformers", "tensorflow", "license:mit" ]
text-classification
false
joeddav
null
joeddav/distilbert-base-uncased-go-emotions-student
3,230
11
transformers
1,077
---
language: en
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- go_emotions
license: mit
widget:
- text: "I feel lucky to be here."
---

# distilbert-base-uncased-go-emotions-student

## Model Description

This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using [this script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation). It was trained with mixed precision for 10 epochs and otherwise used the default script arguments.

## Intended Usage

The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label classification to create pseudo-labels.
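A minimal classification sketch with the `transformers` pipeline, reusing the widget text; returning all scores makes the distilled emotion distribution visible:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-go-emotions-student",
    return_all_scores=True,  # scores for every GoEmotions label
)

predictions = classifier("I feel lucky to be here.")
# Print the three highest-scoring emotion labels.
print(sorted(predictions[0], key=lambda d: d["score"], reverse=True)[:3])
```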
facebook/mgenre-wiki
dbb6f7bc18c4f477073231b125254182f1290155
2022-06-14T14:23:17.000Z
[ "pytorch", "tf", "jax", "mbart", "text2text-generation", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bm", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gn", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kg", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "qu", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "te", "th", "ti", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yo", "zh", "arxiv:2103.12528", "arxiv:2001.08210", "transformers", "retrieval", "entity-retrieval", "named-entity-disambiguation", "entity-disambiguation", "named-entity-linking", "entity-linking", "autotrain_compatible" ]
text2text-generation
false
facebook
null
facebook/mgenre-wiki
3,230
5
transformers
1,078
--- language: - multilingual - af - am - ar - as - az - be - bg - bm - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - ff - fi - fr - fy - ga - gd - gl - gn - gu - ha - he - hi - hr - ht - hu - hy - id - ig - is - it - ja - jv - ka - kg - kk - km - kn - ko - ku - ky - la - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - om - or - pa - pl - ps - pt - qu - ro - ru - sa - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - te - th - ti - tl - tn - tr - uk - ur - uz - vi - wo - xh - yo - zh tags: - retrieval - entity-retrieval - named-entity-disambiguation - entity-disambiguation - named-entity-linking - entity-linking - text2text-generation --- # mGENRE The mGENRE (multilingual Generative ENtity REtrieval) system as presented in [Multilingual Autoregressive Entity Linking](https://arxiv.org/abs/2103.12528) implemented in pytorch. In a nutshell, mGENRE uses a sequence-to-sequence approach to entity retrieval (e.g., linking), based on fine-tuned [mBART](https://arxiv.org/abs/2001.08210) architecture. GENRE performs retrieval generating the unique entity name conditioned on the input text using constrained beam search to only generate valid identifiers. The model was first released in the [facebookresearch/GENRE](https://github.com/facebookresearch/GENRE) repository using `fairseq` (the `transformers` models are obtained with a conversion script similar to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py). This model was trained on 105 languages from Wikipedia. ## BibTeX entry and citation info **Please consider citing our works if you use code from this repository.** ```bibtex @article{decao2020multilingual, author = {De Cao, Nicola and Wu, Ledell and Popat, Kashyap and Artetxe, Mikel and Goyal, Naman and Plekhanov, Mikhail and Zettlemoyer, Luke and Cancedda, Nicola and Riedel, Sebastian and Petroni, Fabio}, title = "{Multilingual Autoregressive Entity Linking}", journal = {Transactions of the Association for Computational Linguistics}, volume = {10}, pages = {274-290}, year = {2022}, month = {03}, issn = {2307-387X}, doi = {10.1162/tacl_a_00460}, url = {https://doi.org/10.1162/tacl\_a\_00460}, eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00460/2004070/tacl\_a\_00460.pdf}, } ``` ## Usage Here is an example of generation for Wikipedia page disambiguation: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM # OPTIONAL: load the prefix tree (trie), you need to additionally download # https://huggingface.co/facebook/mgenre-wiki/blob/main/trie.py and # https://huggingface.co/facebook/mgenre-wiki/blob/main/titles_lang_all105_trie_with_redirect.pkl # that is fast but memory inefficient prefix tree (trie) -- it is implemented with nested python `dict` # NOTE: loading this map may take up to 10 minutes and occupy a lot of RAM! 
# import pickle # from trie import Trie # with open("titles_lang_all105_marisa_trie_with_redirect.pkl", "rb") as f: # trie = Trie.load_from_dict(pickle.load(f)) # or a memory efficient but a bit slower prefix tree (trie) -- it is implemented with `marisa_trie` from # https://huggingface.co/facebook/mgenre-wiki/blob/main/titles_lang_all105_marisa_trie_with_redirect.pkl # from genre.trie import MarisaTrie # with open("titles_lang_all105_marisa_trie_with_redirect.pkl", "rb") as f: # trie = pickle.load(f) tokenizer = AutoTokenizer.from_pretrained("facebook/mgenre-wiki") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mgenre-wiki").eval() sentences = ["[START] Einstein [END] era un fisico tedesco."] # Italian for "[START] Einstein [END] was a German physicist." outputs = model.generate( **tokenizer(sentences, return_tensors="pt"), num_beams=5, num_return_sequences=5, # OPTIONAL: use constrained beam search # prefix_allowed_tokens_fn=lambda batch_id, sent: trie.get(sent.tolist()), ) tokenizer.batch_decode(outputs, skip_special_tokens=True) ``` which outputs the following top-5 predictions (using constrained beam search) ``` ['Albert Einstein >> it', 'Albert Einstein (disambiguation) >> en', 'Alfred Einstein >> it', 'Alberto Einstein >> it', 'Einstein >> it'] ```
awvik360/DialoGPT-medium-plemons-04262022
bb7f8797fa4d04ecb63c59f643b02c26f0598fc0
2022-04-27T01:46:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
awvik360
null
awvik360/DialoGPT-medium-plemons-04262022
3,223
null
transformers
1,079
--- tags: - conversational --- # My Awesome Model
sgugger/funnel-random-tiny
c6a1a5e19530e187b6cecd5457d69788645ef668
2021-04-08T19:31:32.000Z
[ "pytorch", "tf", "funnel", "feature-extraction", "transformers" ]
feature-extraction
false
sgugger
null
sgugger/funnel-random-tiny
3,218
null
transformers
1,080
Entry not found
dbmdz/bert-base-italian-xxl-uncased
08cf646465c0cab40fe7b68bf98ae9f7247d1804
2021-05-19T15:03:37.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "it", "dataset:wikipedia", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
dbmdz
null
dbmdz/bert-base-italian-xxl-uncased
3,206
4
transformers
1,081
--- language: it license: mit datasets: - wikipedia --- # 🤗 + 📚 dbmdz BERT and ELECTRA models In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉 # Italian BERT The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are training with an initial sequence length of 512 subwords for ~2-3M steps. For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens. Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch of the "real" vocab size of 31102, compared to the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information. The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much following the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra). ## Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) | `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) | `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) | `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) | `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • 
[`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra).

## Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the Italian XXL ELECTRA model (discriminator), just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
facebook/dpr-question_encoder-multiset-base
1b547dba8676a9b96d143a6fffabe21b50553928
2020-11-25T16:59:33.000Z
[ "pytorch", "tf", "dpr", "feature-extraction", "transformers" ]
feature-extraction
false
facebook
null
facebook/dpr-question_encoder-multiset-base
3,206
null
transformers
1,082
Entry not found
tunib/electra-ko-en-base
8004b116b7cac4b0ade59d1da0e58641da725788
2021-09-28T07:50:21.000Z
[ "pytorch", "electra", "pretraining", "arxiv:2003.10555", "transformers" ]
null
false
tunib
null
tunib/electra-ko-en-base
3,201
6
transformers
1,083
# TUNiB-Electra We release several new versions of the [ELECTRA](https://arxiv.org/abs/2003.10555) model, which we name TUNiB-Electra. There are two motivations. First, all the existing pre-trained Korean encoder models are monolingual, that is, they have knowledge about Korean only. Our bilingual models are based on the balanced corpora of Korean and English. Second, we want new off-the-shelf models trained on much more texts. To this end, we collected a large amount of Korean text from various sources such as blog posts, comments, news, web novels, etc., which sum up to 100 GB in total. ## How to use You can use this model directly with [transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoModel, AutoTokenizer # Base Model (Korean-English bilingual model) tokenizer = AutoTokenizer.from_pretrained('tunib/electra-ko-en-base') model = AutoModel.from_pretrained('tunib/electra-ko-en-base') ``` ### Tokenizer example ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained('tunib/electra-ko-en-base') >>> tokenizer.tokenize("tunib is a natural language processing tech startup.") ['tun', '##ib', 'is', 'a', 'natural', 'language', 'processing', 'tech', 'startup', '.'] >>> tokenizer.tokenize("튜닙은 자연어처리 테크 스타트업입니다.") ['튜', '##닙', '##은', '자연', '##어', '##처리', '테크', '스타트업', '##입니다', '.'] ``` ## Results on Korean downstream tasks | |**# Params** |**Avg.**| **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) |**Korean-Hate-Speech (Dev)**<br/>(F1)| | :----------------:| :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :---------------------------: | :---------------------------: | :----------------: | |***TUNiB-Electra-ko-base*** | 110M | **85.99** | 90.95 | 87.63 | 84.65 | **82.27** | 85.00 | 95.77 | 64.01 / 90.32 |71.40 | |***TUNiB-Electra-ko-en-base*** | 133M |85.34 |90.59 | 87.25 | **84.90** | 80.43 | 83.81 | 94.85 | 83.09 / 92.06 |68.83 | | [KoELECTRA-base-v3](https://github.com/monologg/KoELECTRA) | 110M | 85.92 |90.63 | **88.11** | 84.45 | 82.24 | **85.53** | 95.25 | **84.83 / 93.45** | 67.61 | | [KcELECTRA-base](https://github.com/Beomi/KcELECTRA) | 124M| 84.75 |**91.71** | 86.90 | 74.80 | 81.65 | 82.65 | **95.78** | 70.60 / 90.11 | **74.49** | | [KoBERT-base](https://github.com/SKTBrain/KoBERT) | 90M | 84.17 | 89.63 | 86.11 | 80.65 | 79.00 | 79.64 | 93.93 | 52.81 / 80.27 | 66.21 | | [KcBERT-base](https://github.com/Beomi/KcBERT) | 110M | 81.37 | 89.62 | 84.34 | 66.95 | 74.85 | 75.57 | 93.93 | 60.25 / 84.39 | 68.77 | | [XLM-Roberta-base](https://github.com/pytorch/fairseq/tree/master/examples/xlmr) | 280M | 85.74 |89.49 | 86.26 | 82.95 | 79.92 | 79.09 | 93.53 | 64.70 / 88.94 | 64.06 | ## Results on English downstream tasks | |**# Params** | **Avg.** |**CoLA**<br/>(MCC) | **SST**<br/>(Acc) |MRPC<br/>(Acc)| **STS**<br/>(Spearman) | **QQP**<br/>(Acc) | **MNLI**<br/>(Acc) | **QNLI**<br/>(Acc) | **RTE**<br/>(Acc) | | :----------------:| :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :---------------------------: | :---------------------------: | :---------------------------: | |***TUNiB-Electra-ko-en-base*** | 133M | 85.2| 
**65.36** | 92.09 | **88.97** | **90.61** | **90.91** | 85.32 | 91.51 |**76.53**| |[ELECTRA-base](https://github.com/google-research/electra) | 110M | **85.7** | 64.6 | **96.0** | 88.1| 90.2 | 89.5 | **88.5** | **93.1** | 75.2 | |[BERT-base](https://github.com/google-research/bert) | 110M | 80.8| 52.1 | 93.5 | 84.8| 85.8 | 89.2 | 84.6 | 90.5 | 66.4 |
alvaroalon2/biobert_diseases_ner
ce0fd86ac9e145d1a6ca3455219843e0a855471f
2021-07-07T12:35:55.000Z
[ "pytorch", "bert", "token-classification", "English", "dataset:BC5CDR-diseases", "dataset:ncbi_disease", "transformers", "NER", "Biomedical", "Diseases", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
alvaroalon2
null
alvaroalon2/biobert_diseases_ner
3,179
6
transformers
1,084
--- language: "English" license: apache-2.0 tags: - token-classification - NER - Biomedical - Diseases datasets: - BC5CDR-diseases - ncbi_disease --- BioBERT model fine-tuned in NER task with BC5CDR-diseases and NCBI-diseases corpus This was fine-tuned in order to use it in a BioNER/BioNEN system which is available at: https://github.com/librairy/bio-ner
chocoduck/Joey_bot
1b7fdb7d87203116427a38b24890ded0df104f26
2022-03-20T11:31:56.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
chocoduck
null
chocoduck/Joey_bot
3,173
null
transformers
1,085
--- tags: - conversational --- # Joey_bot A GPT-2 based conversational model.
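A minimal, illustrative sketch of a single chat turn, assuming the model follows the usual DialoGPT convention of separating dialogue turns with the EOS token (the prompt text and generation settings are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chocoduck/Joey_bot")
model = AutoModelForCausalLM.from_pretrained("chocoduck/Joey_bot")

# Encode one user turn, terminated with the EOS token as in DialoGPT-style training
input_ids = tokenizer.encode("How you doin'?" + tokenizer.eos_token, return_tensors="pt")

# Generate the bot's reply and strip the prompt tokens before decoding
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```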
snrspeaks/KeyPhraseTransformer
4a31635920d6d0fcaf8d13eb9069cb898e3c1523
2022-03-25T13:05:44.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
snrspeaks
null
snrspeaks/KeyPhraseTransformer
3,172
1
transformers
1,086
--- license: mit ---
lvwerra/gpt2-imdb
f1bfd819c6bee6c18fa5f95bfe88d9198839a435
2021-05-23T08:38:34.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
lvwerra
null
lvwerra/gpt2-imdb
3,167
1
transformers
1,087
# GPT2-IMDB ## What is it? A GPT2 (`gpt2`) language model fine-tuned on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). ## Training setting The GPT2 language model was fine-tuned for 1 epoch on the IMDB dataset. All reviews were joined into a single text file separated by the EOS token: ``` import pandas as pd df = pd.read_csv("imdb-dataset.csv") imdb_str = " <|endoftext|> ".join(df['review'].tolist()) with open('imdb.txt', 'w') as f: f.write(imdb_str) ``` To train the model, the `run_language_modeling.py` script from the `transformers` library was used: ``` python run_language_modeling.py --train_data_file imdb.txt --output_dir gpt2-imdb --model_type gpt2 --model_name_or_path gpt2 ```
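For inference, the fine-tuned checkpoint can be loaded like any other GPT-2 model. A minimal sketch using the `text-generation` pipeline (the prompt and sampling parameters are chosen only for illustration):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="lvwerra/gpt2-imdb")

# Continue a movie-review-style prompt with the IMDB-tuned model
print(generator("This movie was", max_length=50, do_sample=True, top_p=0.9))
```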
sentence-transformers/msmarco-roberta-base-v3
80e7e11abacef57acc1225f6b3517b74c42b27f2
2022-06-15T22:06:07.000Z
[ "pytorch", "tf", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/msmarco-roberta-base-v3
3,150
null
sentence-transformers
1,088
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/msmarco-roberta-base-v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-roberta-base-v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-roberta-base-v3') model = AutoModel.from_pretrained('sentence-transformers/msmarco-roberta-base-v3') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-roberta-base-v3) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 510, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
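## Semantic Search Example

A minimal retrieval sketch, assuming cosine similarity as the scoring function and a recent `sentence-transformers` release (older releases expose the same function as `util.pytorch_cos_sim`); the query and passages are invented for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-roberta-base-v3')

query_embedding = model.encode("How many people live in London?")
passage_embeddings = model.encode([
    "Around 9 million people live in London.",
    "London is known for its financial district.",
])

# Rank passages by cosine similarity to the query
scores = util.cos_sim(query_embedding, passage_embeddings)
print(scores)
```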
makiharukawa/DialoGPT-small-oples
1f97f0d6f62bab9ca4604ffae2f8bc1ae17c2dc3
2022-04-22T14:22:15.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
makiharukawa
null
makiharukawa/DialoGPT-small-oples
3,143
null
transformers
1,089
--- tags: - conversational --- # Personal DialoGPT model
sshleifer/distill-pegasus-cnn-16-4
2055eea8e1a19ac362d3f975ffbc6d9e57e3029c
2020-10-08T03:05:37.000Z
[ "pytorch", "pegasus", "text2text-generation", "en", "arxiv:1912.08777", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
sshleifer
null
sshleifer/distill-pegasus-cnn-16-4
3,141
1
transformers
1,090
--- language: en tags: - summarization --- ### Pegasus Models See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html) Original TF 1 code [here](https://github.com/google-research/pegasus) Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019 Maintained by: [@sshleifer](https://twitter.com/sam_shleifer) Task: Summarization The following is copied from the authors' README. # Mixed & Stochastic Checkpoints We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table. | dataset | C4 | HugeNews | Mixed & Stochastic| | ---- | ---- | ---- | ----| | xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64| | cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30| | newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18| | multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95| | gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76| | wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *| | reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94| | big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *| | arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67| | pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25| | aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51| | billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59| The "Mixed & Stochastic" model has the following changes: - trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples). - trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity). - the model uniformly sample a gap sentence ratio between 15% and 45%. - importance sentences are sampled using a 20% uniform noise to importance scores. - the sentencepiece tokenizer is updated to be able to encode newline character. (*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data: - wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information. - we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS. The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper): trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples). trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity). the model uniformly sample a gap sentence ratio between 15% and 45%. importance sentences are sampled using a 20% uniform noise to importance scores. the sentencepiece tokenizer is updated to be able to encode newline character. Citation ``` @misc{zhang2019pegasus, title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu}, year={2019}, eprint={1912.08777}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
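A minimal summarization sketch for this distilled checkpoint (the input text is abbreviated and purely illustrative):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "sshleifer/distill-pegasus-cnn-16-4"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires."
)

# Tokenize, generate an abstractive summary, and decode it back to text
batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```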
wissamantoun/araelectra-base-artydiqa
5de9f5b88e471c6e95e1100e7aec89dfc783a4b9
2021-04-05T11:58:31.000Z
[ "pytorch", "electra", "question-answering", "ar", "dataset:tydiqa", "arxiv:2012.15516", "transformers", "autotrain_compatible" ]
question-answering
false
wissamantoun
null
wissamantoun/araelectra-base-artydiqa
3,131
2
transformers
1,091
--- language: ar datasets: - tydiqa widget: - text: "ما هو نظام الحكم في لبنان؟" context: "لبنان أو (رسميا: الجمهورية اللبنانية)، هي دولة عربية واقعة في الشرق الأوسط في غرب القارة الآسيوية. تحدها سوريا من الشمال و الشرق، و فلسطين المحتلة - إسرائيل من الجنوب، وتطل من جهة الغرب على البحر الأبيض المتوسط. هو بلد ديمقراطي جمهوري طوائفي. معظم سكانه من العرب المسلمين و المسيحيين. وبخلاف غالبية الدول العربية هناك وجود فعال للمسيحيين في الحياة العامة والسياسية. هاجر وانتشر أبناؤه حول العالم منذ أيام الفينيقيين، وحاليا فإن عدد اللبنانيين المهاجرين يقدر بضعف عدد اللبنانيين المقيمين. واجه لبنان منذ القدم تعدد الحضارات التي عبرت فيه أو احتلت أراضيه وذلك لموقعه الوسطي بين الشمال الأوروبي والجنوب العربي والشرق الآسيوي والغرب الأفريقي، ويعد هذا الموقع المتوسط من أبرز الأسباب لتنوع الثقافات في لبنان، وفي الوقت ذاته من الأسباب المؤدية للحروب والنزاعات على مر العصور تجلت بحروب أهلية ونزاع مصيري مع إسرائيل. ويعود أقدم دليل على استيطان الإنسان في لبنان ونشوء حضارة على أرضه إلى أكثر من 7000 سنة. في القدم، سكن الفينيقيون أرض لبنان الحالية مع جزء من أرض سوريا و فلسطين، وهؤلاء قوم ساميون اتخذوا من الملاحة والتجارة مهنة لهم، وازدهرت حضارتهم طيلة 2500 سنة تقريبا (من حوالي سنة 3000 حتى سنة 539 ق.م). وقد مرت على لبنان عدة حضارات وشعوب استقرت فيه منذ عهد الفينيقين، مثل المصريين القدماء، الآشوريين، الفرس، الإغريق، الرومان، الروم البيزنطيين، العرب، الصليبيين، الأتراك العثمانيين، فالفرنسيين." --- <img src="https://raw.githubusercontent.com/WissamAntoun/arabic-wikipedia-qa-streamlit/main/is2alni_logo.png" width="150" align="center"/> # Arabic QA AraELECTRA powered Arabic Wikipedia QA system with Streamlit [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main) This model is trained on the Arabic section of ArTyDiQA using the colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hik0L_Dxg6WwJFcDPP1v74motSkst4gE?usp=sharing) # How to use: ```bash git clone https://github.com/aub-mind/arabert pip install pyarabic ``` ```python from arabert.preprocess import ArabertPreprocessor from transformers import pipeline prep = ArabertPreprocessor("aubmindlab/araelectra-base-discriminator") #or empty string it's the same qa_pipe =pipeline("question-answering",model="wissamantoun/araelectra-base-artydiqa") text = " ما هو نظام الحكم في لبنان؟" context = """ لبنان أو (رسميًّا: الجُمْهُورِيَّة اللبنانيَّة)، هي دولة عربيّة واقِعَة في الشَرق الأوسط في غرب القارة الآسيويّة. تَحُدّها سوريا من الشمال و‌الشرق، و‌فلسطين المحتلة - إسرائيل من الجنوب، وتطل من جهة الغرب على البحر الأبيض المتوسط. هو بلد ديمقراطي جمهوري طوائفي. مُعظم سكانه من العرب المسلمين و‌المسيحيين. وبخلاف غالبيّة الدول العربيّة هناك وجود فعّال للمسيحيين في الحياة العامّة والسياسيّة. هاجر وانتشر أبناؤه حول العالم منذ أيام الفينيقيين، وحاليًّا فإن عدد اللبنانيين المهاجرين يُقدَّر بضعف عدد اللبنانيين المقيمين. واجه لبنان منذ القدم تعدد الحضارات التي عبرت فيه أو احتلّت أراضيه وذلك لموقعه الوسطي بين الشمال الأوروبي والجنوب العربي والشرق الآسيوي والغرب الأفريقي، ويعد هذا الموقع المتوسط من أبرز الأسباب لتنوع الثقافات في لبنان، وفي الوقت ذاته من الأسباب المؤدية للحروب والنزاعات على مر العصور تجلت بحروب أهلية ونزاع مصيري مع إسرائيل. ويعود أقدم دليل على استيطان الإنسان في لبنان ونشوء حضارة على أرضه إلى أكثر من 7000 سنة. 
في القدم، سكن الفينيقيون أرض لبنان الحالية مع جزء من أرض سوريا و‌فلسطين، وهؤلاء قوم ساميون اتخذوا من الملاحة والتجارة مهنة لهم، وازدهرت حضارتهم طيلة 2500 سنة تقريبًا (من حوالي سنة 3000 حتى سنة 539 ق.م). وقد مرّت على لبنان عدّة حضارات وشعوب استقرت فيه منذ عهد الفينيقين، مثل المصريين القدماء، الآشوريين، الفرس، الإغريق، الرومان، الروم البيزنطيين، العرب، الصليبيين، الأتراك العثمانيين، فالفرنسيين. """ context = prep.preprocess(context)# don't forget to preprocess the question and the context to get the optimal results result = qa_pipe(question=text,context=context) """ {'answer': 'ديمقراطي جمهوري طوائفي', 'end': 241, 'score': 0.4910127818584442, 'start': 219} """ ``` # If you used this model please cite us as : ``` @misc{antoun2020araelectra, title={AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding}, author={Wissam Antoun and Fady Baly and Hazem Hajj}, year={2020}, eprint={2012.15516}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
gerulata/slovakbert
0557b0aa92a9e5abb6d9ec977ce70bc90662083b
2021-10-01T07:53:31.000Z
[ "pytorch", "tf", "roberta", "fill-mask", "sk", "dataset:wikipedia", "dataset:opensubtitles", "dataset:oscar", "dataset:gerulatawebcrawl", "dataset:gerulatamonitoring", "dataset:blbec.online", "arxiv:2109.15254", "transformers", "SlovakBERT", "license:mit", "autotrain_compatible" ]
fill-mask
false
gerulata
null
gerulata/slovakbert
3,127
3
transformers
1,092
--- language: sk tags: - SlovakBERT license: mit datasets: - wikipedia - opensubtitles - oscar - gerulatawebcrawl - gerulatamonitoring - blbec.online --- # SlovakBERT (base-sized model) SlovakBERT pretrained model on Slovak language using a masked language modeling (MLM) objective. This model is case-sensitive: it makes a difference between slovensko and Slovensko. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. **IMPORTANT**: The model was not trained on the “ and ” (direct quote) character -> so before tokenizing the text, it is advised to replace all “ and ” (direct quote marks) with a single "(double quote marks). ### How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='gerulata/slovakbert') unmasker("Deti sa <mask> na ihrisku.") [{'sequence': 'Deti sa hrali na ihrisku.', 'score': 0.6355380415916443, 'token': 5949, 'token_str': ' hrali'}, {'sequence': 'Deti sa hrajú na ihrisku.', 'score': 0.14731724560260773, 'token': 9081, 'token_str': ' hrajú'}, {'sequence': 'Deti sa zahrali na ihrisku.', 'score': 0.05016357824206352, 'token': 32553, 'token_str': ' zahrali'}, {'sequence': 'Deti sa stretli na ihrisku.', 'score': 0.041727423667907715, 'token': 5964, 'token_str': ' stretli'}, {'sequence': 'Deti sa učia na ihrisku.', 'score': 0.01886524073779583, 'token': 18099, 'token_str': ' učia'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert') model = RobertaModel.from_pretrained('gerulata/slovakbert') text = "Text ktorý sa má embedovať." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert') model = TFRobertaModel.from_pretrained('gerulata/slovakbert') text = "Text ktorý sa má embedovať." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` Or extract information from the model like this: ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='gerulata/slovakbert') unmasker("Slovenské národne povstanie sa uskutočnilo v roku <mask>.") [{'sequence': 'Slovenske narodne povstanie sa uskutočnilo v roku 1944.', 'score': 0.7383289933204651, 'token': 16621, 'token_str': ' 1944'},...] ``` # Training data The SlovakBERT model was pretrained on these datasets: - Wikipedia (326MB of text), - OpenSubtitles (415MB of text), - Oscar (4.6GB of text), - Gerulata WebCrawl (12.7GB of text) , - Gerulata Monitoring (214 MB of text), - blbec.online (4.5GB of text) The text was then processed with the following steps: - URL and email addresses were replaced with special tokens ("url", "email"). - Elongated interpunction was reduced (e.g. -- to -). - Markdown syntax was deleted. - All text content in braces f.g was eliminated to reduce the amount of markup and programming language text. We segmented the resulting corpus into sentences and removed duplicates to get 181.6M unique sentences. In total, the final corpus has 19.35GB of text. # Pretraining The model was trained in **fairseq** on 4 x Nvidia A100 GPUs for 300K steps with a batch size of 512 and a sequence length of 512. 
The optimizer used is Adam with a learning rate of 5e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, dropout rate 0.1, learning rate warmup for 10k steps and linear decay of the learning rate after. We used 16-bit float precision. ## About us <a href="https://www.gerulata.com/"> <img width="300px" src="https://www.gerulata.com/images/gerulata-logo-blue.png"> </a> Gerulata uses near real-time monitoring, advanced analytics and machine learning to help create a safer, more productive and enjoyable online environment for everyone. ### BibTeX entry and citation info If you find our resource or paper is useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2109.15254 ``` @misc{pikuliak2021slovakbert, title={SlovakBERT: Slovak Masked Language Model}, author={Matúš Pikuliak and Štefan Grivalský and Martin Konôpka and Miroslav Blšták and Martin Tamajka and Viktor Bachratý and Marián Šimko and Pavol Balážik and Michal Trnka and Filip Uhlárik}, year={2021}, eprint={2109.15254}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
xlm-mlm-ende-1024
9b403ad70d01ef2a24624f1d733b7274f92cbcda
2022-07-22T08:08:01.000Z
[ "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "de", "arxiv:1901.07291", "arxiv:1910.09700", "transformers", "license:cc-by-nc-4.0", "autotrain_compatible" ]
fill-mask
false
null
null
xlm-mlm-ende-1024
3,125
null
transformers
1,093
--- language: - multilingual - en - de license: cc-by-nc-4.0 --- # xlm-mlm-ende-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8. [Citation](#citation) 9. [Model Card Authors](#model-card-authors) 10. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. ## Model Description - **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291) - **Model type:** Language model - **Language(s) (NLP):** English-German - **License:** CC-BY-NC-4.0 - **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024) - **Resources for more information:** - [Associated paper](https://arxiv.org/abs/1901.07291) - [GitHub Repo](https://github.com/facebookresearch/XLM) - [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) # Uses ## Direct Use The model is a language model. The model can be used for masked language modeling. ## Downstream Use To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. # Training The model developers write: > In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure. The model developers also write that: > If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. 
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details. # Evaluation ## Testing Data, Factors & Metrics The model developers evaluated the model on the [WMT'16 English-German](https://huggingface.co/datasets/wmt16) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics. ## Results For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications The model developers write: > We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details. # Citation **BibTeX:** ```bibtex @article{lample2019cross, title={Cross-lingual language model pretraining}, author={Lample, Guillaume and Conneau, Alexis}, journal={arXiv preprint arXiv:1901.07291}, year={2019} } ``` **APA:** - Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model More information needed. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
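As a concrete starting point, here is a minimal sketch following the language-embedding workflow from the multilingual docs linked above (the input sentence is arbitrary; the snippet only illustrates how the `langs` tensor is passed to the model):

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-ende-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-ende-1024")

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1

# Each token needs a language id; see tokenizer.lang2id for the available languages
language_id = tokenizer.lang2id["en"]
langs = torch.tensor([language_id] * input_ids.shape[1]).view(1, -1)

outputs = model(input_ids, langs=langs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```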
deepset/bert-base-uncased-squad2
932875db3f21b4365cbac7504be7252e4e1d96b8
2022-07-26T08:36:29.000Z
[ "pytorch", "bert", "question-answering", "en", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/bert-base-uncased-squad2
3,121
2
transformers
1,094
--- language: en datasets: - squad_v2 license: cc-by-4.0 model-index: - name: deepset/bert-base-uncased-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - name: Exact Match type: exact_match value: 75.6529 verified: true - name: F1 type: f1 value: 78.6191 verified: true --- # bert-base-uncased for QA ## Overview **Language model:** bert-base-uncased **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 32 n_epochs = 3 base_LM_model = "bert-base-uncased" max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Performance ``` "exact": 73.67977764676156 "f1": 77.87647139308865 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
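## Usage

A minimal sketch of extractive QA with the `transformers` pipeline (the question and context below are invented for illustration); since the model was trained on SQuAD 2.0, it can also indicate that a question is unanswerable:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/bert-base-uncased-squad2",
    tokenizer="deepset/bert-base-uncased-squad2",
)

result = qa(
    question="What was the model trained on?",
    context="This BERT base uncased model was fine-tuned on SQuAD 2.0 for extractive question answering.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```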
TsinghuaAI/CPM-Generate
0e4bcd995f9a9e70ba7c31f67df60e73a922676f
2021-07-29T19:03:51.000Z
[ "pytorch", "tf", "gpt2", "text-generation", "zh", "dataset:100GB Chinese corpus", "arxiv:2012.00413", "transformers", "cpm", "license:mit" ]
text-generation
false
TsinghuaAI
null
TsinghuaAI/CPM-Generate
3,115
7
transformers
1,095
--- language: - zh tags: - cpm license: mit datasets: - 100GB Chinese corpus --- # CPM-Generate ## Model description CPM (Chinese Pre-trained Language Model) is a Transformer-based autoregressive language model, with 2.6 billion parameters and 100GB Chinese training data. To the best of our knowledge, CPM is the largest Chinese pre-trained language model, which could facilitate downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. [[Project](https://cpm.baai.ac.cn)] [[Model](https://cpm.baai.ac.cn/download.html)] [[Paper](https://arxiv.org/abs/2012.00413)] ## Intended uses & limitations #### How to use ```python from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate") model = AutoModelWithLMHead.from_pretrained("TsinghuaAI/CPM-Generate") text_generator = TextGenerationPipeline(model, tokenizer) text_generator('清华大学', max_length=50, do_sample=True, top_p=0.9) ``` #### Limitations and bias The text generated by CPM is automatically generated by a neural network model trained on a large number of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by CPM is only used for technical and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it, but contact the authors and the authors will deal with it promptly. ## Training data We collect different kinds of texts in our pre-training, including encyclopedia, news, novels, and Q\&A. The details of our training data are shown as follows. | Data Source | Encyclopedia | Webpage | Story | News | Dialog | | ----------- | ------------ | ------- | ----- | ----- | ------ | | **Size** | ~40GB | ~39GB | ~10GB | ~10GB | ~1GB | ## Training procedure Based on the hyper-parameter searching on the learning rate and batch size, we set the learning rate as \\(1.5\times10^{-4}\\) and the batch size as \\(3,072\\), which makes the model training more stable. In the first version, we still adopt the dense attention and the max sequence length is \\(1,024\\). We will implement sparse attention in the future. We pre-train our model for \\(20,000\\) steps, and the first \\(5,000\\) steps are for warm-up. The optimizer is Adam. It takes two weeks to train our largest model using \\(64\\) NVIDIA V100. ## Eval results | | n_param | n_layers | d_model | n_heads | d_head | |------------|-------------------:|--------------------:|-------------------:|-------------------:|------------------:| | CPM-Small | 109M | 12 | 768 | 12 | 64 | | CPM-Medium | 334M | 24 | 1,024 | 16 | 64 | | CPM-Large | 2.6B | 32 | 2,560 | 32 | 80 | We evaluate CPM with different numbers of parameters (the details are shown above) on various Chinese NLP tasks in the few-shot (even zero-shot) settings. With the increase of parameters, CPM performs better on most datasets, indicating that larger models are more proficient at language generation and language understanding. We provide results of text classification, chinese idiom cloze test, and short text conversation generation as follows. Please refer to our [paper](https://arxiv.org/abs/2012.00413) for more detailed results. 
### Zero-shot performance on text classification tasks | | TNEWS | IFLYTEK | OCNLI | | ---------- | :------------: | :------------: | :------------: | | CPM-Small | 0.626 | 0.584 | 0.378 | | CPM-Medium | 0.618 | 0.635 | 0.379 | | CPM-Large | **0.703** | **0.708** | **0.442** | ### Performance on Chinese Idiom Cloze (ChID) dataset | | Supervised | Unsupervised | |------------|:--------------:|:--------------:| | CPM-Small | 0.657 | 0.433 | | CPM-Medium | 0.695 | 0.524 | | CPM-Large | **0.804** | **0.685** | ### Performance on Short Text Conversation Generation (STC) dataset | | Average | Extrema | Greedy | Dist-1 | Dist-2 | |----------------------------------|:--------------:|:--------------:|:--------------:|:-------------------------------:|:--------------------------------:| | *Few-shot (Unsupervised)* | | | | | | | CDial-GPT | 0.899 | 0.797 | 0.810 | 1,963 / **0.011** | 20,814 / 0.126 | | CPM-Large | **0.928** | **0.805** | **0.815** | **3,229** / 0.007 | **68,008** / **0.154** | | *Supervised* | | | | | | | CDial-GPT | 0.933 | **0.814** | **0.826** | 2,468 / 0.008 | 35,634 / 0.127 | | CPM-Large | **0.934** | 0.810 | 0.819 | **3,352** / **0.011** | **67,310** / **0.233** | ### BibTeX entry and citation info ```bibtex @article{cpm-v1, title={CPM: A Large-scale Generative Chinese Pre-trained Language Model}, author={Zhang, Zhengyan and Han, Xu, and Zhou, Hao, and Ke, Pei, and Gu, Yuxian and Ye, Deming and Qin, Yujia and Su, Yusheng and Ji, Haozhe and Guan, Jian and Qi, Fanchao and Wang, Xiaozhi and Zheng, Yanan and Zeng, Guoyang and Cao, Huanqi and Chen, Shengqi and Li, Daixuan and Sun, Zhenbo and Liu, Zhiyuan and Huang, Minlie and Han, Wentao and Tang, Jie and Li, Juanzi and Sun, Maosong}, year={2020} } ```
PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
617bf244e3106b6d50abfc600d62b858d798867d
2022-04-08T14:10:05.000Z
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2109.03570", "arxiv:2109.07765", "transformers", "biomedical", "clinical", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
PlanTL-GOB-ES
null
PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
3,113
5
transformers
1,096
--- language: - es tags: - biomedical - clinical - spanish license: apache-2.0 metrics: - ppl widget: - text: "El único antecedente personal a reseñar era la <mask> arterial." - text: "Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales." - text: "En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos de interés." --- # Biomedical-clinical language model for Spanish Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-SANIDAD/lm-biomedical-clinical-es) and read our [preprint](https://arxiv.org/abs/2109.03570) "_Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario._". ## Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. ## Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Then, the biomedical corpora are concatenated and further global deduplication among the biomedical corpora have been applied. Eventually, the clinical corpus is concatenated to the cleaned biomedical corpus resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Medical crawler](https://zenodo.org/record/4561970) | 745,705,946 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. 
| 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. | | Clinical notes/documents | 91,250,080 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. | | [Scielo](https://github.com/PlanTL-SANIDAD/SciELO-Spain-Crawler) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources are aggregated from the MedlinePlus source. | | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation and results The model has been evaluated on the Named Entity Recognition (NER) using the following datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). - [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. 
The evaluation results are compared against the [mBERT](https://huggingface.co/bert-base-multilingual-cased) and [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) models: | F1 - Precision - Recall | roberta-base-biomedical-clinical-es | mBERT | BETO | |---------------------------|----------------------------|-------------------------------|-------------------------| | PharmaCoNER | **90.04** - **88.92** - **91.18** | 87.46 - 86.50 - 88.46 | 88.18 - 87.12 - 89.28 | | CANTEMIST | **83.34** - **81.48** - **85.30** | 82.61 - 81.12 - 84.15 | 82.42 - 80.91 - 84.00 | | ICTUSnet | **88.08** - **84.92** - **91.50** | 86.75 - 83.53 - 90.23 | 85.95 - 83.10 - 89.02 | ## Intended uses & limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section) However, the is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## Cite If you use our models, please cite our latest preprint: ```bibtex @misc{carrino2021biomedical, title={Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Asier Gutiérrez-Fandiño and Joan Llop-Palao and Marc Pàmies and Aitor Gonzalez-Agirre and Marta Villegas}, year={2021}, eprint={2109.03570}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` If you use our Medical Crawler corpus, please cite the preprint: ```bibtex @misc{carrino2021spanish, title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models}, author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas}, year={2021}, eprint={2109.07765}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- --- ## How to use ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-biomedical-es") from transformers import pipeline unmasker = pipeline('fill-mask', model="BSC-TeMU/roberta-base-biomedical-es") unmasker("El único antecedente personal a reseñar era la <mask> arterial.") ``` ``` # Output [ { "sequence": " El único antecedente personal a reseñar era la hipertensión arterial.", "score": 0.9855039715766907, "token": 3529, "token_str": " hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la diabetes arterial.", "score": 0.0039140828885138035, "token": 1945, "token_str": " diabetes" }, { "sequence": " El único antecedente personal a reseñar era la hipotensión arterial.", "score": 0.002484665485098958, "token": 11483, "token_str": " hipotensión" }, { "sequence": " El único antecedente personal a reseñar era la Hipertensión arterial.", "score": 0.0023484621196985245, "token": 12238, "token_str": " Hipertensión" }, { "sequence": " El único antecedente personal a reseñar era la presión arterial.", "score": 0.0008009297889657319, "token": 2267, "token_str": " presión" } ] ```
microsoft/deberta-xlarge
b1f7182c4065333dc7cf4247570892cf1d8b7029
2022-01-13T18:33:03.000Z
[ "pytorch", "tf", "deberta", "en", "arxiv:2006.03654", "transformers", "deberta-v1", "license:mit" ]
null
false
microsoft
null
microsoft/deberta-xlarge
3,106
1
transformers
1,097
--- language: en tags: deberta-v1 thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. With those two improvements, DeBERTa out perform RoBERTa on a majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This the DeBERTa XLarge model with 48 layers, 1024 hidden size. Total parameters 750M. ### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. 
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \ --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \ --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
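For plain feature extraction outside of the fine-tuning scripts above, the checkpoint can be loaded with the standard auto classes; this is only a minimal sketch with an arbitrary input sentence:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-xlarge")
model = AutoModel.from_pretrained("microsoft/deberta-xlarge")

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```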
facebook/flava-full
57949b6a84a80fd01c3dd62a09450d8670f1c418
2022-05-25T07:53:39.000Z
[ "pytorch", "flava", "pretraining", "arxiv:2112.04482", "arxiv:2108.10904", "transformers", "license:bsd-3-clause" ]
null
false
facebook
null
facebook/flava-full
3,105
5
transformers
1,098
--- license: bsd-3-clause --- ## Model Card: FLAVA ## Model Details FLAVA model was developed by the researchers at FAIR to understand if a single model can work across different modalities with a unified architecture. The model was pretrained solely using publicly available multimodal datasets containing 70M image-text pairs in total and thus fully reproducible. Unimodal datasets ImageNet and BookCorpus + CCNews were also used to provide unimodal data to the model. The model (i) similar to CLIP can be used for arbitrary image classification tasks in a zero-shot manner (ii) used for image or text retrieval in a zero-shot manner (iii) can also be fine-tuned for natural language understanding (NLU) tasks such as GLUE and vision-and-language reasoning tasks such as VQA v2. The model is able to use the data available as images, text corpus and image-text pairs. In the original paper, the authors evaluate FLAVA on 32 tasks from computer vision, NLU and vision-and-language domains and show impressive performance across the board scoring higher micro-average than CLIP while being open. ## Model Date Model was originally released in November 2021. ## Model Type The FLAVA model uses a ViT-B/32 transformer for both image encoder and text encoder. FLAVA also employs a multimodal encoder on top for multimodal tasks such as vision-and-language tasks (VQA) which is a 6-layer encoder. Each component of FLAVA model can be loaded individually from `facebook/flava-full` checkpoint. If you need complete heads used for pretraining, please use `FlavaForPreTraining` model class otherwise `FlavaModel` should suffice for most use case. This [repository](https://github.com/facebookresearch/multimodal/tree/main/examples/flava) also contains code to pretrain the FLAVA model from scratch. ## Documents - [FLAVA Paper](https://arxiv.org/abs/2112.04482) ## Using with Transformers ### FlavaModel FLAVA model supports vision, language and multimodal inputs. You can pass inputs corresponding to the domain you are concerned with to get losses and outputs related to that domain. 
```py from PIL import Image import requests from transformers import FlavaProcessor, FlavaModel model = FlavaModel.from_pretrained("facebook/flava-full") processor = FlavaProcessor.from_pretrained("facebook/flava-full") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=[image, image], return_tensors="pt", padding="max_length", max_length=77 ) outputs = model(**inputs) image_embeddings = outputs.image_embeddings # Batch size X (Number of image patches + 1) x Hidden size => 2 X 197 X 768 text_embeddings = outputs.text_embeddings # Batch size X (Text sequence length + 1) X Hidden size => 2 X 77 X 768 multimodal_embeddings = outputs.multimodal_embeddings # Batch size X (Number of image patches + Text Sequence Length + 3) X Hidden size => 2 X 275 x 768 # Multimodal embeddings can be used for multimodal tasks such as VQA ## Pass only image from transformers import FlavaFeatureExtractor feature_extractor = FlavaFeatureExtractor.from_pretrained("facebook/flava-full") inputs = feature_extractor(images=[image, image], return_tensors="pt") outputs = model(**inputs) image_embeddings = outputs.image_embeddings ## Pass only image from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("facebook/flava-full") inputs = tokenizer(["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding="max_length", max_length=77) outputs = model(**inputs) text_embeddings = outputs.text_embeddings ``` #### Encode Image ```py from PIL import Image import requests from transformers import FlavaFeatureExtractor, FlavaModel model = FlavaModel.from_pretrained("facebook/flava-full") feature_extractor = FlavaFeatureExtractor.from_pretrained("facebook/flava-full") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=[image], return_tensors="pt") image_embedding = model.get_image_features(**inputs) ``` #### Encode Text ```py from PIL import Image from transformers import BertTokenizer, FlavaModel model = FlavaModel.from_pretrained("facebook/flava-full") tokenizer = BertTokenizer.from_pretrained("facebook/flava-full") inputs = tokenizer(text=["a photo of a dog"], return_tensors="pt", padding="max_length", max_length=77) text_embedding = model.get_text_features(**inputs) ``` ### FlavaForPreTraining FLAVA model supports vision, language and multimodal inputs. You can pass corresponding inputs to modality to get losses and outputs related to that domain. ```py from PIL import Image import requests from transformers import FlavaProcessor, FlavaForPreTraining model = FlavaForPreTraining.from_pretrained("facebook/flava-full") processor = FlavaProcessor.from_pretrained("facebook/flava-full") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=[image, image], return_tensors="pt", padding="max_length", max_length=77, return_codebook_pixels=True, return_image_mask=True, # Other things such as mlm_labels, itm_labels can be passed here. 
See docs ) inputs.bool_masked_pos.zero_() outputs = model(**inputs) image_embeddings = outputs.image_embeddings # Batch size X (Number of image patches + 1) x Hidden size => 2 X 197 X 768 text_embeddings = outputs.text_embeddings # Batch size X (Text sequence length + 1) X Hidden size => 2 X 77 X 768 # Multimodal embeddings can be used for multimodal tasks such as VQA multimodal_embeddings = outputs.multimodal_embeddings # Batch size X (Number of image patches + Text Sequence Length + 3) X Hidden size => 2 X 275 x 768 # Loss loss = output.loss # probably NaN due to missing labels # Global contrastive loss logits image_contrastive_logits = outputs.contrastive_logits_per_image text_contrastive_logits = outputs.contrastive_logits_per_text # ITM logits itm_logits = outputs.itm_logits ``` ### FlavaImageModel ```py from PIL import Image import requests from transformers import FlavaFeatureExtractor, FlavaImageModel model = FlavaImageModel.from_pretrained("facebook/flava-full") feature_extractor = FlavaFeatureExtractor.from_pretrained("facebook/flava-full") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=[image], return_tensors="pt") outputs = model(**inputs) image_embeddings = outputs.last_hidden_state ``` ### FlavaTextModel ```py from PIL import Image from transformers import BertTokenizer, FlavaTextModel model = FlavaTextModel.from_pretrained("facebook/flava-full") tokenizer = BertTokenizer.from_pretrained("facebook/flava-full") inputs = tokenizer(text=["a photo of a dog"], return_tensors="pt", padding="max_length", max_length=77) outputs = model(**inputs) text_embeddings = outputs.last_hidden_state ``` ## Model Use ## Intended Use The model is intended to serve as a reproducible research artifact for research communities in the light of models whose exact reproduction details are never released such as [CLIP](https://github.com/openai/CLIP) and [SimVLM](https://arxiv.org/abs/2108.10904). FLAVA model performs equivalently to these models on most tasks while being trained on less (70M pairs compared to CLIP's 400M and SimVLM's 1.8B pairs respectively) but public data. We hope that this model enable communities to better understand, and explore zero-shot and arbitrary image classification, multi-domain pretraining, modality-agnostic generic architectures while also providing a chance to develop on top of it. ## Primary Intended Uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of foundation models which work across domains which in this case are vision, language and combined multimodal vision-and-language domain. ## Out-of-Scope Use Cases Similar to CLIP, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. Though FLAVA is trained on open and public data which doesn't contain a lot of harmful data, users should still employ proper safety measures. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. 
This is because the use of artificial intelligence for such tasks is currently premature given the lack of testing norms and checks to ensure its fair use.

Since the model has not been purposefully trained on or evaluated in any languages other than English, its use should be limited to English-language use cases.

## Data

FLAVA was pretrained on 70M publicly available image and text pairs. These include datasets such as COCO, Visual Genome, Localized Narratives, RedCaps, a custom filtered subset of YFCC100M, SBUCaptions, Conceptual Captions, and the Wikipedia Image-Text dataset. A large portion of this data comes from the internet and thus can be biased towards people most connected to the internet, such as those from developed countries and younger, male users.

## Data Mission Statement

Our goal with building this dataset, called PMD (Public Multimodal Datasets), was two-fold: (i) to allow reproducibility of vision-language foundation models with publicly available data and (ii) to test the robustness and generalizability of FLAVA across domains. The data was collected from existing public dataset sources that had already been filtered by the original dataset curators to exclude adult and excessively violent content. We will make the URLs of the images public for further research reproducibility.

## Performance and Limitations

## Performance

FLAVA has been evaluated on 35 different tasks from computer vision, natural language understanding, and vision-and-language reasoning. On COCO and Flickr30k retrieval we report zero-shot accuracy, on image tasks we report linear-eval accuracy, and on the rest of the tasks we report fine-tuned accuracies.

Generally, FLAVA works much better than CLIP on tasks that require good text understanding. The paper describes this in more detail, but the 35 datasets are the following:

### Natural Language Understanding
- MNLI
- CoLA
- MRPC
- QQP
- SST-2
- QNLI
- RTE
- STS-B

### Image Understanding
- ImageNet
- Food101
- CIFAR10
- CIFAR100
- Cars
- Aircraft
- DTD
- Pets
- Caltech101
- Flowers102
- MNIST
- STL10
- EuroSAT
- GTSRB
- KITTI
- PCAM
- UCF101
- CLEVR
- FER 2013
- SUN397
- Image SST
- Country 211

### Vision and Language Reasoning
- VQA v2
- SNLI-VE
- Hateful Memes
- Flickr30K Retrieval
- COCO Retrieval

## Limitations

Currently, FLAVA has many limitations. Its image classification accuracy is not on par with CLIP on some tasks, and its text accuracy is not on par with BERT on some tasks, suggesting possible room for improvement. FLAVA also does not work well on tasks involving scene text, given the lack of scene text in most public datasets. Additionally, similar to CLIP, our approach to testing FLAVA has an important limitation on image tasks: we use linear probes to evaluate FLAVA, and there is evidence suggesting that linear probes can underestimate model performance.

## Feedback/Questions

Please email Amanpreet at `amanpreet [at] nyu [dot] edu` for questions.
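## Zero-Shot Image-Text Matching (Illustrative Sketch)

To make the zero-shot retrieval setting mentioned under Performance more concrete, here is a minimal sketch, not part of the original examples, that reuses the `FlavaForPreTraining` snippet above and converts the global contrastive logits into CLIP-style image-to-text matching probabilities. The image URL and captions are the same illustrative ones used above; the softmax scoring step is a common zero-shot heuristic and not an official evaluation recipe.

```py
import torch
from PIL import Image
import requests

from transformers import FlavaProcessor, FlavaForPreTraining

model = FlavaForPreTraining.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate captions to score against the image (illustrative only)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(
    text=texts,
    images=[image] * len(texts),
    return_tensors="pt",
    padding="max_length",
    max_length=77,
    return_codebook_pixels=True,
    return_image_mask=True,
)
inputs.bool_masked_pos.zero_()  # disable image masking, as in the pretraining example above

with torch.no_grad():
    outputs = model(**inputs)

# contrastive_logits_per_image: one row per image, one column per caption
probs = outputs.contrastive_logits_per_image.softmax(dim=-1)
for text, p in zip(texts, probs[0].tolist()):
    print(f"{text}: {p:.3f}")
```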
studio-ousia/luke-large
0729d044dfe301d9ecabc222d60633f92ac450eb
2022-04-13T09:06:10.000Z
[ "pytorch", "luke", "fill-mask", "en", "arxiv:1906.08237", "arxiv:1903.07785", "arxiv:2002.01808", "transformers", "named entity recognition", "entity typing", "relation classification", "question answering", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
studio-ousia
null
studio-ousia/luke-large
3,101
1
transformers
1,099
---
language: en
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---

## LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

**LUKE** (**L**anguage **U**nderstanding with **K**nowledge-based **E**mbeddings) is a new pre-trained contextualized representation of words and entities based on the transformer architecture. LUKE treats words and entities in a given text as independent tokens and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer and considers the types of tokens (words or entities) when computing attention scores.

LUKE achieves state-of-the-art results on five popular NLP benchmarks including **[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive question answering), **[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity recognition), **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)** (cloze-style question answering), **[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation classification), and **[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)** (entity typing).

Please check the [official repository](https://github.com/studio-ousia/luke) for more details and updates.

This is the large version of LUKE, with 24 hidden layers and a hidden size of 1024. The total number of parameters in this model is 483M. It was trained using the December 2018 version of Wikipedia.

### Experimental results

The experimental results are provided as follows:

| Task | Dataset | Metric | LUKE-large | LUKE-base | Previous SOTA |
| ------------------------------ | ----------------------------------------------------------------------------- | ------ | ----------------- | --------- | ------------------------------------------------------------------------- |
| Extractive Question Answering | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) | EM/F1 | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237)) |
| Named Entity Recognition | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) | F1 | **94.3** | 93.3 | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785)) |
| Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/) | EM/F1 | **90.6**/**91.2** | - | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) |
| Relation Classification | [TACRED](https://nlp.stanford.edu/projects/tacred/) | F1 | **72.7** | - | 72.0 ([Wang et al., 2020](https://arxiv.org/abs/2002.01808)) |
| Fine-grained Entity Typing | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1 | **78.2** | - | 77.6 ([Wang et al., 2020](https://arxiv.org/abs/2002.01808)) |

### Citation

If you find LUKE useful for your work, please cite the following paper:

```latex
@inproceedings{yamada2020luke,
  title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
  author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
  booktitle={EMNLP},
  year={2020}
}
```
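### Example usage (sketch)

As a quick way to inspect the model's outputs, here is a minimal sketch using the Hugging Face `transformers` LUKE classes. It assumes this checkpoint loads through `LukeTokenizer` and `LukeModel`; the example sentence and entity spans are illustrative only, and the task-specific heads (NER, entity typing, relation classification, QA) are documented in the official repository.

```python
from transformers import LukeTokenizer, LukeModel

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large")
model = LukeModel.from_pretrained("studio-ousia/luke-large")

text = "Beyoncé lives in Los Angeles."
# Character-based spans of the entities "Beyoncé" and "Los Angeles" (illustrative)
entity_spans = [(0, 7), (17, 28)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

# Contextualized representations of the word tokens and of the entity tokens
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
```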