Dataset schema:

| Column | Type | Range / Values |
|---------------------|---------|----------------|
| modelId | string | 4–112 chars |
| sha | string | 40 chars |
| lastModified | string | 24 chars |
| tags | sequence | — |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | 2–38 chars |
| config | null | — |
| id | string | 4–112 chars |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| readme | string | 0–186k chars |
Helsinki-NLP/opus-mt-kl-en
1a55c53e0315586661456929a8e102bfcdb90a63
2021-09-10T13:53:56.000Z
[ "pytorch", "marian", "text2text-generation", "kl", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-kl-en
39
null
transformers
6,500
---
tags:
- translation
license: apache-2.0
---

### opus-mt-kl-en

* source languages: kl
* target languages: en
* OPUS readme: [kl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kl-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kl-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kl-en/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kl.en | 26.4 | 0.432 |
| Tatoeba.kl.en | 35.5 | 0.443 |
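The card ships no usage snippet; here is a minimal sketch for loading this Marian checkpoint with 🤗 Transformers (the Kalaallisut input string is an arbitrary placeholder, not taken from the card):

```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal usage sketch (not part of the original card).
model_name = "Helsinki-NLP/opus-mt-kl-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Any Kalaallisut input works here; this string is a placeholder.
batch = tokenizer(["Qujanaq"], return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```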
Mathking/bert-base-german-cased-gnad10
c3993046d7580335ca1c52b2e543fb449b4be00b
2021-11-07T09:07:25.000Z
[ "pytorch", "bert", "text-classification", "de", "dataset:gnad10", "transformers", "german-news-classification" ]
text-classification
false
Mathking
null
Mathking/bert-base-german-cased-gnad10
39
null
transformers
6,501
---
language:
- de
datasets:
- gnad10
tags:
- text-classification
- german-news-classification
metrics:
- accuracy
- precision
- recall
- f1
---

# German BERT for News Classification

This is a bert-base-german-cased model fine-tuned for text classification on German news articles.

## Training data

The model was trained on the training set of the 10kGNAD dataset (`gnad10` on Hugging Face Datasets).
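The card stops short of a usage example; a minimal sketch follows (not part of the original card — the German headline is an arbitrary placeholder, and the output labels are whatever 10kGNAD topic mapping was saved with the checkpoint):

```python
from transformers import pipeline

# Minimal usage sketch (not part of the original card).
classifier = pipeline("text-classification", model="Mathking/bert-base-german-cased-gnad10")

# Placeholder headline; labels follow the 10kGNAD topic categories.
print(classifier("Der DAX schließt nach einem volatilen Handelstag im Plus."))
```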
SEBIS/code_trans_t5_base_api_generation_transfer_learning_finetune
26a80447a84ffc28be0bb7d562a2c6911adee9a7
2021-06-23T04:03:25.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers", "summarization" ]
summarization
false
SEBIS
null
SEBIS/code_trans_t5_base_api_generation_transfer_learning_finetune
39
null
transformers
6,502
---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---

# CodeTrans model for API recommendation generation

Pretrained model for API recommendation generation using the T5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain, and was then fine-tuned on the API recommendation generation task for Java APIs.

## Intended uses & limitations

The model can be used to generate API usage suggestions for Java programming tasks.

### How to use

Here is how to use this model to generate Java API recommendations using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_api_generation_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_api_generation_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```

Run this example in a [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/api%20generation/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. Pre-training used the AdaFactor optimizer with an inverse square root learning rate schedule.

### Fine-tuning

The model was then fine-tuned on a single TPU Pod V3-8 for 1,400,000 steps in total, using a sequence length of 512 (batch size 256), on the dataset containing only API recommendation generation data.

## Evaluation results

For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Java |
| --------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
WangZeJun/roformer-sim-base-chinese
5da827822935d53285c1103f7e72cef2dae84749
2022-06-14T09:17:25.000Z
[ "pytorch", "transformers" ]
null
false
WangZeJun
null
WangZeJun/roformer-sim-base-chinese
39
1
transformers
6,503
https://github.com/zejunwang1/bert4vec
addy88/gptj8
f07b3aaf67ce4053852bf45a307bd58dc2f4b39f
2022-01-02T06:33:57.000Z
[ "pytorch", "gptj", "text-generation", "arxiv:2106.09685", "arxiv:2110.02861", "transformers" ]
text-generation
false
addy88
null
addy88/gptj8
39
1
transformers
6,504
This model is an 8-bit version of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main), converted with the bitsandbytes library so that it can be fine-tuned on a single GPU. Here's how to run it: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1KNf5siQdM7ILQM-pHsP6gNVPKl1SJdU1)

The original GPT-J takes 22+ GB of memory for float32 parameters alone, and that's before you account for gradients and optimizer state. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of an A6000 or A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is far more expensive. Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB of memory:

- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of 30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)

In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).

![img](https://i.imgur.com/n4XXo1x.png)

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://colab.research.google.com/drive/1FxGeYQyE7cx9VNCBC4gUyRVZGORW7c6g) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant.

Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.

__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which add 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.

### How should I fine-tune the model?

We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger the batch size you can fit, the more efficiently you will train.

### Can I use this technique with other models?

The model was converted using [this notebook](https://colab.research.google.com/drive/1rwxh0XRdVi8VEbTx97l9xXr4JbRhZaq5#scrollTo=CX3VHn-J1Zer). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
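A toy sketch of the storage-only 8-bit idea the card describes (illustrative only; the card's actual implementation uses bitsandbytes' block-wise nonlinear quantization, not the symmetric linear scheme below): weights live in int8 and are de-quantized just-in-time, so the matmul itself still runs in floating point.

```python
import torch

# Illustrative sketch of storage-only 8-bit quantization (NOT the card's
# actual implementation): store int8, de-quantize just-in-time for matmul.
def quantize(w: torch.Tensor):
    scale = w.abs().max() / 127.0                        # symmetric linear scale
    q = (w / scale).round().clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale                             # back to float for compute

w = torch.randn(4096, 4096)        # a frozen weight matrix
q, scale = quantize(w)             # stored 4x smaller in int8
x = torch.randn(1, 4096)
y = x @ dequantize(q, scale).t()   # compute happens in float (fp16 on GPU)
```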
aseifert/distilbert-casing
1c532d673e156dba74da474add70775e44d7989a
2020-10-29T09:44:22.000Z
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
aseifert
null
aseifert/distilbert-casing
39
null
transformers
6,505
Entry not found
benjaminbeilharz/bart-base-empatheticdialogues
81a073af4a4c402a5340ff6410fcd445001f8eec
2022-01-24T11:29:02.000Z
[ "pytorch", "tensorboard", "bart", "text-generation", "transformers" ]
text-generation
false
benjaminbeilharz
null
benjaminbeilharz/bart-base-empatheticdialogues
39
null
transformers
6,506
Entry not found
blizrys/biobert-v1.1-finetuned-pubmedqa
d36bc441473d389521c400a4814722b4a084673b
2021-09-13T17:56:32.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
blizrys
null
blizrys/biobert-v1.1-finetuned-pubmedqa
39
null
transformers
6,507
---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-v1.1-finetuned-pubmedqa
  results:
  - task:
      name: Text Classification
      type: text-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# biobert-v1.1-finetuned-pubmedqa

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7737
- Accuracy: 0.7

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 57   | 0.8810          | 0.56     |
| No log        | 2.0   | 114  | 0.8139          | 0.62     |
| No log        | 3.0   | 171  | 0.7963          | 0.68     |
| No log        | 4.0   | 228  | 0.7709          | 0.66     |
| No log        | 5.0   | 285  | 0.7931          | 0.64     |
| No log        | 6.0   | 342  | 0.7420          | 0.7      |
| No log        | 7.0   | 399  | 0.7654          | 0.7      |
| No log        | 8.0   | 456  | 0.7756          | 0.68     |
| 0.5849        | 9.0   | 513  | 0.7605          | 0.68     |
| 0.5849        | 10.0  | 570  | 0.7737          | 0.7      |

### Framework versions

- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
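The auto-generated card leaves usage unspecified; a minimal sketch for trying the classifier follows (not from the original card — the PubMedQA-style input is a placeholder, and the label names depend on the `id2label` mapping saved with the checkpoint):

```python
from transformers import pipeline

# Minimal usage sketch (not part of the original card); label names depend
# on the id2label mapping stored in the checkpoint's config.
clf = pipeline("text-classification", model="blizrys/biobert-v1.1-finetuned-pubmedqa")
print(clf("Does aspirin reduce the risk of cardiovascular events?"))
```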
castorini/ance-dpr-context-multi
adb8465629826220f773030de9e06e463e486f1e
2021-09-22T09:41:18.000Z
[ "pytorch", "dpr", "arxiv:2007.00808", "transformers" ]
null
false
castorini
null
castorini/ance-dpr-context-multi
39
null
transformers
6,508
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:

> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)

For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md).
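The card defers usage to the Pyserini docs. As a hedged sketch only: since the checkpoint is tagged as a DPR-style context encoder, loading it with the Transformers DPR classes should look roughly like this (an untested assumption about the checkpoint layout — the Pyserini experiments linked above are the supported path):

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

# Hedged sketch: assumes the checkpoint loads with the DPR context-encoder
# classes, as suggested by its "dpr" tag; see the Pyserini docs for the
# officially supported usage.
name = "castorini/ance-dpr-context-multi"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(name)
encoder = DPRContextEncoder.from_pretrained(name)

inputs = tokenizer("Dense retrieval encodes passages into vectors.", return_tensors="pt")
embedding = encoder(**inputs).pooler_output  # shape: (1, hidden_size)
```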
flax-community/roberta-base-thai
3d400e88b72d7765267bd63a59eec6f34cca4f13
2021-07-17T09:43:54.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
flax-community
null
flax-community/roberta-base-thai
39
null
transformers
6,509
Entry not found
flax-sentence-embeddings/all_datasets_v3_distilroberta-base
a02c9dc41679249af6ad7df1228c49026ec490be
2021-07-23T15:43:19.000Z
[ "pytorch", "roberta", "fill-mask", "en", "arxiv:2104.08727", "arxiv:1810.09305", "arxiv:2102.07033", "arxiv:1904.06472", "sentence-transformers", "feature-extraction", "sentence-similarity" ]
sentence-similarity
false
flax-sentence-embeddings
null
flax-sentence-embeddings/all_datasets_v3_distilroberta-base
39
2
sentence-transformers
6,510
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---

# Model description

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`distilroberta-base`](https://huggingface.co/distilroberta-base) model and fine-tuned it on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.

## How to use

Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_distilroberta-base')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#       dtype=float32)
```

# Training procedure

## Pre-training

We use the pretrained [`distilroberta-base`](https://huggingface.co/distilroberta-base). Please refer to the model card for more detailed information about the pre-training procedure.

## Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair from the batch, then apply the cross entropy loss by comparing with the true pairs (a sketch of this objective follows the data table below).

### Hyper parameters

We trained our model on a TPU v3-8, for 540k steps with a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.

### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| COCO 2020 | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| SearchQA | - | 582,261 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [Reddit conversational](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| total | | 1,097,953,922 |
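A minimal sketch of the in-batch contrastive objective described in the fine-tuning section above (illustrative only — the similarity scale is an assumption, and the repository's training script is the authoritative version):

```python
import torch
import torch.nn.functional as F

# Sketch of the in-batch contrastive objective: embeddings a[i] and b[i]
# form a true pair; every other b[j] in the batch acts as a negative.
# The scale factor is an assumption, not taken from the card.
def contrastive_loss(a: torch.Tensor, b: torch.Tensor, scale: float = 20.0):
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = scale * (a @ b.t())       # pairwise cosine similarities
    labels = torch.arange(a.size(0))   # true pairs lie on the diagonal
    return F.cross_entropy(logits, labels)
```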
huggingtweets/kaikothesharko
76d8c4b143e655009d8bb5925611444fa46d39be
2021-12-02T04:58:11.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/kaikothesharko
39
null
transformers
6,511
---
language: en
thumbnail: http://www.huggingtweets.com/kaikothesharko/1638421086822/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1463379249578987527/OUX9AGXt_400x400.jpg')"></div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')"></div>
    <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')"></div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Kaiko TF (RAFFLE IN PINNED)</div>
  <div style="text-align: center; font-size: 14px;">@kaikothesharko</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Kaiko TF (RAFFLE IN PINNED).

| Data | Kaiko TF (RAFFLE IN PINNED) |
| --- | --- |
| Tweets downloaded | 2169 |
| Retweets | 259 |
| Short tweets | 529 |
| Tweets kept | 1381 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18zt3o3w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kaikothesharko's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ajrcjpz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ajrcjpz/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/kaikothesharko')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ismaelfaro/gpt2-poems.es
4817ebe99130415355897998baa848fb1fd2f4ea
2021-10-12T14:23:53.000Z
[ "pytorch", "gpt2", "text-generation", "es", "transformers", "GPT", "license:mit" ]
text-generation
false
ismaelfaro
null
ismaelfaro/gpt2-poems.es
39
1
transformers
6,512
---
language: es
tags:
- GPT
license: mit
---

# GPT2-Poems Spanish

This model is part of the Poems+AI experiment. More info: https://poems-ai.github.io/art/

# Original Dataset

- https://www.kaggle.com/andreamorgar/spanish-poetry-dataset
- Marcos de la Fuente's poems
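The card ships no usage snippet; a minimal generation sketch follows (not part of the original card — the Spanish prompt is an arbitrary placeholder):

```python
from transformers import pipeline

# Minimal usage sketch (not part of the original card).
generator = pipeline("text-generation", model="ismaelfaro/gpt2-poems.es")
print(generator("La luna sobre el mar", max_length=60)[0]["generated_text"])
```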
it5/it5-small-wiki-summarization
470e69837f740891d8f35e12f209acbb0caadbba
2022-03-09T07:50:42.000Z
[ "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "it", "dataset:wits", "arxiv:2203.03759", "transformers", "italian", "sequence-to-sequence", "wikipedia", "summarization", "wits", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible" ]
summarization
false
it5
null
it5/it5-small-wiki-summarization
39
null
transformers
6,513
--- language: - it license: apache-2.0 datasets: - wits tags: - italian - sequence-to-sequence - wikipedia - summarization - wits widget: - text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati." - text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). 
Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. " - text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. " - text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. 
Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. " metrics: - rouge - bertscore model-index: - name: it5-small-wiki-summarization results: - task: type: wiki-summarization name: "Wikipedia Summarization" dataset: type: wits name: "WITS" metrics: - type: rouge1 value: 0.337 name: "Test Rouge1" - type: rouge2 value: 0.191 name: "Test Rouge2" - type: rougeL value: 0.306 name: "Test RougeL" - type: bertscore value: 0.504 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "8g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Small for Wikipedia Summarization ✂️📑 🇮🇹 This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. 
They can be used directly with pipelines as:

```python
from transformers import pipeline

wikisum = pipeline("summarization", model='it5/it5-small-wiki-summarization')
wikisum("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. ")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-wiki-summarization")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
izumi-lab/electra-small-japanese-discriminator
d453e1f3b45100dc246c7645b18803f2e8824126
2022-03-19T09:38:49.000Z
[ "pytorch", "electra", "pretraining", "ja", "dataset:wikipedia", "arxiv:2003.10555", "transformers", "license:cc-by-sa-4.0" ]
null
false
izumi-lab
null
izumi-lab/electra-small-japanese-discriminator
39
null
transformers
6,514
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---

# ELECTRA small Japanese discriminator

This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language. The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).

## Model architecture

The model architecture is the same as ELECTRA small in the [original ELECTRA implementation](https://github.com/google-research/electra): 12 layers, 256 dimensions of hidden states, and 4 attention heads.

## Training Data

The models are trained on the Japanese version of Wikipedia. The training corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021. The corpus file is 2.9 GB, consisting of approximately 20M sentences.

## Tokenization

The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768.

## Training

The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except for size: 128 tokens per instance, 128 instances per batch, and 1M training steps. The size of the generator is the same as that of the discriminator.

## Citation

**There will be another paper for this pretrained model. Be sure to check here again when you cite.**

```
@inproceedings{suzuki2021fin-bert-electra,
  title={金融文書を用いた事前学習言語モデルの構築と検証},
  % title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
  author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
  % author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
  booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
  % booktitle={Proceedings of JSAI Special Interest Group on Financial Informatics (SIG-FIN) 27},
  pages={5-10},
  year={2021}
}
```

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).

## Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP21K12010.
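The card describes a discriminator but gives no usage snippet; a hedged sketch for scoring tokens as original vs. replaced follows (not from the card — it assumes `AutoTokenizer` resolves the MeCab-based tokenizer, which requires the `fugashi` and `ipadic` packages):

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

# Hedged sketch (not from the card): the discriminator scores each token as
# original vs. replaced. Assumes AutoTokenizer resolves the MeCab-based
# tokenizer (needs fugashi and ipadic installed).
name = "izumi-lab/electra-small-japanese-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("東京大学で自然言語処理の研究をしています。", return_tensors="pt")
logits = model(**inputs).logits          # positive logit suggests a replaced token
print(torch.round(torch.sigmoid(logits)))
```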
ml6team/distilbart-tos-summarizer-tosdr
5c4b53b6b876b4a8b861b24901c4ba2793d7b0e7
2022-01-20T15:21:41.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:tosdr", "transformers", "summarization", "t&c", "tos", "distilbart", "distilbart-6-6", "autotrain_compatible" ]
summarization
false
ml6team
null
ml6team/distilbart-tos-summarizer-tosdr
39
12
transformers
6,515
--- language: - en tags: - summarization - t&c - tos - distilbart - distilbart-6-6 datasets: - tosdr metrics: - rouge1 - rouge2 - rougel inference: parameters: min_length: 5 max_length: 512 do_sample: False widget: - text: "In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides." --- # T&C Summarization Model T&C Summarization Model based on [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6), This abstractive summarization model is a part of a bigger end-to-end T&C summarizer pipeline which is preceded by LSA (Latent Semantic Analysis) extractive summarization. The extractive summarization shortens the T&C to be further summarized by this model. ## Finetuning Corpus We collaborated with [TOSDR](https://tosdr.org/) to work with their data, and the model is finetuned accordingly. The article and summarization text is reduced via extractive summarization before it is finetuned to the model. ## Contact Us https://ml6.eu/ . This abstractive model finetuning is the continuation of the Christmas Project 2021 done in ML6: https://bit.ly/XmasProjects . ## Load Finetuned Model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") ``` ## Code Sample This sample requires [sumy](https://pypi.org/project/sumy/), the LSA Extractive Summarization library, as additional package to run. 
``` import re import nltk nltk.download('punkt') from sumy.parsers.plaintext import PlaintextParser from sumy.nlp.tokenizers import Tokenizer from sumy.nlp.stemmers import Stemmer from sumy.summarizers.lsa import LsaSummarizer from transformers import AutoTokenizer, AutoModelForSeq2SeqLM LANGUAGE = "english" EXTRACTED_ARTICLE_SENTENCES_LEN = 12 stemmer = Stemmer(LANGUAGE) lsa_summarizer = LsaSummarizer(stemmer) tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr") def get_extractive_summary(text, sentences_count): parser = PlaintextParser.from_string(text, Tokenizer(LANGUAGE)) summarized_info = lsa_summarizer(parser.document, sentences_count) summarized_info = [element._text for element in summarized_info] return ' '.join(summarized_info) def get_summary(dict_summarizer_model, dict_tokenizer, text_content): text_content = get_extractive_summary(text_content, EXTRACTED_ARTICLE_SENTENCES_LEN) tokenizer = dict_tokenizer['tokenizer'] model = dict_summarizer_model['model'] inputs = tokenizer(text_content, max_length=dict_tokenizer['max_length'], truncation=True, return_tensors="pt") outputs = model.generate( inputs["input_ids"], max_length=dict_summarizer_model['max_length'], min_length=dict_summarizer_model['min_length'], ) summarized_text = tokenizer.decode(outputs[0]) match = re.search(r"<s>(.*)</s>", summarized_text) if match is not None: summarized_text = match.group(1) return summarized_text.replace('<s>', '').replace('</s>', '') test_tos = """ In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. 
Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides
"""

model_dict = {
    'model': model,
    'max_length': 512,
    'min_length': 4
}

tokenizer_dict = {
    'tokenizer': tokenizer,
    'max_length': 1024
}

print(get_summary(model_dict, tokenizer_dict, test_tos))
```
mrm8488/squeezebert-finetuned-squadv2
a45563f9d8689d843cf8c45742e58b161e905c4f
2020-12-11T21:55:26.000Z
[ "pytorch", "squeezebert", "question-answering", "en", "dataset:squad_v2", "arxiv:2006.11316", "arxiv:2004.02984", "transformers", "autotrain_compatible" ]
question-answering
false
mrm8488
null
mrm8488/squeezebert-finetuned-squadv2
39
null
transformers
6,516
---
language: en
datasets:
- squad_v2
---

# SqueezeBERT + SQuAD v2

[squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for the **Q&A** downstream task.

## Details of SqueezeBERT

The base model, `squeezebert-uncased`, is a pretrained model for the English language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives. SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/). The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone. More about the model [here](https://arxiv.org/abs/2004.02984).

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

**SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU with 25 GB of RAM using the following command:

```bash
python /content/transformers/examples/question-answering/run_squad.py \
  --model_type bert \
  --model_name_or_path squeezebert/squeezebert-uncased \
  --do_train \
  --do_eval \
  --do_lower_case \
  --train_file /content/dataset/train-v2.0.json \
  --predict_file /content/dataset/dev-v2.0.json \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 15 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/output_dir \
  --overwrite_output_dir \
  --version_2_with_negative \
  --save_steps 2000
```

## Test set Results 🧾

| Metric | # Value |
| ------ | --------- |
| **EM** | **69.98** |
| **F1** | **74.14** |

Model size: **195 MB**

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/squeezebert-finetuned-squadv2')
QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'Who did identified it ?'
})
# Output: {'answer': 'scientists.', 'end': 106, 'score': 0.9768241047859192, 'start': 96}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
nielsr/coref-roberta-base
d193901a739fcc27cfbf1b25fd712d99bb1b69e1
2021-01-21T08:18:55.000Z
[ "pytorch", "en", "dataset:wikipedia", "dataset:quoref", "dataset:docred", "dataset:fever", "dataset:gap", "dataset:winograd_wsc", "dataset:winogender", "dataset:glue", "arxiv:2004.06870", "transformers", "exbert", "license:apache-2.0" ]
null
false
nielsr
null
nielsr/coref-roberta-base
39
null
transformers
6,517
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---

# CorefRoBERTa base model

Pretrained model on the English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in [this paper](https://arxiv.org/abs/2004.06870) and first released in [this repository](https://github.com/thunlp/CorefBERT).

Disclaimer: The team releasing CorefRoBERTa did not write a model card for this model, so this model card has been written by me.

## Model description

CorefRoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

- Mention reference prediction (MRP): a novel training task proposed to enhance coreferential reasoning ability. MRP utilizes a mention reference masking strategy to mask one of the repeated mentions, then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the CorefRoBERTa model as inputs (a feature-extraction sketch follows the citation below).

### BibTeX entry and citation info

```bibtex
@misc{ye2020coreferential,
    title={Coreferential Reasoning Learning for Language Representation},
    author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
    year={2020},
    eprint={2004.06870},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
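A minimal sketch of the feature-extraction use the card describes (not from the original card — it assumes the checkpoint loads with the standard RoBERTa auto-classes, and the mean-pooling choice is an illustrative simplification):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hedged sketch: extract sentence features to feed a downstream classifier.
# Assumes the checkpoint is loadable via the standard auto-classes.
name = "nielsr/coref-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Alice told Bob that she would be late.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
features = hidden.mean(dim=1)                   # simple mean-pooled sentence vector
```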
panggi/t5-base-indonesian-summarization-cased
0fb07c370837e01d71085ec681fb945ab3fed823
2021-06-23T13:18:18.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "id", "dataset:indosum", "transformers", "pipeline:summarization", "summarization", "autotrain_compatible" ]
summarization
false
panggi
null
panggi/t5-base-indonesian-summarization-cased
39
null
transformers
6,518
--- language: id tags: - pipeline:summarization - summarization - t5 datasets: - indosum --- # Indonesian T5 Summarization Base Model Finetuned T5 base summarization model for Indonesian. ## Finetuning Corpus `t5-base-indonesian-summarization-cased` model is based on `t5-base-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned using [indosum](https://github.com/kata-ai/indosum) dataset. ## Load Finetuned Model ```python from transformers import T5Tokenizer, T5Model, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("panggi/t5-base-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("panggi/t5-base-indonesian-summarization-cased") ``` ## Code Sample ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("panggi/t5-base-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("panggi/t5-base-indonesian-summarization-cased") # https://www.sehatq.com/artikel/apa-itu-dispepsia-fungsional-ketahui-gejala-dan-faktor-risikonya ARTICLE_TO_SUMMARIZE = "Secara umum, dispepsia adalah kumpulan gejala pada saluran pencernaan seperti nyeri, sensasi terbakar, dan rasa tidak nyaman pada perut bagian atas. Pada beberapa kasus, dispepsia yang dialami seseorang tidak dapat diketahui penyebabnya. Jenis dispepsia ini disebut dengan dispepsia fungsional. Apa saja gejala dispepsia fungsional? Apa itu dispepsia fungsional? Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas atau ulu hati. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. Dispepsia ini memiliki nama “fungsional” karena kumpulan gejalanya tidak memiliki penyebab yang jelas. Dilihat dari fungsi dan struktur saluran pencernaan, dokter tidak menemukan hal yang salah. Namun, gejalanya bisa sangat mengganggu dan menyiksa. Dispepsia fungsional disebut juga dengan dispepsia nonulkus. Diperkirakan bahwa 20% masyarakat dunia menderita dispepsia fungsional. Kondisi ini berisiko tinggi dialami oleh wanita, perokok, dan orang yang mengonsumsi obat anti-peradangan nonsteroid (NSAID). Dispepsia fungsional bisa bersifat kronis dan mengganggu kehidupan penderitanya. Namun beruntung, ada beberapa strategi yang bisa diterapkan untuk mengendalikan gejala dispepsia ini. Strategi tersebut termasuk perubahan gaya hidup, obat-obatan, dan terapi.Ragam gejala dispepsia fungsional Gejala dispepsia fungsional dapat bervariasi antara satu pasien dengan pasien lain. Beberapa tanda yang bisa dirasakan seseorang, yaitu: Sensasi terbakar atau nyeri di saluran pencernaan bagian atas Perut kembung Cepat merasa kenyang walau baru makan sedikit Mual Muntah Bersendawa Rasa asam di mulut Penurunan berat badan Tekanan psikologis terkait dengan kondisi yang dialami Apa sebenarnya penyebab dispepsia fungsional? Sebagai penyakit fungsional, dokter mengkategorikan dispepsia ini sebagai penyakit yang tidak diketahui penyebabnya. Hanya saja, beberapa faktor bisa meningkatkan risiko seseorang terkena dispepsia fungsional. 
Faktor risiko tersebut, termasuk: Alergi terhadap zat tertentu Perubahan mikrobioma usus Infeksi, seperti yang dipicu oleh bakteriHelicobacter pylori Sekresi asam lambung yang tidak normal Peradangan pada saluran pencernaan bagian atas Gangguan pada fungsi lambung untuk mencerna makanan Pola makan tertentu Gaya hidup tidak sehat Stres Kecemasan atau depresi Efek samping pemakaian obat seperti obat antiinflamasi nonsteroid Penanganan untuk dispepsia fungsional Ada banyak pilihan pengobatan untuk dispepsia fungsional. Seperti yang disampaikan di atas, tidak ada penyebab tunggal dispepsia ini yang bisa diketahui. Gejala yang dialami antara satu pasien juga mungkin amat berbeda dari orang lain. Dengan demikian, jenis pengobatan dispepsia fungsional juga akan bervariasi. Beberapa pilihan strategi penanganan untuk dispepsia fungsional, meliputi: 1. Obat-obatan Ada beberapa jenis obat yang mungkin akan diberikan dokter, seperti Obat penetral asam lambung yang disebut penghambat reseptor H2 Obat penghambat produksi asam lambung yang disebut proton pump inhibitors Obat untuk mengendalikan gas di perut yang mengandung simetikon Antidepresan seperti amitriptyline Obat penguat kerongkongan yang disebut agen prokinetik Obat untuk pengosongan isi lambung seperti metoclopramide Antibiotik jika dokter mendeteksi adanya infeksi bakteri H. pylori 2. Anjuran terkait perubahan gaya hidup Selain obat-obatan, dokter akan memberikan rekomendasi perubahan gaya hidup yang harus diterapkan pasien. Tips terkait perubahan gaya hidup termasuk: Makan lebih sering namun dengan porsi yang lebih sedikit Menjauhi makanan berlemak karena memperlambat pengosongan makanan di lambung Menjauhi jenis makanan lain yang memicu gejala dispepsia, seperti makanan pedas, makanan tinggi asam, produk susu, dan produk kafein Menjauhi rokok Dokter juga akan meminta pasien untuk mencari cara untuk mengendalikan stres, tidur dengan kepala lebih tinggi, dan menjalankan usaha untuk mengendalikan berat badan. Apakah penyakit dispepsia itu berbahaya? Dispepsia, termasuk dispepsia fungsional, dapat menjadi kronis dengan gejala yang menyiksa. Jika tidak ditangani, dispepsia tentu dapat berbahaya dan mengganggu kehidupan pasien. Segera hubungi dokter apabila Anda merasakan gejala dispepsia, terlebih jika tidak merespons obat-obatan yang dijual bebas. Catatan dari SehatQ Dispepsia fungsional adalah kumpulan gejala pada saluran pencernaan bagian atas yang tidak diketahui penyebabnya. Dispepsia fungsional dapat ditangani dengan kombinasi obat-obatan dan perubahan gaya hidup. Jika masih memiliki pertanyaan terkait dispepsia fungsional, Anda bisa menanyakan ke dokter di aplikasi kesehatan keluarga SehatQ. Aplikasi SehatQ bisa diunduh gratis di Appstore dan Playstore yang berikan informasi penyakit terpercaya." # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, max_length=100, num_beams=2, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` 'Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. 
'
``` ## Acknowledgement Thanks to Immanuel Drexel for his article [Text Summarization, Extractive, T5, Bahasa Indonesia, Huggingface’s Transformers](https://medium.com/analytics-vidhya/text-summarization-t5-bahasa-indonesia-huggingfaces-transformers-ee9bfe368e2f)
pere/norwegian-roberta-base
08afd54fbcfe247ba66bbe5024facfafad5c633e
2021-11-29T20:32:14.000Z
[ "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
pere
null
pere/norwegian-roberta-base
39
null
transformers
6,519
Entry not found
vslaykovsky/roberta-news-duplicates
046b363bc6747ed13a6b176468d2700fd7acdbd0
2021-05-20T23:07:11.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers" ]
text-classification
false
vslaykovsky
null
vslaykovsky/roberta-news-duplicates
39
null
transformers
6,520
Entry not found
ghadeermobasher/BC5CDR-Disease-Modified_scibert_scivocab_uncased
a86a29368b40742102fa8e5554ed1b664408b4ab
2022-02-25T18:11:07.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BC5CDR-Disease-Modified_scibert_scivocab_uncased
39
null
transformers
6,521
Entry not found
dbmdz/bert-base-historic-multilingual-64k-td-cased
cdd9ea90daac4e436a3df382966a5da8ab6f2ff2
2022-06-03T09:48:41.000Z
[ "pytorch", "bert", "fill-mask", "multilingual", "arxiv:2205.15575", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
dbmdz
null
dbmdz/bert-base-historic-multilingual-64k-td-cased
39
null
transformers
6,522
--- language: multilingual license: mit widget: - text: "and I cannot conceive the reafon why [MASK] hath" - text: "Täkäläinen sanomalehdistö [MASK] erit - täin" - text: "Det vore [MASK] häller nödvändigt att be" - text: "Comme, à cette époque [MASK] était celle de la" - text: "In [MASK] an atmosphärischen Nahrungsmitteln" --- # hmBERT: Historical Multilingual Language Models for Named Entity Recognition More information about our hmBERT model can be found in our new paper: ["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575). ## Languages Our Historic Language Models Zoo contains support for the following languages - incl. their training data source: | Language | Training data | Size | -------- | ------------- | ---- | German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered) | French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered) | English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered) | Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB | Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB # Corpora Stats ## German Europeana Corpus We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size and use less-noisier data: | OCR confidence | Size | -------------- | ---- | **0.60** | 28GB | 0.65 | 18GB | 0.70 | 13GB For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution: ![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png) ## French Europeana Corpus Like German, we use different ocr confidence thresholds: | OCR confidence | Size | -------------- | ---- | 0.60 | 31GB | 0.65 | 27GB | **0.70** | 27GB | 0.75 | 23GB | 0.80 | 11GB For the final corpus we use a OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution: ![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png) ## British Library Corpus Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering: | Years | Size | ----------------- | ---- | ALL | 24GB | >= 1800 && < 1900 | 24GB We use the year filtered variant. The following plot shows a tokens per year distribution: ![British Library Corpus Stats](stats/figures/bl_corpus_stats.png) ## Finnish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.2GB The following plot shows a tokens per year distribution: ![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png) ## Swedish Europeana Corpus | OCR confidence | Size | -------------- | ---- | 0.60 | 1.1GB The following plot shows a tokens per year distribution: ![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png) ## All Corpora The following plot shows a tokens per year distribution of the complete training corpus: ![All Corpora Stats](stats/figures/all_corpus_stats.png) # Multilingual Vocab generation For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB. 
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs: | Language | Size | -------- | ---- | German | 10GB | French | 10GB | English | 10GB | Finnish | 9.5GB | Swedish | 9.7GB We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora: | Language | NER corpora | -------- | ------------------ | German | CLEF-HIPE, NewsEye | French | CLEF-HIPE, NewsEye | English | CLEF-HIPE | Finnish | NewsEye | Swedish | NewsEye Breakdown of subword fertility rate and unknown portion per language for the 32k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.43 | 0.0004 | French | 1.25 | 0.0001 | English | 1.25 | 0.0 | Finnish | 1.69 | 0.0007 | Swedish | 1.43 | 0.0 Breakdown of subword fertility rate and unknown portion per language for the 64k vocab: | Language | Subword fertility | Unknown portion | -------- | ------------------ | --------------- | German | 1.31 | 0.0004 | French | 1.16 | 0.0001 | English | 1.17 | 0.0 | Finnish | 1.54 | 0.0007 | Swedish | 1.32 | 0.0 # Final pretraining corpora We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here: | Language | Size | -------- | ---- | German | 28GB | French | 27GB | English | 24GB | Finnish | 27GB | Swedish | 27GB Total size is 130GB. # Pretraining Details about the pretraining are coming soon. # Acknowledgments Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
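# Example usage A minimal fill-mask sketch — the sentence reuses one of the widget examples above, and the pipeline call is plain Transformers usage rather than anything specific to this checkpoint:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-multilingual-64k-td-cased")

# Historic English widget example (note the long-s OCR artifact in "reafon").
print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```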
JoofytheBloofy/T5LargeTest
11497a056734d3b703739f53ba4cb5084cdca45c
2022-03-27T16:26:03.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
JoofytheBloofy
null
JoofytheBloofy/T5LargeTest
39
null
transformers
6,523
--- tags: - summarization ---
NbAiLab/nb-bert-ncc-male2female
44016b54dc71ea3b34b8d8e25a392244b9cbb518
2022-04-27T18:33:23.000Z
[ "pytorch", "jax", "tensorboard", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
NbAiLab
null
NbAiLab/nb-bert-ncc-male2female
39
null
transformers
6,524
Entry not found
UrukHan/t5-russian-summarization
c300ff14fea8ecfbf9a7e73a4d30cceeffa01d07
2022-04-04T09:51:50.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
UrukHan
null
UrukHan/t5-russian-summarization
39
null
transformers
6,525
--- tags: - generated_from_trainer model-index: - name: t5-russian-summarization results: [] widget: - text: "Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее." --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> --- # t5-russian-summarization --- A model for summarizing Russian text, e.g. text recognized from audio. The output of my speech-recognition model https://huggingface.co/UrukHan/wav2vec2-russian can be fed into this model; it was tested on a random YouTube video. <table border="0"> <tr> <td><b style="font-size:30px">Input</b></td> <td><b style="font-size:30px">Output</b></td> </tr> <tr> <td>Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее.</td> <td>Запад ввел новые санкции против России</td> </tr> </table> # --- Training dataset: UrukHan/t5-russian-summarization : https://huggingface.co/datasets/UrukHan/t5-russian-summarization --- # Running inference. A commented walkthrough is available in Colab: https://colab.research.google.com/drive/1ame2va9_NflYqy4RZ07HYmQ0moJYy7w2?usp=sharing :

```python
# Install the transformers library
!pip install transformers

# Import the libraries
from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast

# Set the name of the chosen model from the hub
MODEL_NAME = 'UrukHan/t5-russian-summarization'
MAX_INPUT = 256

# Load the model and tokenizer
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Input data (a list of phrases or a single text)
input_sequences = ['Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее.']
# or a single phrase: input_sequences = 'сеглдыя хорош ден'

task_prefix = "Spell correct: "  # the same prefix is used in the training notebook below

# Tokenize the data
if type(input_sequences) != list:
    input_sequences = [input_sequences]

encoded = tokenizer(
    [task_prefix + sequence for sequence in input_sequences],
    padding="longest",
    max_length=MAX_INPUT,
    truncation=True,
    return_tensors="pt",
)

# Generate predictions (unpack the encoding into input_ids / attention_mask)
predicts = model.generate(**encoded)

# Decode the predictions
tokenizer.batch_decode(predicts, skip_special_tokens=True)
```

# --- # A configured notebook for running training and pushing the model to your own repository on the Hugging Face hub: # https://colab.research.google.com/drive/1H4IoasDqa2TEjGivVDp-4Pdpm0oxrCWd?usp=sharing #

```python
# Install the libraries
!pip install datasets
!apt install git-lfs
!pip install transformers
!pip install sentencepiece
!pip install rouge_score

# Import the libraries
import numpy as np
import pandas as pd
from datasets import Dataset
import tensorflow as tf  # imported in the original notebook, not used below
import nltk
from transformers import T5TokenizerFast, Seq2SeqTrainingArguments, Seq2SeqTrainer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
import torch
from transformers.optimization import Adafactor, AdafactorSchedule
from datasets import load_dataset, load_metric

# Load the metric and reference data
raw_datasets = load_dataset("xsum")
metric = load_metric("rouge")
nltk.download('punkt')

# Enter your Hugging Face hub token
from huggingface_hub import notebook_login
notebook_login()

# Set the parameters
REPO = "t5-russian-summarization"  # name of the target repository
MODEL_NAME = "UrukHan/t5-russian-summarization"  # name of the chosen model from the hub
MAX_INPUT = 256   # maximum input length in tokens (roughly half a word per token)
MAX_OUTPUT = 64   # maximum prediction length in tokens (can be reduced for summarization and other tasks with short outputs)
BATCH_SIZE = 8
DATASET = 'UrukHan/t5-russian-summarization'  # name of the dataset

# Load the dataset (using other data formats is described below)
data = load_dataset(DATASET)

# Load the model and tokenizer
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.config.max_length = MAX_OUTPUT  # defaults to 20, which would truncate the generated sequences

# Optional: comment this out after the first push to the repository
tokenizer.push_to_hub(REPO)

train = data['train']
test = data['test'].train_test_split(0.02)['test']  # shrink the test split so evaluation between epochs doesn't take too long

data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)  # return_tensors="tf"

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Rouge expects a newline after each sentence
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Extract a few results
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    # Add mean generated length
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)
    return {k: round(v, 4) for k, v in result.items()}

training_args = Seq2SeqTrainingArguments(
    output_dir=REPO,
    #overwrite_output_dir=True,
    evaluation_strategy='steps',
    #learning_rate=2e-5,
    eval_steps=5000,
    save_steps=5000,
    num_train_epochs=1,
    predict_with_generate=True,
    per_device_train_batch_size=BATCH_SIZE,
    per_device_eval_batch_size=BATCH_SIZE,
    fp16=True,
    save_total_limit=2,
    #generation_max_length=256,
    #generation_num_beams=4,
    weight_decay=0.005,
    #logging_dir='logs',
    push_to_hub=True,
)

# Choose the optimizer manually: in its original setup, T5 uses the Adafactor optimizer
optimizer = Adafactor(
    model.parameters(),
    lr=1e-5,
    eps=(1e-30, 1e-3),
    clip_threshold=1.0,
    decay_rate=-0.8,
    beta1=None,
    weight_decay=0.0,
    relative_step=False,
    scale_parameter=False,
    warmup_init=False,
)
lr_scheduler = AdafactorSchedule(optimizer)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train,
    eval_dataset=test,
    optimizers=(optimizer, lr_scheduler),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

trainer.train()
trainer.push_to_hub()
```

# --- # Example of converting arrays for this network #

```python
input_data = ['Запад после начала российской специальной операции по демилитаризации Украины ввел несколько раундов новых экономических санкций. В Кремле новые ограничения назвали серьезными, но отметили, что Россия готовилась к ним заранее.']
output_data = ['Запад ввел новые санкции против России']

# Tokenize the input data
task_prefix = "Spell correct: "
input_sequences = input_data
encoding = tokenizer(
    [task_prefix + sequence for sequence in input_sequences],
    padding="longest",
    max_length=MAX_INPUT,
    truncation=True,
    return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask

# Tokenize the target data
target_encoding = tokenizer(output_data, padding="longest", max_length=MAX_OUTPUT, truncation=True)
labels = target_encoding.input_ids
# replace padding token ids of the labels by -100
labels = torch.tensor(labels)
labels[labels == tokenizer.pad_token_id] = -100

# Convert our data into a Dataset
data = Dataset.from_pandas(pd.DataFrame({'input_ids': list(np.array(input_ids)), 'attention_mask': list(np.array(attention_mask)), 'labels': list(np.array(labels))}))
data = data.train_test_split(0.02)
# and pass it to the trainer as: train_dataset = data['train'], eval_dataset = data['test']
```
yihsuan/albert-base-chinese-0407-ner
7113e82d9ecbadb3045163efcd72984059602b89
2022-04-07T03:20:43.000Z
[ "pytorch", "tensorboard", "albert", "token-classification", "List of ISO 639-1 code for your language", "zh", "transformers", "autotrain_compatible" ]
token-classification
false
yihsuan
null
yihsuan/albert-base-chinese-0407-ner
39
null
transformers
6,526
--- language: - "List of ISO 639-1 code for your language" - zh widget: - text: "中央疫情指揮中心臨時記者會宣布全院區為紅區,擴大隔離,但鄭文燦早在七十二小時前就主張,只要是先前在桃園醫院住院、轉院的患者與陪病家屬,都要居家隔離" example_title: "範例ㄧ" - text: "台東地檢署21日指揮警方前往張靜的事務所及黃姓女友所經營的按摩店進行搜索" example_title: "範例二" - text: "各地停電事件頻傳,即便經濟部與台電均否認「台灣缺電」,但也難消國人的疑慮。" example_title: "範例三" --- --- license: gpl-3.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: albert-base-chinese-0407-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-chinese-0407-ner This model is a fine-tuned version of [ckiplab/albert-base-chinese](https://huggingface.co/ckiplab/albert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0948 - Precision: 0.8603 - Recall: 0.8871 - F1: 0.8735 - Accuracy: 0.9704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.3484 | 0.05 | 500 | 0.5395 | 0.1841 | 0.1976 | 0.1906 | 0.8465 | | 0.3948 | 0.09 | 1000 | 0.2910 | 0.6138 | 0.7113 | 0.6590 | 0.9263 | | 0.2388 | 0.14 | 1500 | 0.2030 | 0.6628 | 0.7797 | 0.7165 | 0.9414 | | 0.1864 | 0.18 | 2000 | 0.1729 | 0.7490 | 0.7935 | 0.7706 | 0.9498 | | 0.1754 | 0.23 | 2500 | 0.1641 | 0.7415 | 0.7869 | 0.7635 | 0.9505 | | 0.1558 | 0.28 | 3000 | 0.1532 | 0.7680 | 0.8002 | 0.7838 | 0.9530 | | 0.1497 | 0.32 | 3500 | 0.1424 | 0.7865 | 0.8282 | 0.8068 | 0.9555 | | 0.1488 | 0.37 | 4000 | 0.1373 | 0.7887 | 0.8111 | 0.7997 | 0.9553 | | 0.1361 | 0.42 | 4500 | 0.1311 | 0.7942 | 0.8382 | 0.8156 | 0.9590 | | 0.1335 | 0.46 | 5000 | 0.1264 | 0.7948 | 0.8423 | 0.8179 | 0.9596 | | 0.1296 | 0.51 | 5500 | 0.1242 | 0.8129 | 0.8416 | 0.8270 | 0.9603 | | 0.1338 | 0.55 | 6000 | 0.1315 | 0.7910 | 0.8588 | 0.8235 | 0.9586 | | 0.1267 | 0.6 | 6500 | 0.1193 | 0.8092 | 0.8399 | 0.8243 | 0.9609 | | 0.1207 | 0.65 | 7000 | 0.1205 | 0.8021 | 0.8469 | 0.8239 | 0.9601 | | 0.1214 | 0.69 | 7500 | 0.1201 | 0.7969 | 0.8489 | 0.8220 | 0.9605 | | 0.1168 | 0.74 | 8000 | 0.1134 | 0.8087 | 0.8607 | 0.8339 | 0.9620 | | 0.1162 | 0.78 | 8500 | 0.1127 | 0.8177 | 0.8492 | 0.8331 | 0.9625 | | 0.1202 | 0.83 | 9000 | 0.1283 | 0.7986 | 0.8550 | 0.8259 | 0.9580 | | 0.1135 | 0.88 | 9500 | 0.1101 | 0.8213 | 0.8572 | 0.8389 | 0.9638 | | 0.1121 | 0.92 | 10000 | 0.1097 | 0.8190 | 0.8588 | 0.8384 | 0.9635 | | 0.1091 | 0.97 | 10500 | 0.1088 | 0.8180 | 0.8521 | 0.8347 | 0.9632 | | 0.1058 | 1.02 | 11000 | 0.1085 | 0.8136 | 0.8716 | 0.8416 | 0.9630 | | 0.0919 | 1.06 | 11500 | 0.1079 | 0.8309 | 0.8566 | 0.8436 | 0.9646 | | 0.0914 | 1.11 | 12000 | 0.1079 | 0.8423 | 0.8542 | 0.8482 | 0.9656 | | 0.0921 | 1.15 | 12500 | 0.1109 | 0.8312 | 0.8647 | 0.8476 | 0.9646 | | 0.0926 | 1.2 | 13000 | 0.1240 | 0.8413 | 0.8488 | 0.8451 | 0.9637 | | 0.0914 | 1.25 | 13500 | 
0.1040 | 0.8336 | 0.8666 | 0.8498 | 0.9652 | | 0.0917 | 1.29 | 14000 | 0.1032 | 0.8352 | 0.8707 | 0.8526 | 0.9662 | | 0.0928 | 1.34 | 14500 | 0.1052 | 0.8347 | 0.8656 | 0.8498 | 0.9651 | | 0.0906 | 1.38 | 15000 | 0.1032 | 0.8399 | 0.8619 | 0.8507 | 0.9662 | | 0.0903 | 1.43 | 15500 | 0.1074 | 0.8180 | 0.8708 | 0.8436 | 0.9651 | | 0.0889 | 1.48 | 16000 | 0.0990 | 0.8367 | 0.8713 | 0.8537 | 0.9670 | | 0.0914 | 1.52 | 16500 | 0.1055 | 0.8508 | 0.8506 | 0.8507 | 0.9661 | | 0.0934 | 1.57 | 17000 | 0.0979 | 0.8326 | 0.8740 | 0.8528 | 0.9669 | | 0.0898 | 1.62 | 17500 | 0.1022 | 0.8393 | 0.8615 | 0.8502 | 0.9668 | | 0.0869 | 1.66 | 18000 | 0.0962 | 0.8484 | 0.8762 | 0.8621 | 0.9682 | | 0.089 | 1.71 | 18500 | 0.1008 | 0.8447 | 0.8714 | 0.8579 | 0.9674 | | 0.0927 | 1.75 | 19000 | 0.0986 | 0.8379 | 0.8749 | 0.8560 | 0.9673 | | 0.0883 | 1.8 | 19500 | 0.0965 | 0.8518 | 0.8749 | 0.8632 | 0.9688 | | 0.0965 | 1.85 | 20000 | 0.0937 | 0.8412 | 0.8766 | 0.8585 | 0.9682 | | 0.0834 | 1.89 | 20500 | 0.0920 | 0.8451 | 0.8862 | 0.8652 | 0.9687 | | 0.0817 | 1.94 | 21000 | 0.0943 | 0.8439 | 0.8800 | 0.8616 | 0.9686 | | 0.088 | 1.99 | 21500 | 0.0927 | 0.8483 | 0.8762 | 0.8620 | 0.9683 | | 0.0705 | 2.03 | 22000 | 0.0993 | 0.8525 | 0.8783 | 0.8652 | 0.9690 | | 0.0709 | 2.08 | 22500 | 0.0976 | 0.8610 | 0.8697 | 0.8653 | 0.9689 | | 0.0655 | 2.12 | 23000 | 0.0997 | 0.8585 | 0.8665 | 0.8625 | 0.9683 | | 0.0656 | 2.17 | 23500 | 0.0966 | 0.8569 | 0.8822 | 0.8694 | 0.9695 | | 0.0698 | 2.22 | 24000 | 0.0955 | 0.8604 | 0.8775 | 0.8689 | 0.9696 | | 0.065 | 2.26 | 24500 | 0.0971 | 0.8614 | 0.8780 | 0.8696 | 0.9697 | | 0.0653 | 2.31 | 25000 | 0.0959 | 0.8600 | 0.8787 | 0.8692 | 0.9698 | | 0.0685 | 2.35 | 25500 | 0.1001 | 0.8610 | 0.8710 | 0.8659 | 0.9690 | | 0.0684 | 2.4 | 26000 | 0.0969 | 0.8490 | 0.8877 | 0.8679 | 0.9690 | | 0.0657 | 2.45 | 26500 | 0.0954 | 0.8532 | 0.8832 | 0.8680 | 0.9696 | | 0.0668 | 2.49 | 27000 | 0.0947 | 0.8604 | 0.8793 | 0.8698 | 0.9695 | | 0.0644 | 2.54 | 27500 | 0.0989 | 0.8527 | 0.8790 | 0.8656 | 0.9696 | | 0.0685 | 2.59 | 28000 | 0.0955 | 0.8596 | 0.8772 | 0.8683 | 0.9700 | | 0.0702 | 2.63 | 28500 | 0.0937 | 0.8585 | 0.8837 | 0.8709 | 0.9700 | | 0.0644 | 2.68 | 29000 | 0.0946 | 0.8605 | 0.8830 | 0.8716 | 0.9702 | | 0.065 | 2.72 | 29500 | 0.0953 | 0.8617 | 0.8822 | 0.8719 | 0.9701 | | 0.063 | 2.77 | 30000 | 0.0943 | 0.8597 | 0.8848 | 0.8721 | 0.9701 | | 0.0638 | 2.82 | 30500 | 0.0941 | 0.8619 | 0.8846 | 0.8731 | 0.9702 | | 0.066 | 2.86 | 31000 | 0.0942 | 0.8608 | 0.8847 | 0.8726 | 0.9701 | | 0.0589 | 2.91 | 31500 | 0.0952 | 0.8632 | 0.8836 | 0.8733 | 0.9704 | | 0.0568 | 2.95 | 32000 | 0.0948 | 0.8603 | 0.8871 | 0.8735 | 0.9704 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
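### Example usage (sketch) This card ships no inference snippet, so the following is an unofficial sketch: ckiplab-derived ALBERT checkpoints are typically paired with `BertTokenizerFast`, the example sentence reuses the first widget text above, and `aggregation_strategy="simple"` is our own choice for grouping word pieces into entities.

```python
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

# ckiplab-derived ALBERT models are typically used with BertTokenizerFast.
tokenizer = BertTokenizerFast.from_pretrained("yihsuan/albert-base-chinese-0407-ner")
model = AutoModelForTokenClassification.from_pretrained("yihsuan/albert-base-chinese-0407-ner")

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("中央疫情指揮中心臨時記者會宣布全院區為紅區,擴大隔離"))
```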
GPL/quora-tsdae-msmarco-distilbert-margin-mse
873c111d4d11af7983370f15675e57d5f2f9043a
2022-04-19T16:45:50.000Z
[ "pytorch", "distilbert", "feature-extraction", "transformers" ]
feature-extraction
false
GPL
null
GPL/quora-tsdae-msmarco-distilbert-margin-mse
39
null
transformers
6,527
Entry not found
xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text
0268512aaefa5d26d4c4eb8dd873b3f0aca51e0a
2022-04-26T05:57:49.000Z
[ "pytorch", "bart", "text2text-generation", "en", "arxiv:2203.07836", "transformers", "AMRBART", "license:mit", "autotrain_compatible" ]
text2text-generation
false
xfbai
null
xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text
39
null
transformers
6,528
--- language: en tags: - AMRBART license: mit --- ## AMRBART-large-finetuned-AMR2.0-AMR2Text This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR2.0 dataset. It achieves a SacreBLEU score of 45.7 on the evaluation set. More details are introduced in the paper: [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al. in ACL 2022. ## Model description Same as AMRBART. ## Training data The model is fine-tuned on [AMR2.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 36,521 training instances, 1,368 validation instances, and 1,371 test instances. ## Intended uses & limitations You can use the model for AMR-to-text generation, but it is mostly intended to be used in the news domain. ## How to use Here is how to initialize this model in PyTorch:

```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text")
```

Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing. ## BibTeX entry and citation info Please cite this paper if you find this model helpful:

```bibtex
@inproceedings{bai-etal-2022-graph,
    title = "Graph Pre-training for {AMR} Parsing and Generation",
    author = "Bai, Xuefeng and Chen, Yulong and Zhang, Yue",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "todo",
    doi = "todo",
    pages = "todo"
}
```
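## Generation example (sketch) The snippet below only illustrates the generation call: the linearized AMR string is a made-up placeholder, real inputs must be produced with the preprocessing scripts from the repository above, and we assume the checkpoint ships tokenizer files loadable via `AutoTokenizer`.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text")
tokenizer = AutoTokenizer.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text")

# Hypothetical linearization -- real inputs come from the project's preprocessing scripts.
linearized_amr = "( want :ARG0 ( boy ) :ARG1 ( go :ARG0 ( boy ) ) )"
inputs = tokenizer(linearized_amr, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```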
hustvl/yolos-small-300
98945dea89fc0e6396dfc3fc10d6c4de169c773d
2022-06-27T08:38:34.000Z
[ "pytorch", "yolos", "object-detection", "dataset:coco", "arxiv:2106.00666", "transformers", "vision", "license:apache-2.0" ]
object-detection
false
hustvl
null
hustvl/yolos-small-300
39
1
transformers
6,529
--- license: apache-2.0 tags: - object-detection - vision datasets: - coco widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg example_title: Savanna - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport --- # YOLOS (small-sized) model (300 pre-train epochs) YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS). Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models. ### How to use Here is how to use this model: ```python from transformers import YolosFeatureExtractor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small-300') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small-300') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the feature extractor and model support PyTorch. ## Training data The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### Training The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO. ## Evaluation results This model achieves an AP (average precision) of **36.1** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. 
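To turn these raw outputs into readable detections, one common post-processing sketch looks as follows — the 0.9 confidence threshold is an arbitrary choice of ours, and the boxes come out in normalized center-x/center-y/width/height format:

```python
# Keep only confident predictions and map class indices to COCO names.
probas = outputs.logits.softmax(-1)[0, :, :-1]  # drop the trailing "no object" class
keep = probas.max(-1).values > 0.9              # confidence threshold (an assumption)

for p, (cx, cy, w, h) in zip(probas[keep], outputs.pred_boxes[0, keep]):
    label = model.config.id2label[p.argmax().item()]
    print(f"{label}: {p.max():.2f} at center=({cx:.2f}, {cy:.2f}), size=({w:.2f}, {h:.2f})")
```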
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-00666, author = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu}, title = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection}, journal = {CoRR}, volume = {abs/2106.00666}, year = {2021}, url = {https://arxiv.org/abs/2106.00666}, eprinttype = {arXiv}, eprint = {2106.00666}, timestamp = {Fri, 29 Apr 2022 19:49:16 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
BlackSamorez/ebanko-large
f6b76dd3570ef2dcc2b1a18d3fe3aef7589c5ce0
2022-04-28T19:19:55.000Z
[ "pytorch", "t5", "text2text-generation", "ru", "transformers", "PyTorch", "Transformers", "autotrain_compatible" ]
text2text-generation
false
BlackSamorez
null
BlackSamorez/ebanko-large
39
null
transformers
6,530
--- language: - ru license: apache-2.0 tags: - PyTorch - Transformers thumbnail: "https://github.com/sberbank-ai/model-zoo" --- # ebanko-large Model was fine-tuned by [black_samorez](https://github.com/BlackSamorez). Based off [sberbank-ai/ruT5-large](https://huggingface.co/sberbank-ai/ruT5-large). Fine-tuned on the train splits of [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments) and [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022) to toxify text. * Task: `text2text generation` * Type: `encoder-decoder` * Tokenizer: `bpe` * Dict size: `32 101` * Num Parameters: `737 M`
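A minimal inference sketch, assuming the checkpoint is driven like any other ruT5 seq2seq model — the input sentence and the absence of a task prefix are assumptions, since the card does not document the exact input format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("BlackSamorez/ebanko-large")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackSamorez/ebanko-large")

inputs = tokenizer("Мне не нравится твоя работа", return_tensors="pt")  # illustrative input
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```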
HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg
edad04dfeba14cf700bc442318bd951f16256fc8
2022-05-08T23:10:17.000Z
[ "pytorch", "roberta", "text-classification", "dataset:snli", "dataset:anli", "dataset:multi_nli", "dataset:multi_nli_mismatch", "dataset:fever", "arxiv:2104.14690", "arxiv:2203.13602", "transformers", "zero-shot-classification" ]
zero-shot-classification
false
HiTZ
null
HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg
39
null
transformers
6,531
--- pipeline_tag: zero-shot-classification datasets: - snli - anli - multi_nli - multi_nli_mismatch - fever --- # A2T Entailment model **Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers). Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format. For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers: - [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/) - [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]() ## About the model The model name describes the configuration used for training as follows: <!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ --> <h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3> - `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>. - `NLI_datasets`: The NLI datasets used for pivot training. - `S`: Stanford Natural Language Inference (SNLI) dataset. - `M`: Multi Natural Language Inference (MNLI) dataset. - `F`: Fever-nli dataset. - `A`: Adversarial Natural Language Inference (ANLI) dataset. - `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg. Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results. ## Cite If you use this model, consider citing the following publications:

```bibtex
@inproceedings{sainz-etal-2021-label,
    title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
    author = "Sainz, Oscar and Lopez de Lacalle, Oier and Labaka, Gorka and Barrena, Ander and Agirre, Eneko",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.92",
    doi = "10.18653/v1/2021.emnlp-main.92",
    pages = "1199--1212",
}
```
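## Quick usage sketch Since the checkpoints are compatible with the standard zero-shot pipeline, a minimal (purely illustrative) call looks as follows; for argument-extraction use, the candidate labels should instead follow the verbalizations used by the Ask2Transformers library, together with the `[[...]]` trigger marking described above:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg_ACE-arg",
)

result = classifier(
    "The company [[acquired]] its rival for $5 billion.",  # hypothetical sentence with a marked trigger
    candidate_labels=["acquisition", "merger", "lawsuit"],  # illustrative labels only
)
print(result["labels"][0], result["scores"][0])
```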
kornosk/polibertweet-political-twitter-roberta-mlm
7ebea6e1dab8f54f95aec14e4d3e0ffa5a407da2
2022-06-17T23:45:14.000Z
[ "pytorch", "roberta", "fill-mask", "en", "transformers", "twitter", "masked-token-prediction", "bertweet", "election2020", "politics", "license:gpl-3.0", "autotrain_compatible" ]
fill-mask
false
kornosk
null
kornosk/polibertweet-political-twitter-roberta-mlm
39
null
transformers
6,532
--- language: "en" tags: - twitter - masked-token-prediction - bertweet - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Political Election 2020 Pre-trained weights for PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter, LREC 2022. Please see the [official repository](https://github.com/GU-DataLab/PoliBERTweet) for more detail. We use the initialized weights from [BERTweet](https://huggingface.co/vinai/bertweet-base) or `vinai/bertweet-base`. # Training Data This model is pre-trained on over 83 million English tweets about the 2020 US Presidential Election. # Training Objective This model is initialized with BERTweet and trained with an MLM objective. # Usage This pre-trained language model **can be fine-tunned to any downstream task (e.g. classification)**. ```python from transformers import AutoModel, AutoTokenizer, pipeline import torch # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/polibertweet-mlm" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModel.from_pretrained(pretrained_LM_path) # fill mask example = "Trump is the <mask> of USA" fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer) outputs = fill_mask(example) print(outputs) # see embeddings inputs = tokenizer(example, return_tensors="pt") outputs = model(**inputs) print(outputs) # OR you can use this model to train on your downstream task! # please consider citing our paper if you feel this is useful :) ``` # Reference - [PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter](XXX), LREC 2022. # Citation ```bibtex @inproceedings{kawintiranon2022polibertweet, title = {PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter}, author = {Kawintiranon, Kornraphop and Singh, Lisa}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, year = {2022}, publisher = {European Language Resources Association} } ```
okho0653/Bio_ClinicalBERT-zero-shot-sentiment-model
fee1ce229a671ff13fd2f111ba21c4529ab0909a
2022-05-06T05:57:30.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
okho0653
null
okho0653/Bio_ClinicalBERT-zero-shot-sentiment-model
39
null
transformers
6,533
--- license: mit tags: - generated_from_trainer model-index: - name: Bio_ClinicalBERT-zero-shot-sentiment-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bio_ClinicalBERT-zero-shot-sentiment-model This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Jatin-WIAI/doctor_patient_clf_en
78115f1d9f4649339685a365649554ef018c9cb8
2022-05-19T09:53:05.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Jatin-WIAI
null
Jatin-WIAI/doctor_patient_clf_en
39
null
transformers
6,534
Entry not found
connectivity/feather_berts_28
9beae1bc72b121e5703987cc2a003eb7d1bbb3a8
2022-05-21T14:28:23.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
connectivity
null
connectivity/feather_berts_28
39
null
transformers
6,535
Entry not found
thunninoi/wav2vec2-japanese-vtuber
1d9c01d13d10df74385de5b468bb692ac6ac377b
2022-07-08T17:33:48.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
thunninoi
null
thunninoi/wav2vec2-japanese-vtuber
39
null
transformers
6,536
--- tags: - generated_from_trainer model-index: - name: checkpoints2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoints2 This model is a fine-tuned version of [ttop324/wav2vec2-live-japanese](https://huggingface.co/ttop324/wav2vec2-live-japanese) on the extracted and cleaned transcripts of the [Holo No Graffiti](https://youtube.com/playlist?list=PLS51cvjOMUKwKtxe_IxhbBBvQ9XpiL1W_) playlist. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Evaluated on the Japanese test split of [common_voice](https://huggingface.co/datasets/common_voice/viewer/ja/test): - WER: 32.940524 - CER: 15.251746 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 3 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
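### Example usage (sketch) Wav2Vec2 checkpoints work with the standard ASR pipeline; the audio path below is a placeholder, and decoding most audio formats requires ffmpeg:

```python
from transformers import pipeline

# Load the fine-tuned Japanese checkpoint into a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="thunninoi/wav2vec2-japanese-vtuber")
print(asr("clip.wav")["text"])  # "clip.wav" is a placeholder path
```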
aomar85/fine-tuned-arabert-random-negative
44c6420e66595a90e1837aa5da74b456f99775e5
2022-05-29T22:24:58.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
aomar85
null
aomar85/fine-tuned-arabert-random-negative
39
null
transformers
6,537
--- tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: fine-tuned-arabert-random-negative results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-arabert-random-negative This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0080 - Accuracy: 0.9989 - Precision: 0.9990 - Recall: 0.9988 - F1: 0.9989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:------:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0105 | 1.0 | 62920 | 0.0061 | 0.9986 | 0.9993 | 0.9979 | 0.9986 | | 0.0069 | 2.0 | 125840 | 0.0096 | 0.9986 | 0.9993 | 0.9979 | 0.9986 | | 0.0058 | 3.0 | 188760 | 0.0084 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | | 0.0047 | 4.0 | 251680 | 0.0080 | 0.9989 | 0.9990 | 0.9988 | 0.9989 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
juancavallotti/t5-base-gec
25b3102f637927f3555e3a098577cc2b64d517d6
2022-06-08T15:26:04.000Z
[ "pytorch", "tensorboard", "onnx", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
juancavallotti
null
juancavallotti/t5-base-gec
39
1
transformers
6,538
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-base-gec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-gec This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
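### Example usage (sketch) The card does not document the expected input format; the snippet below feeds raw text without a task prefix, which is an assumption on our part:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("juancavallotti/t5-base-gec")
model = AutoModelForSeq2SeqLM.from_pretrained("juancavallotti/t5-base-gec")

# Illustrative ungrammatical sentence; the model should emit a corrected version.
inputs = tokenizer("She don't likes apples.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```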
niclas/ATC_1
6369daa401b2761f6a1f404745a39a51fad4caec
2022-06-09T08:52:37.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
niclas
null
niclas/ATC_1
39
null
transformers
6,539
Entry not found
wvangils/GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics
5889d9869dc3f3840b2aa3496b0122550233a58c
2022-06-17T11:20:35.000Z
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
wvangils
null
wvangils/GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics
39
null
transformers
6,540
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the [Cmotions - Beatles lyrics](https://huggingface.co/datasets/cmotions/Beatles_lyrics) dataset. It will complete an input prompt with Beatles-like text. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4438 | 1.0 | 18 | 1.8004 | | 2.1981 | 2.0 | 36 | 1.6985 | | 1.9766 | 3.0 | 54 | 1.6487 | | 1.8233 | 4.0 | 72 | 1.6384 | | 1.6137 | 5.0 | 90 | 1.6574 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
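### Example usage (sketch) A minimal generation call; the prompt and sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="wvangils/GPT-Neo-125m-Beatles-Lyrics-finetuned-newlyrics")
result = generator("Yesterday, all my troubles", max_length=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```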
GonzoJurezz/gpt2-horo
e26def2b2971f8c22e813de26efeb43e3d15e987
2022-06-18T21:30:54.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
GonzoJurezz
null
GonzoJurezz/gpt2-horo
39
null
transformers
6,541
Entry not found
ryo0634/bert-base-zip-dependency-0
5436dcfe6d178aef3142a48730f41e919d8a79b5
2022-06-13T03:38:10.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ryo0634
null
ryo0634/bert-base-zip-dependency-0
39
null
transformers
6,542
Entry not found
Intel/distilbert-base-cased-distilled-squad-int8-static
1a083d94199ed7c70554880354283ba9c23a4a47
2022-07-25T02:50:45.000Z
[ "pytorch", "distilbert", "question-answering", "dataset:squad", "transformers", "int8", "Intel® Neural Compressor", "PostTrainingStatic", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
Intel
null
Intel/distilbert-base-cased-distilled-squad-int8-static
39
null
transformers
6,543
--- license: apache-2.0 tags: - int8 - Intel® Neural Compressor - PostTrainingStatic datasets: - squad metrics: - f1 --- # INT8 DistilBERT base cased finetuned on Squad ### Post-training static quantization This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad). The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304. The linear module **distilbert.transformer.layer.1.ffn.lin2** falls back to fp32 to meet the 1% relative accuracy loss. ### Test result | |INT8|FP32| |---|:---:|:---:| | **Accuracy (eval-f1)** |86.0005|86.8373| | **Model size (MB)** |71.2|249| ### Load with Intel® Neural Compressor: ```python from neural_compressor.utils.load_huggingface import OptimizedModel int8_model = OptimizedModel.from_pretrained( 'Intel/distilbert-base-cased-distilled-squad-int8-static', ) ```
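### Example inference (sketch): Assuming the object returned by `OptimizedModel.from_pretrained` behaves like the original DistilBERT question-answering model, a minimal call could look like this — the question/context pair is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/distilbert-base-cased-distilled-squad-int8-static")
inputs = tokenizer(
    "Who wrote Hamlet?",
    "Hamlet is a tragedy written by William Shakespeare.",
    return_tensors="pt",
)
outputs = int8_model(**inputs)  # int8_model loaded as shown above
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
print(tokenizer.decode(inputs.input_ids[0][start:end]))
```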
mosesju/distilbert-base-uncased-finetuned-news
e52883c2776fbfc0299d3eb7ef6299970b24e602
2022-06-17T12:14:46.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:ag_news", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
mosesju
null
mosesju/distilbert-base-uncased-finetuned-news
39
null
transformers
6,544
--- license: apache-2.0 tags: - generated_from_trainer datasets: - ag_news metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-news results: - task: name: Text Classification type: text-classification dataset: name: ag_news type: ag_news args: default metrics: - name: Accuracy type: accuracy value: 0.9388157894736842 - name: F1 type: f1 value: 0.9388275184627893 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-news This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.2117 - Accuracy: 0.9388 - F1: 0.9388 ## Model description This model is intended to categorize news headlines into one of four categories; World, Sports, Science & Technology, or Business ## Intended uses & limitations The model is limited by the training data it used. If you use the model with a news story that falls outside of the four intended categories, it produces quite confused results. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2949 | 1.0 | 3750 | 0.2501 | 0.9262 | 0.9261 | | 0.1569 | 2.0 | 7500 | 0.2117 | 0.9388 | 0.9388 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
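### Example usage (sketch) A minimal classification call; the headline is illustrative, and the output labels correspond to the four AG News categories:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mosesju/distilbert-base-uncased-finetuned-news")
print(classifier("Stocks rally as tech earnings beat expectations"))
```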
emilys/BERTweet-WNUT17
1a748f44c714525686d30b777c36b4b5f8f40334
2022-06-15T22:31:22.000Z
[ "pytorch", "roberta", "token-classification", "en", "dataset:wnut_17", "transformers", "NER", "autotrain_compatible" ]
token-classification
false
emilys
null
emilys/BERTweet-WNUT17
39
null
transformers
6,545
--- language: - en tags: - NER datasets: - wnut_17 --- [bertweet-base](https://huggingface.co/vinai/bertweet-base) fine-tuned on WNUT 2017, following https://github.com/huggingface/transformers/tree/main/examples/legacy/token-classification
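A minimal inference sketch — the tweet is illustrative, and since BERTweet ships a slow tokenizer, the default per-token output is kept rather than an aggregation strategy:

```python
from transformers import pipeline

ner = pipeline("token-classification", model="emilys/BERTweet-WNUT17")
print(ner("Just landed in San Francisco with @elonmusk"))
```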
DingosGotMyBaby/uhn-twitch-chat
8ac35a645e15581fdc54d1df4cca84d0a9e9daeb
2022-06-24T05:08:58.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "license:mit" ]
text-generation
false
DingosGotMyBaby
null
DingosGotMyBaby/uhn-twitch-chat
39
null
transformers
6,546
--- license: mit --- # A model based on UberHaxorNova's Twitch chat Trained on over 700 vods worth of chat and with some scuffed filtering it became a 300mb dataset. ## Dataset The dataset was created by downloading all the available vods at the time of creation as a json file and stripping out all the chat messages into a simple line-by-line text file. ## Training This was trained using [aitextgen](https://github.com/minimaxir/aitextgen), created by [Max Woolf](https://github.com/minimaxir), using the example notebook found [here](https://colab.research.google.com/drive/15qBZx5y9rdaQSyWpsreMDnTiZ5IlN0zD?usp=sharing). Using GPT-2's 124M model as the base, it was trained for 3000 steps and produces an output scuffed enough to look like a real Twitch chat user. ## Use This was created as a fun little project for the discord server and as such, should only be used for fun and not to harm people. This model must also follow the ethics guide of the tool that created it https://docs.aitextgen.io/ethics/
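Since the checkpoint is a standard GPT-2 model, it can be sampled with the usual text-generation pipeline; the prompt and settings below are illustrative:

```python
from transformers import pipeline

chat = pipeline("text-generation", model="DingosGotMyBaby/uhn-twitch-chat")
print(chat("LUL ", max_length=40, do_sample=True)[0]["generated_text"])
```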
plncmm/mdeberta-wl-base-es
0682a32b193e16422ff25c16ce48eb417fb737e2
2022-06-26T13:49:00.000Z
[ "pytorch", "deberta-v2", "fill-mask", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
fill-mask
false
plncmm
null
plncmm/mdeberta-wl-base-es
39
null
transformers
6,547
--- license: mit tags: - generated_from_trainer model-index: - name: mdeberta-wl-base-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-wl-base-es This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.3.dev0 - Tokenizers 0.12.1
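### Example usage (sketch) A minimal fill-mask call; the Spanish clinical-style sentence is an assumption about the target domain:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="plncmm/mdeberta-wl-base-es")
print(fill_mask("El paciente presenta dolor [MASK] desde hace dos semanas."))
```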
hellennamulinda/eng-lug
2ed04490c7d5bdce995951c5d3642d4e00c7aff6
2022-07-11T06:45:00.000Z
[ "pytorch", "marian", "text2text-generation", "unk", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
hellennamulinda
null
hellennamulinda/eng-lug
39
null
transformers
6,548
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" co2_eq_emissions: 0.04087910671538076 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 1026034854 - CO2 Emissions (in grams): 0.04087910671538076 ## Validation Metrics - Loss: 1.0871405601501465 - Rouge1: 55.8225 - Rouge2: 34.1547 - RougeL: 54.4274 - RougeLsum: 54.408 - Gen Len: 23.178 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/hellennamulinda/autotrain-eng-lug-1070637495 ```
chradden/opencampus_age-detection
7173e20896f511734cdf9aacec4e5dd9bada8d86
2022-07-02T12:28:02.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index" ]
image-classification
false
chradden
null
chradden/opencampus_age-detection
39
null
transformers
6,549
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: opencampus_age-detection results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5892857313156128 --- # opencampus_age-detection Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### child portrait face ![child portrait face](images/child_portrait_face.jpg) #### generation x portrait face ![generation x portrait face](images/generation_x_portrait_face.jpg) #### millennials portrait face ![millennials portrait face](images/millennials_portrait_face.jpg) #### pensioner portrait face ![pensioner portrait face](images/pensioner_portrait_face.jpg) #### teenager portrait face ![teenager portrait face](images/teenager_portrait_face.jpg)
Olivia-umich/SpanKeptParaphraser
13f08b84c05c3f67fe915f5ee461e0f86787f428
2022-07-04T19:45:17.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
Olivia-umich
null
Olivia-umich/SpanKeptParaphraser
39
null
transformers
6,550
--- license: apache-2.0 ---
turingmachine/hupd-distilroberta-base
5bd1290dfaa958dd29ebb8641674d2c88df0176b
2022-07-05T15:30:46.000Z
[ "pytorch", "roberta", "fill-mask", "en", "dataset:HUPD/hupd", "transformers", "hupd", "distilroberta", "patents", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
turingmachine
null
turingmachine/hupd-distilroberta-base
39
1
transformers
6,551
--- language: - en thumbnail: "url to a thumbnail used in social sharing" tags: - hupd - roberta - distilroberta - patents license: cc-by-sa-4.0 datasets: - HUPD/hupd --- # HUPD DistilRoBERTa-Base Model This HUPD DistilRoBERTa model was fine-tuned on the HUPD dataset with a masked language modeling objective. It was originally introduced in [this paper](TBD). For more information about the Harvard USPTO Patent Dataset, please feel free to visit the [project website](https://patentdataset.org/) or the [project's GitHub repository](https://github.com/suzgunmirac/hupd). ### How to Use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline model = pipeline(task="fill-mask", model="turingmachine/hupd-distilroberta-base") model("Improved <mask> for playing a game of thumb wrestling.") ``` Here is the output: ```python [{'score': 0.4274042248725891, 'sequence': 'Improved method for playing a game of thumb wrestling.', 'token': 5448, 'token_str': ' method'}, {'score': 0.06967400759458542, 'sequence': 'Improved system for playing a game of thumb wrestling.', 'token': 467, 'token_str': ' system'}, {'score': 0.06849079579114914, 'sequence': 'Improved device for playing a game of thumb wrestling.', 'token': 2187, 'token_str': ' device'}, {'score': 0.04544765502214432, 'sequence': 'Improved apparatus for playing a game of thumb wrestling.', 'token': 26529, 'token_str': ' apparatus'}, {'score': 0.025765646249055862, 'sequence': 'Improved means for playing a game of thumb wrestling.', 'token': 839, 'token_str': ' means'}] ``` Alternatively, you can load the model and use it as follows: ```python import torch from transformers import AutoTokenizer, AutoModelForMaskedLM # cuda/cpu device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = AutoTokenizer.from_pretrained("turingmachine/hupd-distilroberta-base") model = AutoModelForMaskedLM.from_pretrained("turingmachine/hupd-distilroberta-base").to(device) TEXT = "Improved <mask> for playing a game of thumb wrestling." inputs = tokenizer(TEXT, return_tensors="pt").to(device) with torch.no_grad(): logits = model(**inputs).logits # retrieve indices of <mask> mask_token_indxs = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] for mask_idx in mask_token_indxs: predicted_token_id = logits[0, mask_idx].argmax(axis=-1) output = tokenizer.decode(predicted_token_id) print(f'Prediction for the <mask> token at index {mask_idx}: "{output}"') ``` Here is the output: ```python Prediction for the <mask> token at index 2: " method" ``` ## Citation For more information, please take a look at the original paper. * Paper: [The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications](TBD) * Authors: *Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber* * BibTeX: ``` @article{suzgun2022hupd, title={The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications}, author={Suzgun, Mirac and Melas-Kyriazi, Luke and Sarkar, Suproteem K and Kominers, Scott and Shieber, Stuart}, year={2022} } ```
KoichiYasuoka/deberta-large-japanese-wikipedia-luw-upos
88ecb4c66419d6db05f57a4b600d35300129b205
2022-07-23T14:44:08.000Z
[ "pytorch", "deberta-v2", "token-classification", "ja", "dataset:universal_dependencies", "transformers", "japanese", "wikipedia", "pos", "dependency-parsing", "license:cc-by-sa-4.0", "autotrain_compatible" ]
token-classification
false
KoichiYasuoka
null
KoichiYasuoka/deberta-large-japanese-wikipedia-luw-upos
39
null
transformers
6,552
--- language: - "ja" tags: - "japanese" - "wikipedia" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" widget: - text: "国境の長いトンネルを抜けると雪国であった。" --- # deberta-large-japanese-wikipedia-luw-upos ## Model Description This is a DeBERTa(V2) model pre-trained on Japanese Wikipedia and 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-large-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-wikipedia). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-wikipedia-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-japanese-wikipedia-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/deberta-large-japanese-wikipedia-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
Team-PIXEL/pixel-base-finetuned-pos-ud-arabic-padt
d2ad296d3b6a72c6f02d6ed2ceecbc4d9b251fee
2022-07-13T00:21:13.000Z
[ "pytorch", "pixel", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-pos-ud-arabic-padt
39
null
transformers
6,553
Entry not found
huggingtweets/angelsexytexty-janieclone
a7e5148042812eefbed5d2f88a1e6dc94966d728
2022-07-28T13:51:41.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/angelsexytexty-janieclone
39
null
transformers
6,554
--- language: en thumbnail: http://www.huggingtweets.com/angelsexytexty-janieclone/1659016297136/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1536389142287892481/N6kCwACw_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1539644507880411137/05M0Qc_I_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Columbine Janie & Angel Sexy Texty</div> <div style="text-align: center; font-size: 14px;">@angelsexytexty-janieclone</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Columbine Janie & Angel Sexy Texty. | Data | Columbine Janie | Angel Sexy Texty | | --- | --- | --- | | Tweets downloaded | 2475 | 171 | | Retweets | 1037 | 2 | | Short tweets | 343 | 14 | | Tweets kept | 1095 | 155 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vy2ixd4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angelsexytexty-janieclone's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yvxvuns) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yvxvuns/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/angelsexytexty-janieclone') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sam34738/bert-hindi-kabita
5b686be729d946bee89c52f01b134530e7aae210
2022-07-13T19:31:57.000Z
[ "pytorch", "bert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
sam34738
null
sam34738/bert-hindi-kabita
39
null
transformers
6,555
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-hindi-kabita results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-hindi-kabita This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1956 | 1.0 | 460 | 0.5352 | | 0.4796 | 2.0 | 920 | 0.4795 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Tokenizers 0.12.1
Team-PIXEL/pixel-base-finetuned-masakhaner-yor
eb79d078cd4db81bdbbacdf1258af0587e633a5a
2022-07-15T03:33:30.000Z
[ "pytorch", "pixel", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Team-PIXEL
null
Team-PIXEL/pixel-base-finetuned-masakhaner-yor
39
null
transformers
6,556
Entry not found
Bachstelze/Rapgenerator
4ab4eaef76e56d6edf1efa987e1f1695976b96cd
2022-07-20T15:39:36.000Z
[ "pytorch", "gpt2", "text-generation", "de", "dataset:genius lyrics", "transformers", "Text Generation", "license:mit" ]
text-generation
false
Bachstelze
null
Bachstelze/Rapgenerator
39
null
transformers
6,557
--- language: de widget: - text: "Ich mach′ ein'n Song auf mein′n Lieblings-MCs (jaja)" tags: - Text Generation datasets: - genius lyrics license: mit --- # GPT-Rapgenerator The Rapgenerator is trained for [nullsechsroy](https://genius.com/artists/Nullsechsroy) on an English [GPT2](https://huggingface.co/transformers/model_doc/gpt2.html) that was converted to a German [GerPT2](https://github.com/bminixhofer/gerpt2). We used the [genius](https://docs.genius.com/#/songs-h2) song lyrics from the following artists: ['Ace Tee', 'Aligatoah', 'AnnenMayKantereit', 'Apache 207', 'Azad', 'Badmómzjay', 'Bausa', 'Blumentopf', 'Blumio', 'Capital Bra', 'Casper', 'Celo & Abdi', 'Cro', 'Dardan', 'Dendemann', 'Die P', 'Dondon', 'Dynamite Deluxe', 'Edgar Wasser', 'Eko Fresh', 'Farid Bang', 'Favorite', 'Genetikk', 'Haftbefehl', 'Haiyti', 'Huss und Hodn', 'Jamule', 'Jamule', 'Juju', 'Kasimir1441', 'Katja Krasavice', 'Kay One', 'Kitty Kat', 'Kool Savas', 'LX & Maxwell', 'Leila Akinyi', 'Loredana', 'Loredana & Mozzik', 'Luciano', 'Marsimoto', 'Marteria', 'Morlockk Dilemma', 'Moses Pelham', 'Nimo', 'NullSechsRoy', 'Prinz Pi', 'SSIO', 'SXTN', 'Sabrina Setlur', 'Samy Deluxe', 'Sanito', 'Sebastian Fitzek', 'Shirin David', 'Summer Cem', 'T-Low', 'Ufo361', 'YBRE', 'YFG Pave']
shengnan/visualize-v0-pre10w-preseed1-ft2w-seed1
6e84747286bc734fa321720d6d917c408e87c12a
2022-07-17T05:51:16.000Z
[ "pytorch", "t5", "transformers" ]
null
false
shengnan
null
shengnan/visualize-v0-pre10w-preseed1-ft2w-seed1
39
null
transformers
6,558
Entry not found
Anonymous1111/bert-base-emotion
d235f103f8464f7b65f1416b207d12b6797973c5
2022-07-18T10:32:56.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
Anonymous1111
null
Anonymous1111/bert-base-emotion
39
null
transformers
6,559
--- license: apache-2.0 ---
BrunoHays/wav2vec2XLS-R-common_voice_10-fr
ac7fa7e969ef3f60fbd63f9a33947d40da38c126
2022-07-25T11:39:43.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
BrunoHays
null
BrunoHays/wav2vec2XLS-R-common_voice_10-fr
39
null
transformers
6,560
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2XLS-R-common_voice_10-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2XLS-R-common_voice_10-fr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.3934 - Wer: 0.2774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 2.72 | 400 | 1.3597 | 0.8722 | | 3.9444 | 5.44 | 800 | 0.5503 | 0.4751 | | 0.5425 | 8.16 | 1200 | 0.4261 | 0.3445 | | 0.2356 | 10.88 | 1600 | 0.4007 | 0.3042 | | 0.1345 | 13.61 | 2000 | 0.4101 | 0.2836 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
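A minimal transcription sketch; the audio path is a placeholder and the recording should be 16 kHz mono to match the XLS-R pre-training:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="BrunoHays/wav2vec2XLS-R-common_voice_10-fr")

# Placeholder path: any 16 kHz mono French recording.
print(asr("exemple_fr.wav")["text"])
```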
diwank/bartner
50124b1f2357e726a40f5ac1aac14fabd2fd09c5
2022-07-27T14:24:28.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
diwank
null
diwank/bartner
39
null
transformers
6,561
--- license: mit --- Bart + Gartner = Bartner
tdobrxl/ClinicBERT
4824b6802bfd113e24551840f61ba1a1ffab9659
2022-07-29T22:33:11.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
tdobrxl
null
tdobrxl/ClinicBERT
39
null
transformers
6,562
ClinicBERT has the same architecture as the RoBERTa model. It has been trained on clinical text and can be used for feature extraction from textual data. ## How to use ### Feature Extraction
```
from transformers import RobertaModel, RobertaTokenizer

model = RobertaModel.from_pretrained("tdobrxl/ClinicBERT")
tokenizer = RobertaTokenizer.from_pretrained("tdobrxl/ClinicBERT")

text = "Randomized Study of Shark Cartilage in Patients With Breast Cancer."
# Encode to PyTorch tensors; a plain tokenizer.encode() list cannot be passed to the model directly.
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state, pooler_output = outputs.last_hidden_state, outputs.pooler_output
```
### Masked Word Prediction
```
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tdobrxl/ClinicBERT", tokenizer="tdobrxl/ClinicBERT")
text = "this is the start of a beautiful <mask>."
fill_mask(text)
```
```[{'score': 0.26558592915534973, 'token': 363, 'token_str': ' study', 'sequence': 'this is the start of a beautiful study.'}, {'score': 0.06330082565546036, 'token': 2010, 'token_str': ' procedure', 'sequence': 'this is the start of a beautiful procedure.'}, {'score': 0.04393036663532257, 'token': 661, 'token_str': ' trial', 'sequence': 'this is the start of a beautiful trial.'}, {'score': 0.0363750196993351, 'token': 839, 'token_str': ' period', 'sequence': 'this is the start of a beautiful period.'}, {'score': 0.027248281985521317, 'token': 436, 'token_str': ' treatment', 'sequence': 'this is the start of a beautiful treatment.'}```
Emran/ClinicalBERT_description_full_ICD10_Code
2ba30aa7e482f86279d1889507dd1664f42c2520
2021-10-18T20:31:13.000Z
[ "pytorch", "bert", "transformers" ]
null
false
Emran
null
Emran/ClinicalBERT_description_full_ICD10_Code
38
null
transformers
6,563
Entry not found
Helsinki-NLP/opus-mt-en-hy
2b0a113b0968e30d7c6d4eedfe58007fe50ad819
2021-01-18T08:09:21.000Z
[ "pytorch", "marian", "text2text-generation", "en", "hy", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-hy
38
null
transformers
6,564
--- language: - en - hy tags: - translation license: apache-2.0 --- ### eng-hye * source group: English * target group: Armenian * OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md) * model: transformer-align * source language(s): eng * target language(s): hye * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.hye | 16.6 | 0.404 | ### System Info: - hf_name: eng-hye - source_languages: eng - target_languages: hye - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hy'] - src_constituents: {'eng'} - tgt_constituents: {'hye', 'hye_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt - src_alpha3: eng - tgt_alpha3: hye - short_pair: en-hy - chrF2_score: 0.40399999999999997 - bleu: 16.6 - brevity_penalty: 1.0 - ref_len: 5115.0 - src_name: English - tgt_name: Armenian - train_date: 2020-06-16 - src_alpha2: en - tgt_alpha2: hy - prefer_old: False - long_pair: eng-hye - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
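A minimal translation sketch using the standard MarianMT classes that Helsinki-NLP checkpoints load with; the input sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-hy")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-hy")

# Tokenize, translate, and decode a batch of one sentence.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```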
Helsinki-NLP/opus-mt-zh-fi
02f43b28f6765bc0397c4c2da1609e8c358243e1
2020-08-21T14:42:52.000Z
[ "pytorch", "marian", "text2text-generation", "zh", "fi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-zh-fi
38
null
transformers
6,565
--- language: - zh - fi tags: - translation license: apache-2.0 --- ### zho-fin * source group: Chinese * target group: Finnish * OPUS readme: [zho-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md) * model: transformer-align * source language(s): cmn_Bopo cmn_Hani cmn_Latn nan_Hani yue yue_Hani * target language(s): fin * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.zho.fin | 35.1 | 0.579 | ### System Info: - hf_name: zho-fin - source_languages: zho - target_languages: fin - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['zh', 'fi'] - src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'} - tgt_constituents: {'fin'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt - src_alpha3: zho - tgt_alpha3: fin - short_pair: zh-fi - chrF2_score: 0.579 - bleu: 35.1 - brevity_penalty: 0.935 - ref_len: 1847.0 - src_name: Chinese - tgt_name: Finnish - train_date: 2020-06-17 - src_alpha2: zh - tgt_alpha2: fi - prefer_old: False - long_pair: zho-fin - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
IMSyPP/hate_speech_slo
e20bf5141fd7718c657493702dccaaae795147f3
2022-05-16T06:13:11.000Z
[ "pytorch", "bert", "text-classification", "sl", "transformers", "license:mit" ]
text-classification
false
IMSyPP
null
IMSyPP/hate_speech_slo
38
null
transformers
6,566
--- pipeline_tag: text-classification inference: true widget: - text: "Sem Mark in živim v Ljubljani. Sem doktorski študent na Mednarodni podiplomski šoli Jožefa Stefana." language: - sl license: mit --- # Hate Speech Classifier for Social Media Content in Slovenian Language A monolingual model for hate speech classification of social media content in the Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on the multilingual CroSloEngual BERT pre-trained language model. ## Tokenizer During training, the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest using the same tokenizer for inference. ## Model output The model classifies each input into one of four distinct classes: * 0 - acceptable * 1 - inappropriate * 2 - offensive * 3 - violent
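A minimal classification sketch, assuming the CroSloEngual BERT tokenizer recommended above is available on the Hub as `EMBEDDIA/crosloengual-bert` and that the checkpoint uses the generic `LABEL_i` names; both are assumptions, so check the checkpoint's config before relying on the mapping:

```python
from transformers import pipeline

# Assumed Hub id for the CroSloEngual BERT tokenizer recommended by the card.
clf = pipeline(
    "text-classification",
    model="IMSyPP/hate_speech_slo",
    tokenizer="EMBEDDIA/crosloengual-bert",
)

# Assumed mapping from generic labels to the four classes listed above.
class_names = {"LABEL_0": "acceptable", "LABEL_1": "inappropriate",
               "LABEL_2": "offensive", "LABEL_3": "violent"}

result = clf("Sem Mark in živim v Ljubljani.")[0]
print(class_names.get(result["label"], result["label"]), result["score"])
```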
SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask_finetune
1aa14babad086d30fd7cc14836f816a857f171d4
2021-06-23T05:15:56.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers", "summarization" ]
summarization
false
SEBIS
null
SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask_finetune
38
null
transformers
6,567
--- tags: - summarization widget: - text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" --- # CodeTrans model for source code summarization csharp Pretrained model on programming language csharp using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets. ## Intended uses & limitations The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. 
## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
SaulLu/recreate-history
858f5225017881cd075c830c329427d1bea0b001
2021-05-28T16:37:37.000Z
[ "pytorch", "albert", "token-classification", "bn", "dataset:xtreme", "transformers", "collaborative", "bengali", "NER", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
SaulLu
null
SaulLu/recreate-history
38
null
transformers
6,568
--- language: bn tags: - collaborative - bengali - NER license: apache-2.0 datasets: xtreme metrics: - Loss - Accuracy - Precision - Recall --- # sahajBERT Named Entity Recognition ## Model description [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). Named Entities predicted by the model: | Label id | Label | |:--------:|:----:| |0 |O| |1 |B-PER| |2 |I-PER| |3 |B-ORG| |4 |I-ORG| |5 |B-LOC| |6 |I-LOC| ## Intended uses & limitations #### How to use You can use this model directly with a token classification pipeline:
```python
from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")

# Initialize model
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")

# Initialize pipeline
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model)

raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```
#### Limitations and bias <!-- Provide examples of latent issues and potential remediations. --> WIP ## Training data The model was initialized with the pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). ## Training procedure Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` --> ## Eval results - loss: 0.11714419722557068 - accuracy: 0.9772286821705426 - precision: 0.9585365853658536 - recall: 0.9651277013752456 - f1: 0.9618208516886931 ### BibTeX entry and citation info Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` -->
XSY/albert-base-v2-fakenews-discriminator
49230d434cd1df26a368a2284ada7aeeacd8f25b
2021-11-16T13:11:50.000Z
[ "pytorch", "albert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
XSY
null
XSY/albert-base-v2-fakenews-discriminator
38
null
transformers
6,569
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: albert-base-v2-fakenews-discriminator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-fakenews-discriminator The dataset: Fake and real news dataset https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset I use title and label to train the classifier label_0 : Fake news label_1 : Real news This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0910 - Accuracy: 0.9758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0452 | 1.0 | 1768 | 0.0910 | 0.9758 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
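A minimal usage sketch; the headline is illustrative and the label meanings follow the mapping stated above:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="XSY/albert-base-v2-fakenews-discriminator")

# Per the card: label_0 = fake news, label_1 = real news.
print(clf("Scientists discover city-sized alien spacecraft orbiting the Moon"))
```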
XSY/albert-base-v2-imdb-calssification
833e0badbb0807db0f9382e61bf95f537e36cd42
2021-11-13T09:10:38.000Z
[ "pytorch", "albert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
XSY
null
XSY/albert-base-v2-imdb-calssification
38
null
transformers
6,570
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: albert-base-v2-imdb-calssification results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93612 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-imdb-calssification label_0: negative label_1: positive This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1983 - Accuracy: 0.9361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.26 | 1.0 | 1563 | 0.1983 | 0.9361 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
aasem/wav2vec2-xls-r-300m-Urdu
4d8badf671cc4297a2f264e61da973fb82fc78b2
2022-03-01T08:28:25.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
aasem
null
aasem/wav2vec2-xls-r-300m-Urdu
38
null
transformers
6,571
--- datasets: - common_voice language: - ur library_name: transformers license: mit metrics: - wer model-index: - name: wav2vec2-xls-r-300m-Urdu results: - task: type: automatic-speech-recognition dataset: name: common_voice type: common_voice args: ur metrics: - type: wer value: 0.2459 - type: cer value: 0.0691 tags: - audio - automatic-speech-recognition - speech --- Fine-tuning of [Facebook's 300M model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8.0 Urdu dataset.
amtam0/timer-ner-en
c89664191fc6fcc9ab0755b487610121b5d171d4
2021-11-28T09:58:54.000Z
[ "pytorch", "en", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
amtam0
null
amtam0/timer-ner-en
38
1
flair
6,572
--- tags: - flair - token-classification - sequence-tagger-model language: en widget: - text: "12 sets of 2 minutes 38 minutes between each set" --- #### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5) 7-class NER English model using [Flair TransformerWordEmbeddings - distilroberta-base](https://github.com/flairNLP/flair/). | **tag** | **meaning** | |---------------------------------|-----------| | nb_rounds | Number of rounds | | duration_br_sd | Duration btwn rounds in seconds | | duration_br_min | Duration btwn rounds in minutes | | duration_br_hr | Duration btwn rounds in hours | | duration_wt_sd | workout duration in seconds | | duration_wt_min | workout duration in minutes | | duration_wt_hr | workout duration in hours | --- The dataset was created manually (perfectible). Sentences example : ``` 19 sets of 3 minutes 21 minutes between sets start 7 sets of 32 seconds create 13 sets of 26 seconds init 8 series of 3 hours 2 sets of 30 seconds 35 minutes between each cycle ... ```
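A minimal Flair inference sketch, assuming the tagger's annotation layer uses Flair's usual "ner" label type:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger directly from the Hugging Face Hub.
tagger = SequenceTagger.load("amtam0/timer-ner-en")

sentence = Sentence("12 sets of 2 minutes 38 minutes between each set")
tagger.predict(sentence)

# Print every tagged span with its predicted label.
for span in sentence.get_spans("ner"):
    print(span)
```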
asapp/sew-d-base-plus-400k
d93aa9415b25df69eba6a72df98b0bc30ad5c1de
2021-10-28T13:55:32.000Z
[ "pytorch", "sew-d", "feature-extraction", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "transformers", "speech", "license:apache-2.0" ]
feature-extraction
false
asapp
null
asapp/sew-d-base-plus-400k
38
null
transformers
6,573
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # SEW-D-base+ [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
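A minimal feature-extraction sketch; the waveform below is a silent placeholder standing in for real 16 kHz audio:

```python
import torch
from transformers import AutoFeatureExtractor, SEWDModel

extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-base-plus-400k")
model = SEWDModel.from_pretrained("asapp/sew-d-base-plus-400k")

waveform = [0.0] * 16000  # placeholder: one second of silence at 16 kHz
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```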
bhavikardeshna/multilingual-bert-base-cased-vietnamese
389f1f11625880b00408b04071e601d88936320b
2021-12-21T11:44:14.000Z
[ "pytorch", "bert", "question-answering", "arxiv:2112.09866", "transformers", "autotrain_compatible" ]
question-answering
false
bhavikardeshna
null
bhavikardeshna/multilingual-bert-base-cased-vietnamese
38
null
transformers
6,574
# BibTeX entry and citation info ``` @misc{pandya2021cascading, title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages}, author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt}, year={2021}, eprint={2112.09866}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
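A minimal extractive-QA sketch; the Vietnamese question/context pair is illustrative only:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="bhavikardeshna/multilingual-bert-base-cased-vietnamese")

# Illustrative example: "What country is Hanoi the capital of?"
result = qa(question="Hà Nội là thủ đô của nước nào?",
            context="Hà Nội là thủ đô của Việt Nam.")
print(result["answer"], result["score"])
```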
chinhon/pegasus-newsroom-summarizer_02
d414b047bcbd01efa2e62a062f6fe8d4d5b5cc9e
2021-11-06T02:55:10.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
chinhon
null
chinhon/pegasus-newsroom-summarizer_02
38
1
transformers
6,575
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: pegasus-newsroom-summarizer_02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-newsroom-summarizer_02 This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2204 - Rouge1: 52.4459 - Rouge2: 35.2568 - Rougel: 41.6213 - Rougelsum: 48.7859 - Gen Len: 98.0627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.3231 | 1.0 | 16113 | 1.2305 | 52.1565 | 34.8681 | 41.3189 | 48.4258 | 95.9049 | | 1.3001 | 2.0 | 32226 | 1.2186 | 52.4921 | 35.2661 | 41.6264 | 48.8168 | 98.9241 | | 1.2372 | 3.0 | 48339 | 1.2204 | 52.4459 | 35.2568 | 41.6213 | 48.7859 | 98.0627 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
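A minimal summarization sketch; the input string is a placeholder for a full news article:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chinhon/pegasus-newsroom-summarizer_02")

article = "Full text of a news article goes here ..."  # placeholder input
print(summarizer(article, max_length=128, truncation=True)[0]["summary_text"])
```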
danurahul/alex_gpt3_endoftext
9472e564ff4306b8ef9387faa10dddbcb636ef8b
2021-05-21T15:20:28.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
danurahul
null
danurahul/alex_gpt3_endoftext
38
null
transformers
6,576
Entry not found
deepset/tinyroberta-squad2-step1
8289712ae4b3799ae63cdf0cdf1b321d0c9baac7
2022-02-15T14:29:07.000Z
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/tinyroberta-squad2-step1
38
null
transformers
6,577
Entry not found
educhav/Austin-DialoGPT-small
72b5f77232ab9c085aa141971333de2372119bc9
2022-01-22T07:00:24.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
educhav
null
educhav/Austin-DialoGPT-small
38
null
transformers
6,578
--- tags: - conversational --- # Austin Medina
flax-community/roberta-hindi
81f8b41477e02b631eac5fbbc6493ee57c8108ac
2021-07-20T12:50:29.000Z
[ "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
flax-community
null
flax-community/roberta-hindi
38
1
transformers
6,579
--- widget: - text: "मुझे उनसे बात करना <mask> अच्छा लगा" - text: "हम आपके सुखद <mask> की कामना करते हैं" - text: "सभी अच्छी चीजों का एक <mask> होता है" --- # RoBERTa base model for Hindi language Pretrained model on Hindi language using a masked language modeling (MLM) objective. [A more interactive & comparison demo is available here](https://huggingface.co/spaces/flax-community/roberta-hindi). > This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-roberta-from-scratch-in-hindi/7091), organized by [Hugging Face](https://huggingface.co/) and TPU usage sponsored by Google. ## Model description RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data(a combination of **mc4, oscar and indic-nlp** datasets) ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi') >>> unmasker("हम आपके सुखद <mask> की कामना करते हैं") [{'score': 0.3310680091381073, 'sequence': 'हम आपके सुखद सफर की कामना करते हैं', 'token': 1349, 'token_str': ' सफर'}, {'score': 0.15317578613758087, 'sequence': 'हम आपके सुखद पल की कामना करते हैं', 'token': 848, 'token_str': ' पल'}, {'score': 0.07826550304889679, 'sequence': 'हम आपके सुखद समय की कामना करते हैं', 'token': 453, 'token_str': ' समय'}, {'score': 0.06304813921451569, 'sequence': 'हम आपके सुखद पहल की कामना करते हैं', 'token': 404, 'token_str': ' पहल'}, {'score': 0.058322224766016006, 'sequence': 'हम आपके सुखद अवसर की कामना करते हैं', 'token': 857, 'token_str': ' अवसर'}] ``` ## Training data The RoBERTa Hindi model was pretrained on the reunion of the following datasets: - [OSCAR](https://huggingface.co/datasets/oscar) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. - [mC4](https://huggingface.co/datasets/mc4) is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. - [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark. - [Samanantar](https://indicnlp.ai4bharat.org/samanantar/) is a parallel corpora collection for Indic language. - [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summary collected from Hindi News Websites. - [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines collected from Hindi News Websites. - [Old Newspapers Hindi](https://www.kaggle.com/crazydiv/oldnewspapershindi) is a cleaned subset of HC Corpora newspapers. ## Training procedure ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`. - We had to perform cleanup of **mC4** and **oscar** datasets by removing all non hindi (non Devanagari) characters from the datasets. 
- We tried to filter out the evaluation set of WikiNER from the [IndicGlue](https://indicnlp.ai4bharat.org/indic-glue/) benchmark by [manual labelling](https://github.com/amankhandelia/roberta_hindi/blob/master/wikiner_incorrect_eval_set.csv) of cases where the actual labels were not correct, and by modifying the [downstream evaluation dataset](https://github.com/amankhandelia/roberta_hindi/blob/master/utils.py). The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ### Pretraining The model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores). A randomized shuffle of the combined dataset of **mC4, oscar** and the other datasets listed above was used to train the model. Training logs are present in [wandb](https://wandb.ai/wandb/hf-flax-roberta-hindi). ## Evaluation Results RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below. | Task | Task Type | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi | |-------------------------|----------------------|-----------|------------|-------------------------------|-----------------------|---------------| | BBC News Classification | Genre Classification | **76.44** | 66.86 | **77.6** | 64.9 | 73.67 | | WikiNER | Token Classification | - | 90.68 | **95.09** | 89.61 | **92.76** | | IITP Product Reviews | Sentiment Analysis | **78.01** | 73.23 | **78.39** | 66.16 | 75.53 | | IITP Movie Reviews | Sentiment Analysis | 60.97 | 52.26 | **70.65** | 49.35 | **61.29** | ## Team Members - Aman K ([amankhandelia](https://huggingface.co/amankhandelia)) - Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk)) - Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv)) - Prateek Agrawal ([prateekagrawal](https://huggingface.co/prateekagrawal)) - Rahul Dev ([mlkorra](https://huggingface.co/mlkorra)) ## Credits Huge thanks to Hugging Face 🤗 & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) & [Patrick von Platen](https://huggingface.co/patrickvonplaten) for mentoring during the whole week. <img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:medium>
google/bert_uncased_L-10_H-256_A-4
652c66397cc8b8db62c1c35ab290d55ef3239c44
2021-05-19T17:23:44.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-10_H-256_A-4
38
null
transformers
6,580
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
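A minimal loading sketch for this particular miniature; the sentence is illustrative, and the final dimension should equal the checkpoint's hidden size of 256:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-10_H-256_A-4")
model = AutoModel.from_pretrained("google/bert_uncased_L-10_H-256_A-4")

inputs = tokenizer("Compact BERT models can still learn well.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 256) for this checkpoint
```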
google/tapas-medium-finetuned-wikisql-supervised
c0429384d7ca2bd8bfe64a8e1d2f00d519299b5b
2021-11-29T13:06:28.000Z
[ "pytorch", "tf", "tapas", "table-question-answering", "en", "dataset:wikisql", "arxiv:2004.02349", "arxiv:2010.00571", "arxiv:1709.00103", "transformers", "license:apache-2.0" ]
table-question-answering
false
google
null
google/tapas-medium-finetuned-wikisql-supervised
38
null
transformers
6,581
---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- wikisql
---

# TAPAS medium model fine-tuned on WikiSQL (in a supervised fashion)

This model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_medium` (intermediate pre-training, absolute position embeddings).

Disclaimer: The team releasing TAPAS did not write a model card for this model, so this model card has been written by the Hugging Face team and contributors.

## Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA and WikiSQL.

## Intended uses & limitations

You can use this model for answering questions related to a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website.

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Question [SEP] Flattened table [SEP]
```

The authors first converted the WikiSQL dataset into the format of SQA using automatic conversion scripts.

### Fine-tuning

The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).

### BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
      title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
      author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
      year={2020},
      eprint={2004.02349},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
      title={Understanding tables with intermediate pre-training},
      author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
      year={2020},
      eprint={2010.00571},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```bibtex
@article{DBLP:journals/corr/abs-1709-00103,
  author    = {Victor Zhong and Caiming Xiong and Richard Socher},
  title     = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning},
  journal   = {CoRR},
  volume    = {abs/1709.00103},
  year      = {2017},
  url       = {http://arxiv.org/abs/1709.00103},
  archivePrefix = {arXiv},
  eprint    = {1709.00103},
  timestamp = {Mon, 13 Aug 2018 16:48:41 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
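As a rough, hedged sketch of the table-QA usage mentioned under "Intended uses & limitations" above — the table, query, and variable names are made up for illustration, and note that TAPAS requires every table cell to be a string:

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-medium-finetuned-wikisql-supervised"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects every cell of the table to be a string.
table = pd.DataFrame({
    "City": ["Paris", "London", "Berlin"],
    "Population": ["2161000", "8982000", "3645000"],
})
queries = ["What is the population of London?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Map the logits back to table cell coordinates and an aggregation operator.
coordinates, aggregation_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(coordinates, aggregation_indices)
```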
gurkan08/bert-turkish-text-classification
1a8ddc4b9c3818cb734b5c14b90141338a8e64d1
2021-05-19T17:50:18.000Z
[ "pytorch", "jax", "bert", "text-classification", "tr", "transformers" ]
text-classification
false
gurkan08
null
gurkan08/bert-turkish-text-classification
38
1
transformers
6,582
---
language: tr
---

# Turkish News Text Classification

A Turkish text classification model obtained by fine-tuning the Turkish BERT model (dbmdz/bert-base-turkish-cased).

# Dataset

The dataset consists of 11 classes obtained from https://www.trthaber.com/. The model was trained on the 6 most distinctive classes. The dataset can be accessed at https://github.com/gurkan08/datasets/tree/master/trt_11_category.

    label_dict = {
        'LABEL_0': 'ekonomi',
        'LABEL_1': 'spor',
        'LABEL_2': 'saglik',
        'LABEL_3': 'kultur_sanat',
        'LABEL_4': 'bilim_teknoloji',
        'LABEL_5': 'egitim'
    }

70% of the data was used for training and 30% for testing.

train f1-weighted score = 97%

test f1-weighted score = 94%

# Usage

    from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("gurkan08/bert-turkish-text-classification")
    model = AutoModelForSequenceClassification.from_pretrained("gurkan08/bert-turkish-text-classification")

    nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

    text = ["Süper Lig'in 6. haftasında Sivasspor ile Çaykur Rizespor karşı karşıya geldi...",
            "Son 24 saatte 69 kişi Kovid-19 nedeniyle yaşamını yitirdi, 1573 kişi iyileşti"]

    out = nlp(text)

    label_dict = {
        'LABEL_0': 'ekonomi',
        'LABEL_1': 'spor',
        'LABEL_2': 'saglik',
        'LABEL_3': 'kultur_sanat',
        'LABEL_4': 'bilim_teknoloji',
        'LABEL_5': 'egitim'
    }

    results = []
    for result in out:
        result['label'] = label_dict[result['label']]
        results.append(result)
    print(results)

    # [{'label': 'spor', 'score': 0.9992026090621948}, {'label': 'saglik', 'score': 0.9972177147865295}]
huggingtweets/bitcoin
ef6f3c42f4645f74fca90669420836ae1b9032aa
2021-05-21T20:44:55.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/bitcoin
38
null
transformers
6,583
--- language: en thumbnail: https://www.huggingtweets.com/bitcoin/1612625608055/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/421692600446619648/dWAbC2wg_400x400.jpeg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Bitcoin 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@bitcoin bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@bitcoin's tweets](https://twitter.com/bitcoin). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>3206</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>1190</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>390</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>1626</td> </tr> </tbody> </table> [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9fss3789/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bitcoin's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pqrlo2u) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pqrlo2u/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/bitcoin'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/michaeljackson
f29d75b3690f5a345e11f376ee47e983010d8249
2021-05-22T14:22:13.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/michaeljackson
38
null
transformers
6,584
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo_share.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/556179314660478976/l_MadSiU_400x400.jpeg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Michael Jackson 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@michaeljackson bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@michaeljackson's tweets](https://twitter.com/michaeljackson). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>2671</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>24</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>32</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>2615</td> </tr> </tbody> </table> [Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3lg17rb5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @michaeljackson's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/lnx54cjj) for full transparency and reproducibility. At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/lnx54cjj/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/michaeljackson'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
iamtarun/wav2vec-osr
74950d7c4d19662200990eba4085543a321296b1
2021-11-04T15:08:10.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "transformers", "audio", "speech to text", "license:apache-2.0" ]
automatic-speech-recognition
false
iamtarun
null
iamtarun/wav2vec-osr
38
null
transformers
6,585
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- speech to text
license: apache-2.0
widget:
- example_title: OSR sample 1
  src: https://github.com/TheSoundOfAIOSR/rg_speech_to_text/blob/main/data/finetuning-dataset/audiofiles/TA-5.wav?raw=true
- example_title: OSR sample 2
  src: https://github.com/TheSoundOfAIOSR/rg_speech_to_text/blob/main/data/finetuning-dataset/audiofiles/TK-17.wav?raw=true
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---

# Wav2Vec-OSR

Facebook's wav2vec2 model, fine-tuned for the speech-to-text module of [The Sound Of AI open source research group](https://thesoundofaiosr.github.io/). The original base model is pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

## Paper

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

## Abstract

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. The original model can also be found in the Hugging Face public model repository [here](https://huggingface.co/facebook/wav2vec2-base-960h).

## Usage

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, Wav2Vec2CTCTokenizer
from datasets import load_dataset
import soundfile as sf
import torch

# load tokenizer, data processor and model
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("iamtarun/wav2vec-osr")
processor = Wav2Vec2Processor.from_pretrained("iamtarun/wav2vec-osr")
model = Wav2Vec2ForCTC.from_pretrained("iamtarun/wav2vec-osr")
model = model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

# speech data is passed to the data processor, whose output is then fed to the model;
# the model expects 16 kHz audio, hence sampling_rate=16000
input_values = processor(ds["speech"][:2],
                         sampling_rate=16000,
                         padding="longest",
                         return_tensors="pt").input_values.to(device)

# retrieve logits
logits = model(input_values).logits

# take argmax and decode the first sample
predicted_ids = torch.argmax(logits, dim=-1)
transcriptions = tokenizer.decode(predicted_ids[0])
print(transcriptions)
```
jordan-m-young/buzz-article-gpt-2
1893609bd968d9f413d0cc84c4e0859c67907024
2021-05-23T06:03:38.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
jordan-m-young
null
jordan-m-young/buzz-article-gpt-2
38
null
transformers
6,586
Entry not found
kssteven/ibert-roberta-large
202dedcec60c0aece82a3c4d424cb7505efcb31f
2021-05-10T05:34:01.000Z
[ "pytorch", "ibert", "fill-mask", "arxiv:1907.11692", "arxiv:2101.01321", "transformers", "autotrain_compatible" ]
fill-mask
false
kssteven
null
kssteven/ibert-roberta-large
38
null
transformers
6,587
# I-BERT large model

This model, `ibert-roberta-large`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321). I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic. In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations. This can result in up to 4x inference speed up as compared to the floating point counterpart when tested on an Nvidia T4 GPU. The best model parameters searched via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.

## Finetuning Procedure

Finetuning of I-BERT consists of 3 stages: (1) full-precision finetuning from the pretrained model on a downstream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.

### Full-precision finetuning

Full-precision finetuning of I-BERT is similar to RoBERTa finetuning. For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.

```
python examples/text-classification/run_glue.py \
  --model_name_or_path kssteven/ibert-roberta-large \
  --task_name MRPC \
  --do_eval \
  --do_train \
  --evaluation_strategy epoch \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --save_steps 115 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --output_dir $OUTPUT_DIR
```

### Model Quantization

Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`.

```
{
  "_name_or_path": "kssteven/ibert-roberta-large",
  "architectures": [
    "IBertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "finetuning_task": "mrpc",
  "force_dequant": "none",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "ibert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "quant_mode": true,
  "tokenizer_class": "RobertaTokenizer",
  "transformers_version": "4.4.0.dev0",
  "type_vocab_size": 1,
  "vocab_size": 50265
}
```

Then, your model will automatically run in integer-only mode when you load the checkpoint. Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory. Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.

### Integer-only finetuning (Quantization-aware training)

Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified. Note that the only difference in the example command below is `model_name_or_path`.

```
python examples/text-classification/run_glue.py \
  --model_name_or_path $CHECKPOINT_DIR \
  --task_name MRPC \
  --do_eval \
  --do_train \
  --evaluation_strategy epoch \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --save_steps 115 \
  --learning_rate 1e-6 \
  --num_train_epochs 10 \
  --output_dir $OUTPUT_DIR
```

If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).

```
@article{kim2021bert,
  title={I-BERT: Integer-only BERT Quantization},
  author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
  journal={arXiv preprint arXiv:2101.01321},
  year={2021}
}
```
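For quick experimentation, a minimal sketch of loading this checkpoint in Python (the input sentence is an arbitrary placeholder; `AutoModel` resolves to the I-BERT model class shipped with `transformers`):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-large")
model = AutoModel.from_pretrained("kssteven/ibert-roberta-large")

inputs = tokenizer("Integer-only inference keeps the whole forward pass in INT8.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```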
describeai/gemini
aa5c52e95888664ccbb83d868de3b7b26ae123c3
2022-05-14T00:46:52.000Z
[ "pytorch", "t5", "text2text-generation", "en", "transformers", "Explain code", "Code Summarization", "Summarization", "license:mit", "autotrain_compatible" ]
text2text-generation
false
describeai
null
describeai/gemini
38
0
transformers
6,588
---
language: en
tags:
- Explain code
- Code Summarization
- Summarization
license: mit
---

# Gemini

For an in-depth understanding of our model and methods, please see our blog [here](https://www.describe-ai.com/gemini).

## Model description

Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing/explaining short to medium code snippets in:

- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go

And outputs a description in English.

## Intended uses

Gemini without any additional fine-tuning is capable of explaining code in a sentence or two, and typically performs best on Python and Javascript. We recommend using Gemini for simple code explanation, documentation, or producing more synthetic data to improve its explanations.

### How to use

You can use this model directly with a pipeline for Text2Text generation, as shown below:

```python
from transformers import pipeline, set_seed

summarizer = pipeline('text2text-generation', model='describeai/gemini')

code = "print('hello world!')"

response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])
```

Which should yield something along the lines of:

```
Summarized code: The following code is greeting the world.
```

### Model sizes

- Gemini (this repo): 770 Million Parameters
- Gemini-Small: 220 Million Parameters

### Limitations

Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect that with more training data this could be mitigated, producing better results.

### About Us

At Describe.ai, we are focused on building Artificial Intelligence systems that can understand language as well as humans do. While it is a long path, we plan to contribute our findings and our API to the Open Source community.
mervenoyan/PubMedBERT-QNLI
8bd833ad8f65c11553c0b7c9230323aed04c4df9
2021-08-26T10:27:15.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
mervenoyan
null
mervenoyan/PubMedBERT-QNLI
38
7
transformers
6,589
# PubMedBERT Abstract + Full Text Fine-Tuned on the QNLI Task

Use case: you can use it to search through a document for a given question, to see if your question is answered in that document. LABEL_0 is "not entailment", meaning your question is not answered by the context, and LABEL_1 is "entailment", meaning your question is answered.

> Example input: [CLS] Your question [SEP] The context to be searched in [SEP]

Link to the original model: https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext

Credits to the paper:

    @misc{pubmedbert,
      author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
      title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing},
      year = {2020},
      eprint = {arXiv:2007.15779},
    }
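Since the card shows the raw input format but no code, here is a minimal hedged sketch of running the model with `transformers`; the question/context pair is invented for illustration, and passing the two texts as a pair makes the tokenizer insert the `[CLS]`/`[SEP]` tokens shown above automatically:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "mervenoyan/PubMedBERT-QNLI"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

question = "Does aspirin reduce the risk of myocardial infarction?"
context = ("Aspirin has been shown to lower the incidence of myocardial "
           "infarction in several randomized trials.")

# Passing the pair produces: [CLS] question [SEP] context [SEP]
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print("entailment (answered)" if pred == 1 else "not entailment (not answered)")
```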
mrm8488/spanbert-base-finetuned-tacred
52aefc31f92dda6435312f6c785ddb7506e6d218
2021-05-20T00:53:07.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "en", "arxiv:1907.10529", "transformers" ]
feature-extraction
false
mrm8488
null
mrm8488/spanbert-base-finetuned-tacred
38
null
transformers
6,590
--- language: en thumbnail: --- # SpanBERT base fine-tuned on TACRED [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [TACRED](https://nlp.stanford.edu/projects/tacred/) dataset by [them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution) ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Dataset 📚 [TACRED](https://nlp.stanford.edu/projects/tacred/) A large-scale relation extraction dataset with 106k+ examples over 42 TAC KBP relation types. ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_tacred.py \ --do_train \ --do_eval \ --data_dir <TACRED_DATA_DIR> \ --model spanbert-base-cased \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 10 \ --max_seq_length 128 \ --output_dir tacred_dir \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | **68.2** (this one) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
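The checkpoint is exposed here under the feature-extraction pipeline. As a hedged sketch (the sentence is a placeholder, and full TACRED-style relation classification still requires the classification head and entity-marker preprocessing from the SpanBERT repository linked above — this snippet only extracts encoder features, assuming the hosted repo ships a standard BERT tokenizer):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "mrm8488/spanbert-base-finetuned-tacred"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "Bill Gates founded Microsoft in 1975."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(features.shape)
```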
mrm8488/spanish-t5-small-sqac-for-qa
a1da1f72dc52c2812517103a37ce98426d039430
2021-09-03T10:22:10.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "es", "dataset:BSC-TeMU/SQAC", "transformers", "QA", "Q&A", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/spanish-t5-small-sqac-for-qa
38
3
transformers
6,591
---
language: es
tags:
- QA
- Q&A
datasets:
- BSC-TeMU/SQAC
widget:
- text: "question: ¿Cuál es el nombre que se le da a la unidad morfológica y funcional de los seres vivos? context: La célula (del latín cellula, diminutivo de cella, ‘celda’) es la unidad morfológica y funcional de todo ser vivo. De hecho, la célula es el elemento de menor tamaño que puede considerarse vivo.\u200b De este modo, puede clasificarse a los organismos vivos según el número de células que posean: si solo tienen una, se les denomina unicelulares (como pueden ser los protozoos o las bacterias, organismos microscópicos); si poseen más, se les llama pluricelulares. En estos últimos el número de células es variable: de unos pocos cientos, como en algunos nematodos, a cientos de billones (1014), como en el caso del ser humano. Las células suelen poseer un tamaño de 10 µm y una masa de 1 ng, si bien existen células mucho mayores."
---

# Spanish T5 (small) fine-tuned on **SQAC** for Spanish **QA** 📖❓

[spanish-T5-small](https://huggingface.co/flax-community/spanish-t5-small) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) for the **Q&A** downstream task.

## Details of Spanish T5 (small)

A T5 (small)-like architecture trained from scratch on [large_spanish_corpus](https://huggingface.co/datasets/large_spanish_corpus) for the **HuggingFace/Flax/Jax Week**.

## Details of the dataset 📚

This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.

The sources of the contexts are:

* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix from different newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).

This dataset can be used to build extractive QA systems.

## Results on test dataset 📝

| Metric | # Value |
| ------ | --------- |
| **BLEU** | **41.94** |

## Model in Action 🚀

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
ckpt = 'mrm8488/spanish-t5-small-sqac-for-qa'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt).to(device)

def get_answer(question, context):
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], padding='max_length',
                         truncation=True, max_length=512, return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'].to(device),
                            attention_mask=features['attention_mask'].to(device))
    return tokenizer.decode(output[0], skip_special_tokens=True)

context = '''
La ex codirectora del grupo de investigación de IA ética de Google, Margaret Mitchell,
quien fue despedida en febrero después de una controversia sobre un artículo crítico
del que fue coautora, se unirá a HuggingFace para ayudar a que los algoritmos de IA
sean más justos.
'''
question = '¿Qué hará Margaret Mitchell en HuggingFace?'

# note the argument order matches the function signature: (question, context)
print(get_answer(question, context))

# ayudar a que los algoritmos de ia sean más justos
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-squadv2
7bea82cd43074683a78620325dc0474dc97d8e85
2021-05-06T16:25:28.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:squad_v2", "arxiv:1910.10683", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-small-finetuned-squadv2
38
1
transformers
6,592
---
language: en
datasets:
- squad_v2
---

# T5-small fine-tuned on SQuAD v2

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [(small)](https://huggingface.co/t5-small) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

Dataset ID: ```squad_v2``` from [Huggingface/NLP](https://github.com/huggingface/nlp)

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| squad_v2 | train | 130319    |
| squad_v2 | valid | 11873     |

How to load it from [nlp](https://github.com/huggingface/nlp)

```python
train_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.VALIDATION)
```

Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28)

## Results 📝

| Metric | # Value   |
| ------ | --------- |
| **EM** | **69.46** |
| **F1** | **73.01** |

## Model in Action 🚀

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-squadv2")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-squadv2")

def get_answer(question, context):
    input_text = "question: %s context: %s </s>" % (question, context)
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'])
    return tokenizer.decode(output[0])

context = "Manuel has created RuPERTa-base (a Spanish RoBERTa) with the support of HF-Transformers and Google"
question = "Who has supported Manuel?"

get_answer(question, context)

# output: 'HF-Transformers and Google'
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain
ncduy/roberta-imdb-sentiment-analysis
031415a39e5f4a2e92fff93c5637fbfb28c78674
2021-08-09T10:54:50.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
ncduy
null
ncduy/roberta-imdb-sentiment-analysis
38
null
transformers
6,593
Entry not found
nguyenvulebinh/spelling-oov
48687d18de1e89f1d0ddba0eb4e686d3a4d67264
2021-12-15T17:00:58.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
nguyenvulebinh
null
nguyenvulebinh/spelling-oov
38
null
transformers
6,594
```python from transformers import EncoderDecoderModel from importlib.machinery import SourceFileLoader from transformers.file_utils import cached_path, hf_bucket_url import torch import os ## Load model & tokenizer cache_dir='./cache' model_name='nguyenvulebinh/spelling-oov' def download_tokenizer_files(): resources = ['envibert_tokenizer.py', 'dict.txt', 'sentencepiece.bpe.model'] for item in resources: if not os.path.exists(os.path.join(cache_dir, item)): tmp_file = hf_bucket_url(model_name, filename=item) tmp_file = cached_path(tmp_file,cache_dir=cache_dir) os.rename(tmp_file, os.path.join(cache_dir, item)) download_tokenizer_files() spell_tokenizer = SourceFileLoader("envibert.tokenizer",os.path.join(cache_dir,'envibert_tokenizer.py')).load_module().RobertaTokenizer(cache_dir) spell_model = EncoderDecoderModel.from_pretrained(model_name) def oov_spelling(word, num_candidate=1): result = [] inputs = spell_tokenizer([word.lower()]) input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] inputs = { "input_ids": torch.tensor(input_ids), "attention_mask": torch.tensor(attention_mask) } outputs = spell_model.generate(**inputs, num_return_sequences=num_candidate) for output in outputs.cpu().detach().numpy().tolist(): result.append(spell_tokenizer.sp_model.DecodePieces(spell_tokenizer.decode(output, skip_special_tokens=True).split())) return result oov_spelling('spacespeaker') # output: ['x pây x pếch cơ'] ```
nyu-mll/roberta-med-small-1M-3
1c7750b0a6258ca88958e1b9741f77d07e0fd4d3
2021-05-20T19:09:09.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
nyu-mll
null
nyu-mll/roberta-med-small-1M-3
38
null
transformers
6,595
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
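As a quick sanity check, any of the checkpoints in the table above can be loaded for masked-word prediction. A minimal sketch follows; the prompt is arbitrary, and given the ~139 validation perplexity of this 1M-token model, expect noisy completions:

```python
from transformers import pipeline

# Swap in any checkpoint ID from the table above.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-med-small-1M-3")

# RoBERTa-style models use <mask> as the mask token.
for pred in fill_mask("The capital of France is <mask>.")[:3]:
    print(pred["token_str"], round(pred["score"], 4))
```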
pierreguillou/t5-base-qa-squad-v1.1-portuguese
21c3e49be1ce1a083ba4278427dc7799592ab33c
2022-01-27T14:38:28.000Z
[ "pytorch", "t5", "text2text-generation", "pt", "dataset:squad", "dataset:squad_v1_pt", "transformers", "qa", "model-index", "autotrain_compatible" ]
text2text-generation
false
pierreguillou
null
pierreguillou/t5-base-qa-squad-v1.1-portuguese
38
4
transformers
6,596
---
language:
- pt
tags:
- text2text-generation
- t5
- pytorch
- qa
datasets:
- squad
- squad_v1_pt
metrics:
- precision
- recall
- f1
- accuracy
- squad
model-index:
- name: checkpoints
  results:
  - task:
      name: text2text-generation
      type: text2text-generation
    dataset:
      name: squad
      type: squad
    metrics:
    - name: f1
      type: f1
      value: 79.3
    - name: exact-match
      type: exact-match
      value: 67.3983
widget:
- text: "question: Quando começou a pandemia de Covid-19 no mundo? context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
- text: "question: Onde foi descoberta a Covid-19? context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
---

# T5 base finetuned for Question Answering (QA) on SQuAD v1.1 Portuguese

![Example of what can be done with a T5 model (for example: Question Answering finetuned on SQuAD v1.1 in Portuguese)](https://miro.medium.com/max/2000/1*zp9niaQzWNo8Pipd8zvL1w.png)

## Introduction

**t5-base-qa-squad-v1.1-portuguese** is a QA (Question Answering) model in Portuguese that was finetuned on 27/01/2022 in Google Colab from the model [unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) of Neuralmind on the dataset SQuAD v1.1 in Portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/), by using a Text2Text-Generation objective.

Due to the small size of T5 base and the finetuning dataset, the model overfitted before reaching the end of training. Here are the overall final metrics on the validation dataset:

- **f1**: 79.3
- **exact_match**: 67.3983

Check our other QA models in Portuguese finetuned on SQuAD v1.1:

- [Portuguese BERT base cased QA](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese)
- [Portuguese BERT large cased QA](https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese)
- [Portuguese ByT5 small QA](https://huggingface.co/pierreguillou/byt5-small-qa-squad-v1.1-portuguese)

## Blog post

[NLP nas empresas | Como eu treinei um modelo T5 em português na tarefa QA no Google Colab](https://medium.com/@pierre_guillou/nlp-nas-empresas-como-eu-treinei-um-modelo-t5-em-portugu%C3%AAs-na-tarefa-qa-no-google-colab-e8eb0dc38894) (27/01/2022)

## Widget & App

You can test this model in the widget on this page.

You can also use the [QA App | T5 base pt](https://huggingface.co/spaces/pierreguillou/question-answering-portuguese-t5-base), which allows using the model T5 base finetuned on the QA task with the SQuAD v1.1 pt dataset.

## Using the model for inference in production

````
# install pytorch: check https://pytorch.org/
# !pip install transformers

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# model & tokenizer
model_name = "pierreguillou/t5-base-qa-squad-v1.1-portuguese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# parameters
max_target_length=32
num_beams=1
early_stopping=True

input_text = 'question: Quando foi descoberta a Covid-19? context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano.'
label = '1 de dezembro de 2019'

inputs = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(inputs["input_ids"],
                         max_length=max_target_length,
                         num_beams=num_beams,
                         early_stopping=early_stopping
                        )

pred = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)

print('true answer |', label)
print('pred        |', pred)
````

You can use pipeline, too. However, it seems to have an issue regarding the max_length of the input sequence.

````
!pip install transformers
import transformers
from transformers import pipeline

# model
model_name = "pierreguillou/t5-base-qa-squad-v1.1-portuguese"

# parameters
max_target_length=32
num_beams=1
early_stopping=True
clean_up_tokenization_spaces=True

input_text = 'question: Quando foi descoberta a Covid-19? context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano.'
label = '1 de dezembro de 2019'

text2text = pipeline(
    "text2text-generation",
    model=model_name,
    max_length=max_target_length,
    num_beams=num_beams,
    early_stopping=early_stopping,
    clean_up_tokenization_spaces=clean_up_tokenization_spaces
)

pred = text2text(input_text)

print('true answer |', label)
print('pred        |', pred)
````

## Training procedure

### Notebook

The notebook of finetuning ([HuggingFace_Notebook_t5-base-portuguese-vocab_question_answering_QA_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/HuggingFace_Notebook_t5_base_portuguese_vocab_question_answering_QA_squad_v11_pt.ipynb)) is in github.

### Hyperparameters

````
# do training and evaluation
do_train = True
do_eval= True

# batch
batch_size = 4
gradient_accumulation_steps = 3
per_device_train_batch_size = batch_size
per_device_eval_batch_size = per_device_train_batch_size*16

# LR, wd, epochs
learning_rate = 1e-4
weight_decay = 0.01
num_train_epochs = 10
fp16 = True

# logs
logging_strategy = "steps"
logging_first_step = True
logging_steps = 3000 # if logging_strategy = "steps"
eval_steps = logging_steps

# checkpoints
evaluation_strategy = logging_strategy
save_strategy = logging_strategy
save_steps = logging_steps
save_total_limit = 3

# best model
load_best_model_at_end = True
metric_for_best_model = "f1" #"loss"
if metric_for_best_model == "loss":
    greater_is_better = False
else:
    greater_is_better = True

# evaluation
num_beams = 1
````

### Training results

````
Num examples = 87510
Num Epochs = 10
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 12
Gradient Accumulation steps = 3
Total optimization steps = 72920

Step   Training Loss   Exact Match   F1
3000   0.776100        61.807001     75.114517
6000   0.545900        65.260170     77.468930
9000   0.460500        66.556291     78.491938
12000  0.393400        66.821192     78.745397
15000  0.379800        66.603595     78.815515
18000  0.298100        67.578051     79.287899
21000  0.303100        66.991485     78.979669
24000  0.251600        67.275307     78.929923
27000  0.237500        66.972564     79.333612
30000  0.220500        66.915799     79.236574
33000  0.182600        67.029328     78.964212
36000  0.190600        66.982025     79.086125
````
robkayinto/distilbert-base-uncased-finetuned-emotion
45bd8e9904cf14c16d892e78269d5b5be9aeb6a5
2022-05-31T15:17:31.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
robkayinto
null
robkayinto/distilbert-base-uncased-finetuned-emotion
38
null
transformers
6,597
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9195 - name: F1 type: f1 value: 0.919829815254287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2192 - Accuracy: 0.9195 - F1: 0.9198 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8338 | 1.0 | 250 | 0.3114 | 0.9065 | 0.9044 | | 0.2467 | 2.0 | 500 | 0.2192 | 0.9195 | 0.9198 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
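Since the card's usage sections are still placeholders, here is a minimal hedged inference sketch; the example sentence is made up, and whether the output shows emotion names or generic `LABEL_0`–`LABEL_5` ids depends on the label mapping stored in the model config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="robkayinto/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset has six classes: sadness, joy, love, anger, fear, surprise.
print(classifier("I can't wait to see you again!"))
```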
sentence-transformers/nli-bert-base-cls-pooling
e59f6ff65548f7fa407ddd9b4e2754454a7a36e5
2021-08-05T08:27:14.000Z
[ "pytorch", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/nli-bert-base-cls-pooling
38
null
sentence-transformers
6,598
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**

# sentence-transformers/nli-bert-base-cls-pooling

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/nli-bert-base-cls-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-bert-base-cls-pooling')
model = AutoModel.from_pretrained('sentence-transformers/nli-bert-base-cls-pooling')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling (the embedding of the first token).
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-bert-base-cls-pooling)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).

If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
veronica320/TE-for-Event-Extraction
38f55b8c833f880a487358e42e6ed5419f93a039
2021-07-30T23:11:05.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
veronica320
null
veronica320/TE-for-Event-Extraction
38
null
transformers
6,599
# TE-for-Event-Extraction ## Model description This is a TE model as part of the event extraction system in the ACL2021 paper: [Zero-shot Event Extraction via Transfer Learning: Challenges and Insights](https://aclanthology.org/2021.acl-short.42/). The pretrained architecture is [roberta-large](https://huggingface.co/roberta-large) and the fine-tuning data is [MNLI](https://cims.nyu.edu/~sbowman/multinli/). The label mapping is: ``` LABEL_0: Contradiction LABEL_1: Neutral LABEL_2: Entailment ``` ## Demo To see how the model works, type a sentence and a hypothesis separated by "\<\/s\>\<\/s\>" in the right-hand-side textbox under "Hosted inference API". Example: - Input: ``` A car bomb exploded Thursday in a crowded outdoor market in the heart of Jerusalem. </s></s> This text is about an attack. ``` - Output: ``` LABEL_2 (Entailment) ``` ## Usage - To use the TE model independently, follow the [huggingface documentation on AutoModelForSequenceClassification](https://huggingface.co/transformers/task_summary.html#sequence-classification). - To use it as part of the event extraction system, please check out [our Github repo](https://github.com/veronica320/Zeroshot-Event-Extraction). ### BibTeX entry and citation info ``` @inproceedings{lyu-etal-2021-zero, title = "Zero-shot Event Extraction via Transfer Learning: {C}hallenges and Insights", author = "Lyu, Qing and Zhang, Hongming and Sulem, Elior and Roth, Dan", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-short.42", doi = "10.18653/v1/2021.acl-short.42", pages = "322--332", abstract = "Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies. In this work, we explore the possibility of zero-shot event extraction by formulating it as a set of Textual Entailment (TE) and/or Question Answering (QA) queries (e.g. {``}A city was attacked{''} entails {``}There is an attack{''}), exploiting pretrained TE/QA models for direct transfer. On ACE-2005 and ERE, our system achieves acceptable results, yet there is still a large gap from supervised approaches, showing that current QA and TE technologies fail in transferring to a different domain. To investigate the reasons behind the gap, we analyze the remaining key challenges, their respective impact, and possible improvement directions.", } ```
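To complement the hosted demo above, a minimal sketch of scoring entailment in Python, reusing the card's own example pair; tokenizing the two texts as a pair reproduces the `</s></s>`-separated format for RoBERTa:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "veronica320/TE-for-Event-Extraction"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A car bomb exploded Thursday in a crowded outdoor market in the heart of Jerusalem."
hypothesis = "This text is about an attack."

# Encoding the pair yields: <s> premise </s></s> hypothesis </s>
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Label order follows the mapping in the card: 0=contradiction, 1=neutral, 2=entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.3f}")
```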