| Column | Type | Range / values |
|---|---|---|
| `modelId` | string | length 4–112 |
| `sha` | string | length 40 |
| `lastModified` | string | length 24 |
| `tags` | sequence | |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2–38 |
| `config` | null | |
| `id` | string | length 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | length 0–186k |
IIC/marimari-r2r-mlsum
6230c21c7210c1ab68f32fa56dc1c1f9d3cc27b6
2022-04-13T16:49:56.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "es", "dataset:mlsum", "arxiv:1907.12461", "transformers", "summarization", "seq2seq", "model-index", "autotrain_compatible" ]
summarization
false
IIC
null
IIC/marimari-r2r-mlsum
66
2
transformers
5,500
---
language:
  - es
tags:
  - summarization
  - seq2seq
datasets:
  - mlsum
metrics:
  - rouge2
  - rouge1
  - rougel
  - rougelsum
model-index:
  - name: marimari-r2r-mlsum
    results:
      - task:
          type: summarization
          name: abstractive summarization
        dataset:
          type: mlsum
          name: mlsum-es
          args: es
        metrics:
          - type: rouge1
            value: 28.7802
            name: rouge1
          - type: rouge2
            value: 10.6748
            name: rouge2
          - type: rougeL
            value: 23.0447
            name: rougeL
          - type: rougeLsum
            value: 23.4055
            name: rougeLsum
---

<img src="https://huggingface.co/IIC/marimari-r2r-mlsum/resolve/main/marimariLogo.png"/>

This is a model for text summarization in Spanish. It has been trained on the Spanish portion of [mlsum](https://huggingface.co/datasets/mlsum). For that, MariMari was created. It is named this way because it is an encoder-decoder model built from the MarIA model, specifically the [RoBERTa model from the MarIA project](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne). For building the encoder-decoder model, [this paper was followed](https://arxiv.org/abs/1907.12461), which has a [direct implementation in transformers](https://huggingface.co/docs/transformers/master/model_doc/encoder-decoder). As there are no native encoder-decoder models in Spanish, such as BART or T5, we decided to leverage the capacity of the RoBERTa model of the MarIA project: it has shown great results on several NLU tasks, so it was natural to expect that it could perform well on NLG tasks when trained properly.

For tuning the hyperparameters of the model we used [Optuna](https://optuna.org/), with only 10 trials (7 of them initial random trials), as [the dataset chosen for training the model, mlsum](https://huggingface.co/datasets/mlsum), was huge. The hyperparameter search space was the following:

```python
def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float(
            "learning_rate", 3e-5, 7e-5, log=True
        ),
        "num_train_epochs": trial.suggest_categorical(
            "num_train_epochs", [7]
        ),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [16]),
        "per_device_eval_batch_size": trial.suggest_categorical(
            "per_device_eval_batch_size", [32]),
        "gradient_accumulation_steps": trial.suggest_categorical(
            "gradient_accumulation_steps", [2, 4, 8]),
        "warmup_steps": trial.suggest_categorical(
            "warmup_steps", [50, 100, 500, 1000]
        ),
        "weight_decay": trial.suggest_float(
            "weight_decay", 0.0, 0.1
        ),
    }
```

The reported results are on the test split of mlsum. MariMari-r2r-mlsum performs better on mlsum summarization than the previous best model in this regard, [beto2beto](https://huggingface.co/LeoCordoba/beto2beto-mlsum). The complete metrics on test are:

```json
{"rouge1": 28.7802, "rouge2": 10.6748, "rougeL": 23.0447, "rougeLsum": 23.4055, "gen_len": 25.7803}
```

This model is really easy to use; with the following lines of code you can start summarizing your documents in Spanish:

```python
from transformers import EncoderDecoderModel, AutoTokenizer

text = "Hola esto es un ejemplo de texto a resumir. Poco hay que resumir aquí, pero es sólo de muestra."

tokenizer = AutoTokenizer.from_pretrained("IIC/marimari-r2r-mlsum")
model = EncoderDecoderModel.from_pretrained("IIC/marimari-r2r-mlsum")

input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```

### Contributions

Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model.
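The MariMari card above explains that the encoder-decoder was warm-started from the MarIA RoBERTa checkpoint following the cited paper (arXiv:1907.12461), but it does not show that construction step. Below is a minimal sketch of how such a warm start can be done with the `EncoderDecoderModel` API referenced in the card; the generation settings are illustrative assumptions, not the authors' training configuration.

```python
# Hedged sketch: warm-starting a Spanish encoder-decoder from the MarIA RoBERTa
# checkpoint, as described in the card. The generation settings below are
# illustrative assumptions, not the authors' configuration.
from transformers import AutoTokenizer, EncoderDecoderModel

checkpoint = "PlanTL-GOB-ES/roberta-base-bne"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Tie encoder and decoder to the same pretrained checkpoint; the decoder gets
# cross-attention layers added and is then trained with a causal LM objective.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)

# Generation needs the decoder start / pad / eos token ids to be set explicitly.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.max_length = 64  # assumed summary length budget
model.config.num_beams = 4    # assumed beam size
```

From here, the tied model would be fine-tuned on mlsum with a standard seq2seq training loop (for example `Seq2SeqTrainer`).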
questgen/all-mpnet-base-v2-feature-extraction-pipeline
f88ec83121852d920f89e717d4adedf760a94b57
2022-05-15T06:29:59.000Z
[ "pytorch", "mpnet", "fill-mask", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "sentence-transformers", "feature-extraction", "sentence-similarity", "license:apache-2.0" ]
feature-extraction
false
questgen
null
questgen/all-mpnet-base-v2-feature-extraction-pipeline
66
null
sentence-transformers
5,501
---
pipeline_tag: feature-extraction
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
language: en
license: apache-2.0
---

# all-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as advice from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 384 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply the cross entropy loss by comparing with the true pairs (a minimal sketch of this objective is given after the dataset table below).

#### Hyper parameters

We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
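The fine-tuning section above describes the objective only in words: cosine similarities across all in-batch pairs, then cross entropy against the true pair. Below is a minimal sketch of that in-batch contrastive loss; it is not the project's `train_script.py`, and the scale factor and batch construction are illustrative assumptions.

```python
# Hedged sketch of the in-batch contrastive objective described above:
# cosine similarities between every (anchor, candidate) pair in the batch,
# then cross-entropy against the "true pair" on the diagonal.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              positive_emb: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    # anchor_emb, positive_emb: (batch_size, dim) sentence embeddings
    anchor = F.normalize(anchor_emb, p=2, dim=1)
    positive = F.normalize(positive_emb, p=2, dim=1)
    # (batch_size, batch_size) cosine-similarity matrix: row i scored against all candidates
    scores = scale * anchor @ positive.T
    # The true pair for row i is column i; every other column acts as a random negative.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Example with random embeddings standing in for model outputs
a = torch.randn(8, 768)
b = torch.randn(8, 768)
print(in_batch_contrastive_loss(a, b))
```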
zhifei/autotrain-chinese-title-summarization-1060936832
e3884592b3c6f4f8b619888e41dad01dc14f9970
2022-06-30T12:23:58.000Z
[ "pytorch", "mt5", "text2text-generation", "unk", "dataset:zhifei/autotrain-data-chinese-title-summarization", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
zhifei
null
zhifei/autotrain-chinese-title-summarization-1060936832
66
null
transformers
5,502
---
tags: autotrain
language: unk
widget:
  - text: "I love AutoTrain 🤗"
datasets:
  - zhifei/autotrain-data-chinese-title-summarization
co2_eq_emissions: 3.841483701875158
---

# Model Trained Using AutoTrain

- Problem type: Summarization
- Model ID: 1060936832
- CO2 Emissions (in grams): 3.841483701875158

## Validation Metrics

- Loss: 0.5115200877189636
- Rouge1: 27.3016
- Rouge2: 10.4762
- RougeL: 27.3016
- RougeLsum: 27.1111
- Gen Len: 14.3619

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zhifei/autotrain-chinese-title-summarization-1060936832
```
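The AutoTrain card above only shows the hosted Inference API via cURL. A local-usage sketch with `transformers` follows; it assumes, per the `mt5` / `text2text-generation` tags of this record, that the checkpoint behaves as a standard seq2seq summarization model.

```python
# Hedged sketch: local usage with transformers, assuming (per the tags above)
# that this AutoTrain checkpoint is an mT5-style seq2seq summarization model.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="zhifei/autotrain-chinese-title-summarization-1060936832",
)

text = "这是一个需要生成标题的中文新闻正文示例。"  # example Chinese body text (illustrative)
print(summarizer(text, max_length=32)[0]["summary_text"])
```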
gaunernst/bert-small-uncased
10b42d998a29a4ac8b43c866cc1184cb4880cdd4
2022-07-02T07:20:15.000Z
[ "pytorch", "bert", "transformers", "license:apache-2.0" ]
null
false
gaunernst
null
gaunernst/bert-small-uncased
66
null
transformers
5,503
--- license: apache-2.0 ---
yazinga/DialoGPT-medium-scout
81a1b77c99e99f78cde1d2271d5f1c0f9decc015
2022-07-21T20:19:50.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
yazinga
null
yazinga/DialoGPT-medium-scout
66
null
transformers
5,504
--- tags: - conversational --- # Scout DialoGPT Model
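The Scout card above names the base architecture (DialoGPT) and the conversational task but includes no usage snippet. Below is a hedged sketch using the standard DialoGPT chat recipe (turns concatenated with the EOS token); this prompting convention is an assumption carried over from the base model, not something stated in the card.

```python
# Hedged sketch: standard DialoGPT-style chat loop, assuming this fine-tune
# follows the usual DialoGPT prompting convention (turns joined by eos_token).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yazinga/DialoGPT-medium-scout")
model = AutoModelForCausalLM.from_pretrained("yazinga/DialoGPT-medium-scout")

chat_history_ids = None
for step in range(3):
    user_input = input(">> User: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens (the bot's reply)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Scout:", reply)
```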
HooshvareLab/gpt2-fa-comment
a11c3401aa46429b13184a6c94088bd43728c7fd
2021-05-21T10:47:25.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "fa", "transformers", "license:apache-2.0" ]
text-generation
false
HooshvareLab
null
HooshvareLab/gpt2-fa-comment
65
null
transformers
5,505
--- language: fa license: apache-2.0 widget: - text: "<s>نمونه دیدگاه هم خوب هم بد به طور کلی <sep>" - text: "<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و طعم <sep>" - text: "<s>نمونه دیدگاه خوب از نظر بازی و کارگردانی <sep>" - text: "<s>نمونه دیدگاه خیلی خوب از نظر بازی و صحنه و داستان <sep>" - text: "<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و طعم و کیفیت <sep>" --- # Persian Comment Generator The model can generate comments based on your aspects, and the model was fine-tuned on [persiannlp/parsinlu](https://github.com/persiannlp/parsinlu). Currently, the model only supports aspects in the food and movie scope. You can see the whole aspects in the following section. ## Comments Aspects ```text <s>نمونه دیدگاه هم خوب هم بد به طور کلی <sep> <s>نمونه دیدگاه خوب به طور کلی <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر ارزش غذایی و ارزش خرید <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم و بسته بندی <sep> <s>نمونه دیدگاه خوب از نظر کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت <sep> <s>نمونه دیدگاه منفی از نظر کیفیت <sep> <s>نمونه دیدگاه خوب از نظر طعم <sep> <s>نمونه دیدگاه خیلی خوب به طور کلی <sep> <s>نمونه دیدگاه خوب از نظر بسته بندی <sep> <s>نمونه دیدگاه منفی از نظر کیفیت و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارسال و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و طعم <sep> <s>نمونه دیدگاه منفی به طور کلی <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر ارسال <sep> <s>نمونه دیدگاه منفی از نظر طعم <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش خرید <sep> <s>نمونه دیدگاه نظری ندارم به طور کلی <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم <sep> <s>نمونه دیدگاه خیلی منفی به طور کلی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و کیفیت و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت <sep> <s>نمونه دیدگاه منفی از نظر طعم و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر طعم و کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارسال <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و طعم <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و کیفیت و بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی و طعم و ارزش خرید <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت و بسته بندی <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و بسته بندی و کیفیت <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارزش خرید و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی و ارسال <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و ارزش 
غذایی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت <sep> <s>نمونه دیدگاه منفی از نظر بسته بندی <sep> <s>نمونه دیدگاه خوب از نظر طعم و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و ارزش غذایی <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر طعم و کیفیت و بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر ارسال و کیفیت <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی و ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر طعم و ارزش غذایی <sep> <s>نمونه دیدگاه منفی از نظر ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و طعم <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم و ارسال <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر طعم و ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر طعم و ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و ارزش خرید و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و ارسال و طعم و ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و طعم و ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و بسته بندی و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر ارزش خرید و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر بسته بندی و کیفیت و طعم <sep> <s>نمونه دیدگاه خوب از نظر ارسال <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و بسته بندی و ارزش غذایی و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم و ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر طعم و ارسال <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خوب از نظر بسته بندی و ارزش خرید <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و طعم <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و ارزش خرید و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش غذایی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و ارزش خرید <sep> <s>نمونه دیدگاه منفی از نظر طعم و ارزش غذایی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارسال <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و طعم <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم و ارزش غذایی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر طعم و کیفیت و ارسال <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و بسته بندی و طعم و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر بسته بندی و طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر ارسال و طعم <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و ارسال <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی منفی از نظر بسته بندی و 
کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و کیفیت و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه منفی از نظر بسته بندی و کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت و ارزش خرید و بسته بندی <sep> <s>نمونه دیدگاه خوب از نظر ارزش غذایی و ارسال <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و طعم و ارزش خرید و ارسال <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارسال و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارسال و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش خرید و ارسال <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و ارزش خرید و طعم <sep> <s>نمونه دیدگاه خوب از نظر بسته بندی و کیفیت <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و بسته بندی و ارسال <sep> <s>نمونه دیدگاه خیلی منفی از نظر بسته بندی و طعم و ارزش خرید <sep> <s>نمونه دیدگاه نظری ندارم از نظر بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و بسته بندی و طعم <sep> <s>نمونه دیدگاه خوب از نظر طعم و بسته بندی <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش خرید و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و بسته بندی <sep> <s>نمونه دیدگاه خوب از نظر ارزش خرید و ارزش غذایی <sep> <s>نمونه دیدگاه منفی از نظر طعم و بسته بندی <sep> <s>نمونه دیدگاه منفی از نظر کیفیت و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارزش غذایی و بسته بندی <sep> <s>نمونه دیدگاه خوب از نظر ارسال و بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارسال <sep> <s>نمونه دیدگاه نظری ندارم از نظر طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و بسته بندی <sep> <s>نمونه دیدگاه منفی از نظر ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارسال و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و بسته بندی <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و بسته بندی و ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر طعم و بسته بندی و ارزش خرید <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و ارسال <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و ارزش غذایی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم و ارزش غذایی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارسال و ارزش خرید <sep> <s>نمونه دیدگاه نظری ندارم از نظر ارزش غذایی <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارسال و ارزش خرید و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و طعم و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارسال و بسته بندی <sep> <s>نمونه دیدگاه منفی از نظر بسته بندی و طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و ارسال <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارسال و کیفیت <sep> <s>نمونه دیدگاه خوب از نظر کیفیت و ارسال <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و ارزش غذایی <sep> <s>نمونه دیدگاه خوب از نظر ارزش غذایی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و ارزش غذایی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارسال و بسته بندی و کیفیت <sep> <s>نمونه دیدگاه منفی از نظر بسته بندی و طعم <sep> <s>نمونه دیدگاه منفی از نظر بسته بندی و ارزش غذایی <sep> <s>نمونه دیدگاه منفی از نظر طعم و کیفیت و ارزش خرید <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش غذایی و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و ارزش خرید <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و طعم و بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و کیفیت و طعم <sep> <s>نمونه دیدگاه منفی از نظر ارزش خرید و کیفیت و طعم <sep> 
<s>نمونه دیدگاه منفی از نظر کیفیت و طعم و ارزش غذایی <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارسال و کیفیت و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر طعم و بسته بندی و ارسال <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی و طعم <sep> <s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش غذایی و کیفیت <sep> <s>نمونه دیدگاه منفی از نظر ارزش خرید و طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی منفی از نظر کیفیت و طعم و بسته بندی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر ارسال و ارزش خرید <sep> <s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و طعم و کیفیت <sep> <s>نمونه دیدگاه خیلی منفی از نظر طعم و ارسال <sep> <s>نمونه دیدگاه منفی از نظر موسیقی و بازی <sep> <s>نمونه دیدگاه منفی از نظر داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر صدا <sep> <s>نمونه دیدگاه خیلی منفی از نظر داستان <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان و فیلمبرداری و کارگردانی و بازی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بازی <sep> <s>نمونه دیدگاه منفی از نظر داستان و بازی <sep> <s>نمونه دیدگاه منفی از نظر بازی <sep> <s>نمونه دیدگاه خیلی خوب از نظر داستان و کارگردانی و بازی <sep> <s>نمونه دیدگاه خیلی منفی از نظر داستان و بازی <sep> <s>نمونه دیدگاه خوب از نظر بازی <sep> <s>نمونه دیدگاه خیلی منفی از نظر بازی و داستان و کارگردانی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی <sep> <s>نمونه دیدگاه خوب از نظر بازی و داستان <sep> <s>نمونه دیدگاه خوب از نظر داستان و بازی <sep> <s>نمونه دیدگاه خوب از نظر داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر داستان و بازی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و داستان <sep> <s>نمونه دیدگاه خیلی منفی از نظر داستان و کارگردانی و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی منفی از نظر بازی <sep> <s>نمونه دیدگاه خیلی منفی از نظر کارگردانی <sep> <s>نمونه دیدگاه منفی از نظر کارگردانی و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی <sep> <s>نمونه دیدگاه خوب از نظر کارگردانی و بازی <sep> <s>نمونه دیدگاه خیلی خوب از نظر صحنه و کارگردانی <sep> <s>نمونه دیدگاه منفی از نظر بازی و کارگردانی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و داستان و کارگردانی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر فیلمبرداری <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و کارگردانی و فیلمبرداری و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی و موسیقی <sep> <s>نمونه دیدگاه خوب از نظر صحنه و بازی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و موسیقی و کارگردانی <sep> <s>نمونه دیدگاه خوب از نظر داستان و کارگردانی <sep> <s>نمونه دیدگاه خوب از نظر بازی و کارگردانی <sep> <s>نمونه دیدگاه خیلی منفی از نظر بازی و کارگردانی <sep> <s>نمونه دیدگاه منفی از نظر کارگردانی و موسیقی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بازی و داستان <sep> <s>نمونه دیدگاه خوب از نظر کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر بازی و کارگردانی <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و داستان <sep> <s>نمونه دیدگاه خیلی منفی از نظر داستان و کارگردانی <sep> <s>نمونه دیدگاه خیلی خوب از نظر داستان و کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان <sep> <s>نمونه دیدگاه خوب از نظر بازی و داستان و موسیقی و کارگردانی و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی منفی از نظر داستان و بازی و کارگردانی <sep> <s>نمونه دیدگاه خیلی منفی از نظر بازی و داستان <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان و بازی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان و بازی و کارگردانی <sep> <s>نمونه دیدگاه منفی از نظر بازی و داستان <sep> <s>نمونه دیدگاه خوب از نظر فیلمبرداری و صحنه و موسیقی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر 
داستان و کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان و کارگردانی و بازی <sep> <s>نمونه دیدگاه نظری ندارم از نظر بازی <sep> <s>نمونه دیدگاه منفی از نظر داستان و کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان و بازی و صحنه <sep> <s>نمونه دیدگاه خوب از نظر کارگردانی و داستان و بازی و فیلمبرداری <sep> <s>نمونه دیدگاه خوب از نظر بازی و صحنه و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و صحنه و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و موسیقی و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و صحنه <sep> <s>نمونه دیدگاه خیلی خوب از نظر فیلمبرداری و صحنه و داستان و کارگردانی <sep> <s>نمونه دیدگاه منفی از نظر کارگردانی و بازی <sep> <s>نمونه دیدگاه منفی از نظر کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر داستان و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر فیلمبرداری و بازی و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی و داستان و صحنه <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر موسیقی و کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کارگردانی و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر موسیقی و صحنه <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر صحنه و فیلمبرداری و داستان و بازی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و داستان و موسیقی و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی منفی از نظر کارگردانی و صدا و صحنه و داستان <sep> <s>نمونه دیدگاه خوب از نظر داستان و کارگردانی و بازی <sep> <s>نمونه دیدگاه منفی از نظر داستان و بازی و کارگردانی <sep> <s>نمونه دیدگاه خوب از نظر داستان و بازی و موسیقی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و کارگردانی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کارگردانی <sep> <s>نمونه دیدگاه خیلی منفی از نظر کارگردانی و بازی و صحنه <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر کارگردانی و بازی <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر صحنه و فیلمبرداری و داستان <sep> <s>نمونه دیدگاه خوب از نظر موسیقی و داستان <sep> <s>نمونه دیدگاه منفی از نظر موسیقی و بازی و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر صدا و بازی <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و صحنه و فیلمبرداری <sep> <s>نمونه دیدگاه خیلی منفی از نظر بازی و فیلمبرداری و داستان و کارگردانی <sep> <s>نمونه دیدگاه خیلی منفی از نظر صحنه <sep> <s>نمونه دیدگاه منفی از نظر داستان و صحنه <sep> <s>نمونه دیدگاه منفی از نظر بازی و صحنه و صدا <sep> <s>نمونه دیدگاه خیلی منفی از نظر فیلمبرداری و صدا <sep> <s>نمونه دیدگاه خیلی خوب از نظر موسیقی <sep> <s>نمونه دیدگاه خوب از نظر بازی و کارگردانی و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و فیلمبرداری و موسیقی و کارگردانی و داستان <sep> <s>نمونه دیدگاه هم خوب هم بد از نظر فیلمبرداری و داستان و بازی <sep> <s>نمونه دیدگاه منفی از نظر صحنه و فیلمبرداری و داستان <sep> <s>نمونه دیدگاه خیلی خوب از نظر بازی و کارگردانی و داستان <sep> ``` ## Questions? Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo.
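The aspect prompts listed above define the model's expected input format, but the card stops short of a generation example. A hedged sketch with the `transformers` text-generation pipeline follows; the sampling parameters are illustrative assumptions rather than recommended settings.

```python
# Hedged sketch: generating a Persian comment from one of the aspect prompts
# listed above. Sampling parameters are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="HooshvareLab/gpt2-fa-comment")

# Prompt taken from the aspect list above:
# "a very positive comment regarding acting, scenes, and story"
prompt = "<s>نمونه دیدگاه خیلی خوب از نظر بازی و صحنه و داستان <sep>"

outputs = generator(prompt, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=1)
print(outputs[0]["generated_text"])
```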
KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation
c8a9275fdc413f4f8e113cc924861f1994b04d5a
2021-12-10T00:34:26.000Z
[ "pytorch", "roberta", "token-classification", "lzh", "transformers", "classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
KoichiYasuoka
null
KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation
65
1
transformers
5,506
--- language: - "lzh" tags: - "classical chinese" - "literary chinese" - "ancient chinese" - "sentence segmentation" - "token-classification" license: "apache-2.0" pipeline_tag: "token-classification" widget: - text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎" --- # roberta-classical-chinese-base-sentence-segmentation ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for single-character sentence with token-class "S"). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation") s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p))) ``` ## Reference Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
addy88/perceiver_image_classifier
d13564b81f342b5223ac5001a216ef89cfe2c8a4
2022-01-02T13:05:37.000Z
[ "pytorch", "perceiver", "image-classification", "transformers" ]
image-classification
false
addy88
null
addy88/perceiver_image_classifier
65
null
transformers
5,507
### How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned import requests from PIL import Image feature_extractor = PerceiverFeatureExtractor.from_pretrained("addy88/perceiver_image_classifier") model = PerceiverForImageClassificationLearned.from_pretrained("addy88/perceiver_image_classifier") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # prepare input encoding = feature_extractor(image, return_tensors="pt") inputs = encoding.pixel_values # forward pass outputs = model(inputs) logits = outputs.logits print("Predicted class:", model.config.id2label[logits.argmax(-1).item()]) >>> should print Predicted class: tabby, tabby cat ```
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
7e66c09a995a61e7f37d29d72e064383ff7ca13e
2021-10-17T00:32:35.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:amansolanki/autonlp-data-Tweet-Sentiment-Extraction", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
amansolanki
null
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
65
null
transformers
5,508
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - amansolanki/autonlp-data-Tweet-Sentiment-Extraction co2_eq_emissions: 3.651199395353127 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 20114061 - CO2 Emissions (in grams): 3.651199395353127 ## Validation Metrics - Loss: 0.5046541690826416 - Accuracy: 0.8036219581211093 - Macro F1: 0.807095210403678 - Micro F1: 0.8036219581211093 - Weighted F1: 0.8039634739225368 - Macro Precision: 0.8076842795233988 - Micro Precision: 0.8036219581211093 - Weighted Precision: 0.8052135235094771 - Macro Recall: 0.8075241470527056 - Micro Recall: 0.8036219581211093 - Weighted Recall: 0.8036219581211093 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
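The Python snippet in the card above stops at the raw model outputs. The short sketch below carries it through to a predicted sentiment label; it assumes the checkpoint's `config.id2label` mapping is populated (as AutoNLP models normally are) and omits `use_auth_token` on the assumption that the model is publicly accessible.

```python
# Hedged sketch: end-to-end classification with this checkpoint, mapping the
# logits to a label via config.id2label (assumed to be populated by AutoNLP).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
predicted_id = int(probs.argmax())
print(model.config.id2label[predicted_id], float(probs[predicted_id]))
```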
ddobokki/electra-small-nli-sts
3246a95b1b7fba01723b63e394543900f9deaeb5
2022-03-28T07:49:33.000Z
[ "pytorch", "electra", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers", "ko" ]
sentence-similarity
false
ddobokki
null
ddobokki/electra-small-nli-sts
65
1
sentence-transformers
5,509
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - ko --- # ddobokki/electra-small-nli-sts This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ddobokki/electra-small-nli-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ddobokki/electra-small-nli-sts') model = AutoModel.from_pretrained('ddobokki/electra-small-nli-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ddobokki/electra-small-nli-sts) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 9039 with parameters: ``` {'batch_size': 64} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 903, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 904, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ElectraModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
huggingtweets/jesusisathembo
df9d11183402edc0c16c9fa7357808e68220719e
2021-05-22T09:38:03.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/jesusisathembo
65
null
transformers
5,510
--- language: en thumbnail: https://www.huggingtweets.com/jesusisathembo/1614096400764/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1331300707216068609/s4UcWg76_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">serial experiments maK 🤖 AI Bot </div> <div style="font-size: 15px">@jesusisathembo bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@jesusisathembo's tweets](https://twitter.com/jesusisathembo). | Data | Quantity | | --- | --- | | Tweets downloaded | 3161 | | Retweets | 1164 | | Short tweets | 243 | | Tweets kept | 1754 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bqj16zu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jesusisathembo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kuiuxq9x) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kuiuxq9x/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jesusisathembo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LACAI/DialoGPT-small-SGD
577683f7fd450f9167ea72c5e398bf2046490ade
2022-01-02T04:08:07.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
LACAI
null
LACAI/DialoGPT-small-SGD
65
1
transformers
5,511
Base model: [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small)

Fine-tuned for dialogue response generation on the [Schema Guided Dialogue Dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue) (Rastogi et al., 2019).

Three additional special tokens were added during the fine-tuning process:

- `<|pad|>` padding token
- `<|user|>` speaker control token to prompt user responses
- `<|system|>` speaker control token to prompt system responses
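The card above describes the `<|user|>` and `<|system|>` control tokens but not how to prompt with them. Below is a hedged sketch; the exact turn format used during fine-tuning is an assumption inferred from the token descriptions, not taken from the training code.

```python
# Hedged sketch: prompting with the speaker control tokens described above.
# The exact turn format used during fine-tuning is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LACAI/DialoGPT-small-SGD")
model = AutoModelForCausalLM.from_pretrained("LACAI/DialoGPT-small-SGD")

# Prompt a system response to a user turn, ending with the <|system|> control token.
prompt = "<|user|> I would like to book a table for two tonight. <|system|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids,
    max_new_tokens=40,
    pad_token_id=tokenizer.pad_token_id,  # the card says a <|pad|> token was added
)
# Decode only the generated continuation (the system response)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```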
raynardj/ner-gene-dna-rna-jnlpba-pubmed
231b91cf27c49cb398a112e24e439dc407a884f3
2021-11-05T07:32:32.000Z
[ "pytorch", "roberta", "token-classification", "en", "dataset:jnlpba", "transformers", "ner", "gene", "protein", "rna", "bioinfomatics", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
raynardj
null
raynardj/ner-gene-dna-rna-jnlpba-pubmed
65
1
transformers
5,512
---
language:
  - en
tags:
  - ner
  - gene
  - protein
  - rna
  - bioinfomatics
license: apache-2.0
datasets:
  - jnlpba
widget:
  - text: "It consists of 25 exons encoding a 1,278-amino acid glycoprotein that is composed of 13 transmembrane domains"
---

# NER to find Gene & Gene products

> The model was trained on the jnlpba dataset, starting from this [pubmed-pretrained RoBERTa model](/raynardj/roberta-pubmed)

All the labels (the possible token classes):

```json
{"label2id": {
    "DNA": 2,
    "O": 0,
    "RNA": 5,
    "cell_line": 4,
    "cell_type": 3,
    "protein": 1
  }
}
```

Note that we removed the 'B-'/'I-' prefixes from the data labels. 🗡

## This is the template we suggest for using the model

```python
from transformers import pipeline

PRETRAINED = "raynardj/ner-gene-dna-rna-jnlpba-pubmed"
ner = pipeline(task="ner", model=PRETRAINED, tokenizer=PRETRAINED)
ner("Your text", aggregation_strategy="first")
```

And here is a helper to make the output more consecutive ⭐️

```python
import pandas as pd
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)

def clean_output(outputs):
    results = []
    current = []
    last_idx = 0
    # group predictions that sit on consecutive token positions
    for output in outputs:
        if output["index"]-1 == last_idx:
            current.append(output)
        else:
            results.append(current)
            current = [output, ]
        last_idx = output["index"]
    if len(current) > 0:
        results.append(current)

    # from tokens to strings
    strings = []
    for c in results:
        tokens = []
        starts = []
        ends = []
        for o in c:
            tokens.append(o['word'])
            starts.append(o['start'])
            ends.append(o['end'])
        new_str = tokenizer.convert_tokens_to_string(tokens)
        if new_str != '':
            strings.append(dict(
                word=new_str,
                start=min(starts),
                end=max(ends),
                entity=c[0]['entity']
            ))
    return strings

def entity_table(pipeline, **pipeline_kw):
    if "aggregation_strategy" not in pipeline_kw:
        pipeline_kw["aggregation_strategy"] = "first"
    def create_table(text):
        return pd.DataFrame(
            clean_output(
                pipeline(text, **pipeline_kw)
            )
        )
    return create_table

# will return a dataframe
entity_table(ner)(YOUR_VERY_CONTENTFUL_TEXT)
```

> Check out our NER models on:
* [gene and gene products](/raynardj/ner-gene-dna-rna-jnlpba-pubmed)
* [chemical substance](/raynardj/ner-chemical-bionlp-bc5cdr-pubmed)
* [disease](/raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed)
razent/spbert-mlm-base
e98ed8adec9e4cd04a2743cf3f5a10ccabe8a4db
2022-03-15T03:25:56.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "code", "arxiv:2106.09997", "transformers", "question-answering", "knowledge-graph", "autotrain_compatible" ]
question-answering
false
razent
null
razent/spbert-mlm-base
65
null
transformers
5,513
--- language: - code tags: - question-answering - knowledge-graph --- # SPBERT MLM (Initialized) ## Introduction Paper: [SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs](https://arxiv.org/abs/2106.09997) Authors: _Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen_ ## How to use For more details, do check out [our Github repo](https://github.com/heraclex12/NLP2SPARQL). Here is an example in Pytorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-base') model = AutoModel.from_pretrained("razent/spbert-mlm-base") text = "select * where brack_open var_a var_b var_c sep_dot brack_close" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` or Tensorflow ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-base') model = TFAutoModel.from_pretrained("razent/spbert-mlm-base") text = "select * where brack_open var_a var_b var_c sep_dot brack_close" encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Citation ``` @misc{tran2021spbert, title={SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs}, author={Hieu Tran and Long Phan and James Anibal and Binh T. Nguyen and Truong-Son Nguyen}, year={2021}, eprint={2106.09997}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
kazandaev/opus-mt-en-ru-finetuned-v3
37eb193690cff71e2e0c82e2a09523e0a99becce
2022-03-08T10:50:47.000Z
[ "pytorch", "tensorboard", "rust", "marian", "text2text-generation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
kazandaev
null
kazandaev/opus-mt-en-ru-finetuned-v3
65
null
transformers
5,514
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-en-ru-finetuned-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ru-finetuned-v3 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8300 - Bleu: 36.9731 - Gen Len: 29.5504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 85 - eval_batch_size: 85 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:| | 1.073 | 1.0 | 71642 | 0.8276 | 37.1982 | 29.5248 | | 1.0769 | 2.0 | 143284 | 0.8289 | 37.1495 | 29.5305 | | 1.0772 | 3.0 | 214926 | 0.8294 | 37.1719 | 29.5099 | | 1.0767 | 4.0 | 286568 | 0.8297 | 37.0152 | 29.5522 | | 1.0668 | 5.0 | 358210 | 0.8300 | 36.9731 | 29.5504 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
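The auto-generated card above reports BLEU and the training setup but no usage example. A minimal sketch follows; it assumes the checkpoint keeps the standard Marian `opus-mt-en-ru` translation interface it was fine-tuned from.

```python
# Hedged sketch: English-to-Russian translation with this checkpoint, assuming
# it keeps the standard Marian opus-mt-en-ru interface it was fine-tuned from.
from transformers import pipeline

translator = pipeline("translation_en_to_ru", model="kazandaev/opus-mt-en-ru-finetuned-v3")
result = translator("The model was fine-tuned for five epochs.", max_length=64)
print(result[0]["translation_text"])
```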
clapika2010/adult_finetuned
87bb8dcfb670a100d351462b78c17a1a568d849e
2022-03-13T00:53:24.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
clapika2010
null
clapika2010/adult_finetuned
65
null
transformers
5,515
Entry not found
fabiochiu/t5-small-medium-title-generation
512b08cb241e724542ff9791bd01937294593dd2
2022-05-17T08:46:22.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "transformers", "generated_from_keras_callback", "model-index", "autotrain_compatible" ]
text2text-generation
false
fabiochiu
null
fabiochiu/t5-small-medium-title-generation
65
null
transformers
5,516
--- tags: - generated_from_keras_callback model-index: - name: t5-small-medium-title-generation results: [] widget: - text: "summarize: Many financial institutions started building conversational AI, prior to the Covid19 pandemic, as part of a digital transformation initiative. These initial solutions were high profile, highly personalized virtual assistants — like the Erica chatbot from Bank of America. As the pandemic hit, the need changed as contact centers were under increased pressures. As Cathal McGloin of ServisBOT explains in 'how it started, and how it is going,' financial institutions were looking for ways to automate solutions to help get back to 'normal' levels of customer service. This resulted in a change from the 'future of conversational AI' to a real tactical assistant that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend. Banks were originally looking to conversational AI as part of digital transformation to keep up with the times. However, with the pandemic, it has been more about customer retention and customer satisfaction. In addition, new use cases came about as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita Kumar of Deloitte points out, banks were dealing with an influx of calls about new concerns, like questions around the Paycheck Protection Program (PPP) loans. This resulted in an increase in volume, without enough agents to assist customers, and tipped the scale to incorporate conversational AI. When choosing initial use cases to support, financial institutions often start with high volume, low complexity tasks. For example, password resets, checking account balances, or checking the status of a transaction, as Vinita points out. From there, the use cases can evolve as the banks get more mature in developing conversational AI, and as the customers become more engaged with the solutions. Cathal indicates another good way for banks to start is looking at use cases that are a pain point, and also do not require a lot of IT support. Some financial institutions may have a multi-year technology roadmap, which can make it harder to get a new service started. A simple chatbot for document collection in an onboarding process can result in high engagement, and a high return on investment. For example, Cathal has a banking customer that implemented a chatbot to capture a driver’s license to be used in the verification process of adding an additional user to an account — it has over 85% engagement with high satisfaction. An interesting use case Haritha discovered involved educating customers on financial matters. People feel more comfortable asking a chatbot what might be considered a 'dumb' question, as the chatbot is less judgmental. Users can be more ambiguous with their questions as well, not knowing the right words to use, as chatbot can help narrow things down." example_title: "Banking on Bots" --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Model description This model is [t5-small](https://huggingface.co/t5-small) fine-tuned on the [190k Medium Articles](https://www.kaggle.com/datasets/fabiochiusano/medium-articles) dataset for predicting article titles using the article textual content as input. 
There are two versions of the model: - [t5-small-medium-title-generation](https://huggingface.co/fabiochiu/t5-small-medium-title-generation): trained from [t5-small](https://huggingface.co/t5-small). - [t5-base-medium-title-generation](https://huggingface.co/fabiochiu/t5-base-medium-title-generation): trained from [t5-base](https://huggingface.co/t5-base). Visit the [title-generation space](https://huggingface.co/spaces/fabiochiu/title-generation) to try the model with different text generation parameters. # How to use the model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import nltk nltk.download('punkt') tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-small-medium-title-generation") model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-small-medium-title-generation") text = """ Many financial institutions started building conversational AI, prior to the Covid19 pandemic, as part of a digital transformation initiative. These initial solutions were high profile, highly personalized virtual assistants — like the Erica chatbot from Bank of America. As the pandemic hit, the need changed as contact centers were under increased pressures. As Cathal McGloin of ServisBOT explains in “how it started, and how it is going,” financial institutions were looking for ways to automate solutions to help get back to “normal” levels of customer service. This resulted in a change from the “future of conversational AI” to a real tactical assistant that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend. Banks were originally looking to conversational AI as part of digital transformation to keep up with the times. However, with the pandemic, it has been more about customer retention and customer satisfaction. In addition, new use cases came about as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita Kumar of Deloitte points out, banks were dealing with an influx of calls about new concerns, like questions around the Paycheck Protection Program (PPP) loans. This resulted in an increase in volume, without enough agents to assist customers, and tipped the scale to incorporate conversational AI. When choosing initial use cases to support, financial institutions often start with high volume, low complexity tasks. For example, password resets, checking account balances, or checking the status of a transaction, as Vinita points out. From there, the use cases can evolve as the banks get more mature in developing conversational AI, and as the customers become more engaged with the solutions. Cathal indicates another good way for banks to start is looking at use cases that are a pain point, and also do not require a lot of IT support. Some financial institutions may have a multi-year technology roadmap, which can make it harder to get a new service started. A simple chatbot for document collection in an onboarding process can result in high engagement, and a high return on investment. For example, Cathal has a banking customer that implemented a chatbot to capture a driver’s license to be used in the verification process of adding an additional user to an account — it has over 85% engagement with high satisfaction. An interesting use case Haritha discovered involved educating customers on financial matters. People feel more comfortable asking a chatbot what might be considered a “dumb” question, as the chatbot is less judgmental. 
Users can be more ambiguous with their questions as well, not knowing the right words to use, as chatbot can help narrow things down. """ inputs = ["summarize: " + text] inputs = tokenizer(inputs, max_length=512, truncation=True, return_tensors="pt") output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=64) decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0] predicted_title = nltk.sent_tokenize(decoded_output.strip())[0] print(predicted_title) # Conversational AI: The Future of Customer Service ``` ## Training and evaluation data The model was trained for a single epoch on about 16,000 articles and evaluated on 1,000 random articles not used during training. ### Training results The model was evaluated on a random split of 1,000 articles not used during training and validation. - Rouge-1: 27.8% - Rouge-2: 14.9% - Rouge-L: 26.9% - Rouge-Lsum: 26.9% - Average length of the generated titles: 13 tokens (about 9 English words) ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
khanhld/wav2vec2-base-vietnamese-160h
0c8ad9977189dc089a5c5f55c7b6dbaa79d8c5fd
2022-05-13T14:13:49.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "vi", "dataset:vivos", "dataset:common_voice", "dataset:FOSD", "dataset:VLSP", "transformers", "audio", "speech", "Transformer", "vietnamese", "license:cc-by-nc-4.0", "model-index" ]
automatic-speech-recognition
false
khanhld
null
khanhld/wav2vec2-base-vietnamese-160h
65
1
transformers
5,517
--- language: vi datasets: - vivos - common_voice - FOSD - VLSP metrics: - wer pipeline_tag: automatic-speech-recognition tags: - audio - speech - Transformer - wav2vec2 - automatic-speech-recognition - vietnamese license: cc-by-nc-4.0 widget: - example_title: common_voice_vi_30519758.mp3 src: https://huggingface.co/khanhld/wav2vec2-base-vietnamese-160h/raw/main/examples/common_voice_vi_30519758.mp3 - example_title: VIVOSDEV15_020.wav src: https://huggingface.co/khanhld/wav2vec2-base-vietnamese-160h/raw/main/examples/VIVOSDEV15_020.wav model-index: - name: Wav2vec2 Base Vietnamese 160h results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: common-voice-vietnamese type: common_voice args: vi metrics: - name: Test WER type: wer value: 10.78 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: VIVOS type: vivos args: vi metrics: - name: Test WER type: wer value: 15.05 --- [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/wav2vec2-base-vietnamese-160h/speech-recognition-on-common-voice-vi)](https://paperswithcode.com/sota/speech-recognition-on-common-voice-vi?p=wav2vec2-base-vietnamese-160h) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/wav2vec2-base-vietnamese-160h/speech-recognition-on-vivos)](https://paperswithcode.com/sota/speech-recognition-on-vivos?p=wav2vec2-base-vietnamese-160h) # Vietnamese Speech Recognition using Wav2vec 2.0 ### Table of contents 1. [Model Description](#description) 2. [Implementation](#implementation) 3. [Benchmark Result](#benchmark) 4. [Example Usage](#example) 5. [Evaluation](#evaluation) 6. [Citation](#citation) 7. [Contact](#contact) <a name = "description" ></a> ### Model Description Fine-tuned the Wav2vec2-based model on about 160 hours of Vietnamese speech dataset from different resources, including [VIOS](https://huggingface.co/datasets/vivos), [COMMON VOICE](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VLSP 100h](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view). We have not yet incorporated the Language Model into our ASR system but still gained a promising result. <a name = "implementation" ></a> ### Implementation We also provide code for Pre-training and Fine-tuning the Wav2vec2 model. 
If you wish to train on your dataset, check it out here: - [Pre-train code](https://github.com/khanld/ASR-Wav2vec-Pretrain) (not available for now but will release soon) - [Fine-tune code](https://github.com/khanld/ASR-Wa2vec-Finetune) <a name = "benchmark" ></a> ### Benchmark WER Result | | [VIVOS](https://huggingface.co/datasets/vivos) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) | |---|---|---| |without LM| 15.05 | 10.78 | |with LM| in progress | in progress | <a name = "example" ></a> ### Example Usage [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1blz1KclnIfbOp8o2fW3WJgObOQ9SMGBo?usp=sharing) ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC import librosa import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h") model = Wav2Vec2ForCTC.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h") model.to(device) def transcribe(wav): input_values = processor(wav, sampling_rate=16000, return_tensors="pt").input_values logits = model(input_values.to(device)).logits pred_ids = torch.argmax(logits, dim=-1) pred_transcript = processor.batch_decode(pred_ids)[0] return pred_transcript wav, _ = librosa.load('path/to/your/audio/file', sr = 16000) print(f"transcript: {transcribe(wav)}") ``` <a name = "evaluation"></a> ### Evaluation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1XQCq4YGLnl23tcKmYeSwaksro4IgC_Yi?usp=sharing) ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch import re from datasets import load_dataset, load_metric, Audio wer = load_metric("wer") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # load processor and model processor = Wav2Vec2Processor.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h") model = Wav2Vec2ForCTC.from_pretrained("khanhld/wav2vec2-base-vietnamese-160h") model.to(device) model.eval() # Load dataset test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "vi", split="test", use_auth_token="your_huggingface_auth_token") test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16000)) chars_to_ignore = r'[,?.!\-;:"“%\'�]' # ignore special characters # preprocess data def preprocess(batch): audio = batch["audio"] batch["input_values"] = audio["array"] batch["transcript"] = re.sub(chars_to_ignore, '', batch["sentence"]).lower() return batch # run inference def inference(batch): input_values = processor(batch["input_values"], sampling_rate=16000, return_tensors="pt").input_values logits = model(input_values.to(device)).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_transcript"] = processor.batch_decode(pred_ids) return batch test_dataset = test_dataset.map(preprocess) result = test_dataset.map(inference, batched=True, batch_size=1) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_transcript"], references=result["transcript"]))) ``` **Test Result**: 10.78% <a name = "citation" ></a> ### Citation [![DOI](https://zenodo.org/badge/491468343.svg)](https://zenodo.org/badge/latestdoi/491468343) <strong>BibTeX</strong> ``` @mics{Duy_Khanh_Finetune_Wav2vec_2_0_2022, author = {Duy Khanh, Le}, doi = {10.5281/zenodo.6542357}, license = {CC-BY-NC-4.0}, month = {5}, title = {{Finetune Wav2vec 2.0 For Vietnamese Speech 
Recognition}}, url = {https://github.com/khanld/ASR-Wa2vec-Finetune}, year = {2022} } ``` <strong>APA</strong> ``` Duy Khanh, L. (2022). Finetune Wav2vec 2.0 For Vietnamese Speech Recognition [Data set]. https://doi.org/10.5281/zenodo.6542357 ``` <a name = "contact"></a> ### Contact - [email protected] - [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/) - [![LinkedIn](https://img.shields.io/badge/linkedin-%230077B5.svg?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/khanhld257/)
tolmanneo/convo-gpt-j-6b
1f275b63604eaa09629e494fb83ba796c1993133
2022-06-09T16:08:57.000Z
[ "pytorch", "gptj", "text-generation", "transformers", "conversational" ]
conversational
false
tolmanneo
null
tolmanneo/convo-gpt-j-6b
65
null
transformers
5,518
--- tags: conversational ---
tolmanneo/convo-gpt-j-6b-10x
c5a51f30a7ac057fc844f1115c897471c8ebad86
2022-06-09T15:44:39.000Z
[ "pytorch", "gptj", "text-generation", "transformers", "conversational" ]
conversational
false
tolmanneo
null
tolmanneo/convo-gpt-j-6b-10x
65
null
transformers
5,519
--- tags: - conversational ---
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_all_TEST_null__second_train_set_NULL_False
33f750518ad0ba3a89ab96dd58c9ae2299cd114f
2022-06-16T09:29:42.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
ali2066
null
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_all_TEST_null__second_train_set_NULL_False
65
null
transformers
5,520
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased_token_itr0_0.0001_TRAIN_all_TEST_null__second_train_set_NULL_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_token_itr0_0.0001_TRAIN_all_TEST_null__second_train_set_NULL_False This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0650 - Precision: 0.9847 - Recall: 0.9864 - F1: 0.9856 - Accuracy: 0.9719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 25 | 0.2530 | 0.9106 | 0.8321 | 0.8696 | 0.7793 | | No log | 2.0 | 50 | 0.1882 | 0.9855 | 0.6891 | 0.8111 | 0.7116 | | No log | 3.0 | 75 | 0.1879 | 0.9467 | 0.7173 | 0.8162 | 0.7105 | | No log | 4.0 | 100 | 0.1987 | 0.9567 | 0.7108 | 0.8156 | 0.7120 | | No log | 5.0 | 125 | 0.1949 | 0.9511 | 0.7136 | 0.8154 | 0.7105 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
luoyixin/marian-finetuned-kde4-en-to-zh
575d97dab284cba5ff1cc0202ac4a9a4f9a99ab6
2022-06-20T10:13:41.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:kde4", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
luoyixin
null
luoyixin/marian-finetuned-kde4-en-to-zh
65
null
transformers
5,521
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-zh_CN metrics: - name: Bleu type: bleu value: 40.678005282996 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-zh This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9338 - Bleu: 40.6780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Yehor/wav2vec2-xls-r-base-uk-with-small-lm
b958c7f8aecb6ad9de8702c2ec63462a3d02b62c
2022-07-30T07:00:59.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "uk", "transformers", "license:cc-by-nc-sa-4.0" ]
automatic-speech-recognition
false
Yehor
null
Yehor/wav2vec2-xls-r-base-uk-with-small-lm
65
null
transformers
5,522
--- language: - uk license: "cc-by-nc-sa-4.0" --- **NOTE**: A better model is available at https://huggingface.co/Yehor/wav2vec2-xls-r-base-uk-with-cv-lm 🇺🇦 Join the Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk ⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk This model was trained from the base model https://huggingface.co/fav-kky/wav2vec2-base-cs-80k-ClTRUS (pre-trained on 80 thousand hours of Czech speech). The model's vocabulary includes apostrophes and hyphens. Metrics: Without LM: - WER: 0.4745 - CER: 0.1104 --- SMALL LM (this repository): - WER: 0.303 - CER: 0.0818 WIKI LM (https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-wiki-lm/tree/main/language_model): - WER: 0.2807 - CER: 0.0785 NEWS LM (https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-news-lm/tree/main/language_model): - WER: 0.2633 - CER: 0.0753
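The card above does not include an inference snippet, so here is a minimal greedy-decoding sketch. It assumes the standard wav2vec2 CTC interface in `transformers` and ignores the bundled small language model (LM-boosted decoding would additionally require `pyctcdecode`); the 16 kHz audio path is a placeholder.

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Yehor/wav2vec2-xls-r-base-uk-with-small-lm"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load a 16 kHz mono waveform (path is a placeholder)
speech, _ = librosa.load("path/to/ukrainian_audio.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy (argmax) decoding without the language model
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```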
WindowsRegedit/zuowen
b35bd2ea0489d52b0af56cbcfd84a198d8c55df6
2022-06-23T12:47:18.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
WindowsRegedit
null
WindowsRegedit/zuowen
65
null
transformers
5,523
### Essay-writing model (作文模型) For usage instructions, please refer to the [zuowen automatic essay-writing Python library](https://github.com/WindowsRegedit/zuowen)
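Since the card itself shows no code, here is an illustrative generation sketch using the `transformers` text-generation pipeline. The prompt is a hypothetical essay title; the companion zuowen library may apply its own prompt formatting, so treat this only as a starting point.

```python
from transformers import pipeline

# Plain GPT-2 text generation; the zuowen library may wrap prompts differently.
generator = pipeline("text-generation", model="WindowsRegedit/zuowen")

prompt = "我的梦想"  # hypothetical essay title used as the prompt
outputs = generator(prompt, max_length=200, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```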
tner/bert-base-tweetner-2020
cccef3955fffead783edbbb495b5444db567bea5
2022-07-08T11:18:19.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/bert-base-tweetner-2020
65
null
transformers
5,524
Entry not found
Evelyn18/BECASV4.1
d02d756d758240bbf1bbab8f3f9810d36ce863b6
2022-07-19T03:34:08.000Z
[ "pytorch", "roberta", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/BECASV4.1
65
null
transformers
5,525
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: roberta-base-spanish-squades-modelo-robertav1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-spanish-squades-modelo-robertav1 This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 2.7840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 1.9356 | | No log | 2.0 | 10 | 1.7489 | | No log | 3.0 | 15 | 2.0573 | | No log | 4.0 | 20 | 2.3975 | | No log | 5.0 | 25 | 2.6796 | | No log | 6.0 | 30 | 2.7238 | | No log | 7.0 | 35 | 2.7616 | | No log | 8.0 | 40 | 2.7840 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Amrrs/wav2vec2-large-xlsr-53-tamil
017e2f0cf4b196d1656e310c87403fe446e39581
2021-07-05T14:14:42.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "ta", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Amrrs
null
Amrrs/wav2vec2-large-xlsr-53-tamil
64
1
transformers
5,526
--- language: ta datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Tamil by Amrrs results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ta type: common_voice args: ta metrics: - name: Test WER type: wer value: 82.94 --- # Wav2Vec2-Large-XLSR-53-Tamil Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ta", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ta", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets.
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 82.94 % ## Training The Common Voice `train` and `validation` splits were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
Helsinki-NLP/opus-mt-pap-en
38a0afea7e9b1b124d2ec116e42d8a55a0bfc884
2021-09-10T14:00:36.000Z
[ "pytorch", "marian", "text2text-generation", "pap", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-pap-en
64
null
transformers
5,527
--- tags: - translation license: apache-2.0 --- ### opus-mt-pap-en * source languages: pap * target languages: en * OPUS readme: [pap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pap.en | 47.3 | 0.634 | | Tatoeba.pap.en | 63.2 | 0.684 |
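## Example usage

The card lists download links and benchmarks but no inference snippet, so here is a minimal sketch using the standard Marian interface in `transformers`. The Papiamento source sentence is purely illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-pap-en"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Papiamento -> English (example sentence is illustrative only)
src_text = ["Kon ta bai?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```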
HooshvareLab/gpt2-fa-poetry
35d19a3dba707d9d1ca075e35340f0477c8d9e0b
2021-05-21T10:50:14.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "fa", "transformers", "license:apache-2.0" ]
text-generation
false
HooshvareLab
null
HooshvareLab/gpt2-fa-poetry
64
null
transformers
5,528
--- language: fa license: apache-2.0 widget: - text: "<s>رودکی<|startoftext|>" - text: "<s>فردوسی<|startoftext|>" - text: "<s>خیام<|startoftext|>" - text: "<s>عطار<|startoftext|>" - text: "<s>نظامی<|startoftext|>" --- # Persian Poet GPT2 ## Poets The model can generate poetry based on your favorite poet, and you need to add one of the following lines as the input the box on the right side or follow the [fine-tuning notebook](https://colab.research.google.com/github/hooshvare/parsgpt/blob/master/notebooks/Persian_Poetry_FineTuning.ipynb). ```text <s>رودکی<|startoftext|> <s>فردوسی<|startoftext|> <s>کسایی<|startoftext|> <s>ناصرخسرو<|startoftext|> <s>منوچهری<|startoftext|> <s>فرخی سیستانی<|startoftext|> <s>مسعود سعد سلمان<|startoftext|> <s>ابوسعید ابوالخیر<|startoftext|> <s>باباطاهر<|startoftext|> <s>فخرالدین اسعد گرگانی<|startoftext|> <s>اسدی توسی<|startoftext|> <s>هجویری<|startoftext|> <s>خیام<|startoftext|> <s>نظامی<|startoftext|> <s>عطار<|startoftext|> <s>سنایی<|startoftext|> <s>خاقانی<|startoftext|> <s>انوری<|startoftext|> <s>عبدالواسع جبلی<|startoftext|> <s>نصرالله منشی<|startoftext|> <s>مهستی گنجوی<|startoftext|> <s>باباافضل کاشانی<|startoftext|> <s>مولوی<|startoftext|> <s>سعدی<|startoftext|> <s>خواجوی کرمانی<|startoftext|> <s>عراقی<|startoftext|> <s>سیف فرغانی<|startoftext|> <s>حافظ<|startoftext|> <s>اوحدی<|startoftext|> <s>شیخ محمود شبستری<|startoftext|> <s>عبید زاکانی<|startoftext|> <s>امیرخسرو دهلوی<|startoftext|> <s>سلمان ساوجی<|startoftext|> <s>شاه نعمت‌الله ولی<|startoftext|> <s>جامی<|startoftext|> <s>هلالی جغتایی<|startoftext|> <s>وحشی<|startoftext|> <s>محتشم کاشانی<|startoftext|> <s>شیخ بهایی<|startoftext|> <s>عرفی<|startoftext|> <s>رضی‌الدین آرتیمانی<|startoftext|> <s>صائب تبریزی<|startoftext|> <s>فیض کاشانی<|startoftext|> <s>بیدل دهلوی<|startoftext|> <s>هاتف اصفهانی<|startoftext|> <s>فروغی بسطامی<|startoftext|> <s>قاآنی<|startoftext|> <s>ملا هادی سبزواری<|startoftext|> <s>پروین اعتصامی<|startoftext|> <s>ملک‌الشعرای بهار<|startoftext|> <s>شهریار<|startoftext|> <s>رهی معیری<|startoftext|> <s>اقبال لاهوری<|startoftext|> <s>خلیل‌الله خلیلی<|startoftext|> <s>شاطرعباس صبوحی<|startoftext|> <s>نیما یوشیج ( آوای آزاد )<|startoftext|> <s>احمد شاملو<|startoftext|> <s>سهراب سپهری<|startoftext|> <s>فروغ فرخزاد<|startoftext|> <s>سیمین بهبهانی<|startoftext|> <s>مهدی اخوان ثالث<|startoftext|> <s>محمدحسن بارق شفیعی<|startoftext|> <s>شیون فومنی<|startoftext|> <s>کامبیز صدیقی کسمایی<|startoftext|> <s>بهرام سالکی<|startoftext|> <s>عبدالقهّار عاصی<|startoftext|> <s>اِ لیـــار (جبار محمدی )<|startoftext|> ``` ## Questions? Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo.
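## Example generation (illustrative)

A minimal sketch of how one might generate with this model in `transformers`, using the poet-prompt format documented above. The sampling parameters are illustrative choices, not values taken from the original fine-tuning notebook.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="HooshvareLab/gpt2-fa-poetry")

# Prompt format from the list above: <s> + poet name + <|startoftext|>
prompt = "<s>حافظ<|startoftext|>"
outputs = generator(prompt, max_length=128, do_sample=True, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```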
Irina/fantasy_GPT3Medium
e88c9457179f14b78966559711a4f6039bb31b11
2021-11-26T05:29:14.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Irina
null
Irina/fantasy_GPT3Medium
64
null
transformers
5,529
Entry not found
alireza7/PEGASUS-persian-base-voa-title
314282bf0721fcebe6260a8a6f6da340c15750ac
2021-09-29T19:26:07.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
alireza7
null
alireza7/PEGASUS-persian-base-voa-title
64
null
transformers
5,530
More information about models is available [here](https://github.com/alirezasalemi7/ARMAN).
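A minimal usage sketch (not from the original repository): the model name suggests it generates Persian news headlines from VOA article text, so the sketch below feeds Persian body text to the generic seq2seq generation API. The input string is only a placeholder, and the generation parameters are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "alireza7/PEGASUS-persian-base-voa-title"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "متن خبر فارسی در اینجا قرار می‌گیرد."  # placeholder Persian article text
inputs = tokenizer(article, return_tensors="pt", truncation=True)
title_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```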
banjtheman/distilbert-base-uncased-helpful-amazon
53364fd30e3bfa78a706b9701cc31ba218c7ff60
2022-02-04T21:22:32.000Z
[ "pytorch", "distilbert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
banjtheman
null
banjtheman/distilbert-base-uncased-helpful-amazon
64
null
transformers
5,531
--- license: apache-2.0 --- ## Overview This model was trained with data from https://registry.opendata.aws/helpful-sentences-from-reviews/ to predict how "helpful" a review is. The model was fine-tuned from the `distilbert-base-uncased` model ### Labels LABEL_0 - Not helpful LABEL_1 - Helpful ### How to use The following code shows how to make a prediction with this model ```python from transformers import ( AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline, ) tokenizer = AutoTokenizer.from_pretrained("banjtheman/distilbert-base-uncased-helpful-amazon") model = AutoModelForSequenceClassification.from_pretrained( "banjtheman/distilbert-base-uncased-helpful-amazon" ) pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer) result = pipe("This was a Christmas gift for my grandson.") print(result) #[{'label': 'LABEL_0', 'score': 0.998775064945221}] # This is NOT A HELPFUL comment ```
castorini/monobert-large-msmarco
0a97706f3827389da43b83348d5d18c9d53876fa
2020-05-29T03:41:44.000Z
[ "pytorch", "transformers" ]
null
false
castorini
null
castorini/monobert-large-msmarco
64
null
transformers
5,532
Entry not found
ensamblador/gpt2-twitter-politico
768a296746bf63b6f6bd9960c50a0e696622f74b
2021-05-21T15:54:38.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
ensamblador
null
ensamblador/gpt2-twitter-politico
64
null
transformers
5,533
Entry not found
gagan3012/wav2vec2-xlsr-khmer
2b626a577ac629a05d1ab01ac5ba3ad14740de54
2021-07-06T03:58:05.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "km", "dataset:OpenSLR", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
gagan3012
null
gagan3012/wav2vec2-xlsr-khmer
64
null
transformers
5,534
--- language: km datasets: - OpenSLR - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: wav2vec2-xlsr-Khmer by Gagan Bhatia results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: OpenSLR km type: OpenSLR args: km metrics: - name: Test WER type: wer value: 24.96 --- # Wav2Vec2-Large-XLSR-53-khmer Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Khmer using the [Common Voice](https://huggingface.co/datasets/common_voice) and [OpenSLR Kh](http://www.openslr.org/42/) datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio import pandas as pd from sklearn.model_selection import train_test_split from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor !wget https://www.openslr.org/resources/42/km_kh_male.zip !unzip km_kh_male.zip !ls km_kh_male colnames=['path','sentence'] df = pd.read_csv('/content/km_kh_male/line_index.tsv', sep='\t', header=None, names=colnames) df['path'] = '/content/km_kh_male/wavs/'+df['path'] +'.wav' train, test = train_test_split(df, test_size=0.1) test.to_csv('/content/km_kh_male/line_index_test.csv') test_dataset = load_dataset('csv', data_files='/content/km_kh_male/line_index_test.csv', split='train') processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-nepali") model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-nepali") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` #### Result Prediction: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ'] Reference: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ'] ## Evaluation The model can be evaluated as follows on the Khmer test data.
```python import torch import torchaudio import re import pandas as pd from sklearn.model_selection import train_test_split from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor !wget https://www.openslr.org/resources/42/km_kh_male.zip !unzip km_kh_male.zip !ls km_kh_male colnames=['path','sentence'] df = pd.read_csv('/content/km_kh_male/line_index.tsv', sep='\t', header=None, names=colnames) df['path'] = '/content/km_kh_male/wavs/'+df['path'] +'.wav' train, test = train_test_split(df, test_size=0.1) test.to_csv('/content/km_kh_male/line_index_test.csv') test_dataset = load_dataset('csv', data_files='/content/km_kh_male/line_index_test.csv', split='train') wer = load_metric("wer") cer = load_metric("cer") processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-khmer") model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-khmer") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 24.96 % WER: 24.962519 CER: 6.950925 ## Training The script used for training can be found [here](https://colab.research.google.com/drive/1yo_OTMH8FHQrAKCkKdQGMqpkj-kFhS_2?usp=sharing)
huggingtweets/dogdick420cum
5d7a5f21ba570ac5f7e530167238886a77aee20e
2021-05-22T01:55:30.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/dogdick420cum
64
null
transformers
5,535
--- language: en thumbnail: https://www.huggingtweets.com/dogdick420cum/1615429013878/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359563991467646982/9ZPrurHY_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">🥞 🤖 AI Bot </div> <div style="font-size: 15px">@dogdick420cum bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@dogdick420cum's tweets](https://twitter.com/dogdick420cum). | Data | Quantity | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 148 | | Short tweets | 512 | | Tweets kept | 2582 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nzltah4f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dogdick420cum's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29pe6wy0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29pe6wy0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dogdick420cum') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lighteternal/nli-xlm-r-greek
7fabdac688edc6007f9f5adf212bdb8e2deeb752
2021-09-21T16:01:42.000Z
[ "pytorch", "xlm-roberta", "text-classification", "el", "en", "dataset:multi_nli", "dataset:snli", "dataset:allnli_greek", "arxiv:1908.10084", "transformers", "xlm-roberta-base", "license:apache-2.0", "zero-shot-classification" ]
zero-shot-classification
false
lighteternal
null
lighteternal/nli-xlm-r-greek
64
null
transformers
5,536
--- language: - el - en tags: - xlm-roberta-base datasets: - multi_nli - snli - allnli_greek metrics: - accuracy pipeline_tag: zero-shot-classification widget: - text: "Η Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας." candidate_labels: "τεχνολογία, πολιτική, αθλητισμός" multi_class: false license: apache-2.0 --- # Cross-Encoder for Greek Natural Language Inference (Textual Entailment) & Zero-Shot Classification ## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data The model was trained on the the combined Greek+English version of the AllNLI dataset(sum of [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)). The Greek part was created using the EN2EL NMT model available [here](https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased). The model can be used in two ways: * NLI/Textual Entailment: For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. * Zero-shot classification through the Huggingface pipeline: Given a sentence and a set of labels/topics, it will output the likelihood of the sentence belonging to each of the topic. Under the hood, the logit for entailment between the sentence and each label is taken as the logit for the candidate label being valid. ## Performance Evaluation on classification accuracy (entailment, contradiction, neutral) on mixed (Greek+English) AllNLI-dev set: | Metric | Value | | --- | --- | | Accuracy | 0.8409 | ## To use the model for NLI/Textual Entailment #### Usage with sentence_transformers Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('lighteternal/nli-xlm-r-greek') scores = model.predict([('Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'), ('Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο'), ('Δυο γυναίκες μιλάνε στο κινητό', 'Το τραπέζι ήταν πράσινο')]) #Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] print(scores, labels) # Οutputs #[[-3.1526504 2.9981945 -0.3108107] # [ 5.0549307 -2.757949 -1.6220676] # [-0.5124733 -2.2671669 3.1630592]] ['entailment', 'contradiction', 'neutral'] ``` #### Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('lighteternal/nli-xlm-r-greek') tokenizer = AutoTokenizer.from_pretrained('lighteternal/nli-xlm-r-greek') features = tokenizer(['Δύο άνθρωποι συναντιούνται στο δρόμο', 'Ο δρόμος έχει κόσμο'], ['Ένα μαύρο αυτοκίνητο ξεκινάει στη μέση του πλήθους.', 'Ένας άντρας οδηγάει σε ένα μοναχικό δρόμο.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## To use the model for Zero-Shot Classification This model can also be used for zero-shot-classification: ```python from 
transformers import pipeline classifier = pipeline("zero-shot-classification", model='lighteternal/nli-xlm-r-greek') sent = "Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας" candidate_labels = ["πολιτική", "τεχνολογία", "αθλητισμός"] res = classifier(sent, candidate_labels) print(res) #outputs: #{'sequence': 'Το Facebook κυκλοφόρησε τα πρώτα «έξυπνα» γυαλιά επαυξημένης πραγματικότητας', 'labels': ['τεχνολογία', 'αθλητισμός', 'πολιτική'], 'scores': [0.8380699157714844, 0.09086982160806656, 0.07106029987335205]} ``` ### Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call) ### Citation info Citation for the Greek model TBA. Based on the work [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) Kudos to @nreimers (Nils Reimers) for his support on Github .
lkwate/legal-bigbird-us
a67b19db8f36aa5a8d662aadb739afff12e2405a
2021-08-21T22:38:29.000Z
[ "pytorch", "big_bird", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
lkwate
null
lkwate/legal-bigbird-us
64
1
transformers
5,537
Entry not found
mrm8488/T5-base-finetuned-cuad
b26f9f3466cd0b250a9a813e8da4bd4a393f5ece
2021-12-28T20:13:19.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:cuad", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/T5-base-finetuned-cuad
64
2
transformers
5,538
--- language: - en license: mit tags: - generated_from_trainer datasets: - cuad model-index: - name: T5-base-cuad-512 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5-base fine-tuned on CUAD for Legal Contract Review (via QA) This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the cuad dataset. It achieves the following results on the evaluation set: - Loss: 0.2209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.2809 | 1.0 | 2795 | 0.2331 | | 0.2459 | 2.0 | 5590 | 0.2253 | | 0.2355 | 3.0 | 8385 | 0.2220 | | 0.2212 | 4.0 | 11180 | 0.2203 | | 0.2068 | 5.0 | 13975 | 0.2197 | | 0.2085 | 6.0 | 16770 | 0.2194 | | 0.1968 | 7.0 | 19565 | 0.2199 | | 0.1906 | 8.0 | 22360 | 0.2200 | | 0.1909 | 9.0 | 25155 | 0.2208 | | 0.1788 | 10.0 | 27950 | 0.2209 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
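## Example usage (illustrative)

The card does not document the exact prompt format used during fine-tuning, so the snippet below is only a structural sketch: it loads the checkpoint with the generic seq2seq API and assumes a `question: ... context: ...` style input, which may differ from the format actually used on CUAD. Verify against the training script before relying on it.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/T5-base-finetuned-cuad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prompt layout -- adjust to the format used during fine-tuning.
question = "Highlight the parts related to the governing law."
context = "This Agreement shall be governed by the laws of the State of New York."
prompt = f"question: {question} context: {context}"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```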
mrm8488/t5-base-finetuned-news-titles-classification
e25716f46582a471dacb76058bbc7b7ca67af95c
2021-06-23T12:52:30.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-news-titles-classification
64
null
transformers
5,539
Entry not found
patrickvonplaten/bert2bert-tiny
ca68b0275d9094f9a582a543519119f6f287239b
2020-10-18T19:23:27.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
patrickvonplaten
null
patrickvonplaten/bert2bert-tiny
64
null
transformers
5,540
Entry not found
tanay/layoutlm-funsd
0a0442220e4d47fb4373c8d6911542706ec0f2e8
2021-06-30T07:21:21.000Z
[ "pytorch", "layoutlm", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tanay
null
tanay/layoutlm-funsd
64
null
transformers
5,541
Entry not found
turing-usp/FinBertPTBR
109ec9242690581dad9458abd9fef57a71d77d38
2021-12-03T01:30:42.000Z
[ "pytorch", "bert", "text-classification", "pt", "transformers", "license:apache-2.0" ]
text-classification
false
turing-usp
null
turing-usp/FinBertPTBR
64
2
transformers
5,542
--- language: pt license: apache-2.0 widget: - text: "O futuro de DI caiu 20 bps nesta manhã" example_title: "Example 1" - text: "O Nubank decidiu cortar a faixa de preço da oferta pública inicial (IPO) após revés no humor dos mercados internacionais com as fintechs." example_title: "Example 2" - text: "O Ibovespa acompanha correção do mercado e fecha com alta moderada" example_title: "Example 3" --- # FinBertPTBR : Financial Bert PT BR FinBertPTBR is a pre-trained NLP model to analyze sentiment of Brazilian Portuguese financial texts. It is built by further training the BERTimbau language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. ## Usage ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("turing-usp/FinBertPTBR") model = AutoModel.from_pretrained("turing-usp/FinBertPTBR") ``` ## Authors - [Vinicius Carmo](https://www.linkedin.com/in/vinicius-cleves/) - [Julia Pocciotti](https://www.linkedin.com/in/juliapocciotti/) - [Luísa Heise](https://www.linkedin.com/in/lu%C3%ADsa-mendes-heise/) - [Lucas Leme](https://www.linkedin.com/in/lucas-leme-santos/)
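The usage snippet above loads the bare encoder; for the sentiment classification this card describes, the sequence-classification head can be loaded via the pipeline instead. A minimal sketch follows (the label names returned depend on the checkpoint's config and are not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="turing-usp/FinBertPTBR")

# Example sentence taken from the widget above
result = classifier("O Ibovespa acompanha correção do mercado e fecha com alta moderada")
print(result)  # e.g. [{'label': ..., 'score': ...}]; label names come from the model config
```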
vasudevgupta/mbart-bhasha-hin-eng
14aee5dd763df50ed065a1e9bc8d1650fe9ff2db
2021-05-12T03:36:02.000Z
[ "pytorch", "mbart", "text2text-generation", "dataset:pib", "transformers", "autotrain_compatible" ]
text2text-generation
false
vasudevgupta
null
vasudevgupta/mbart-bhasha-hin-eng
64
null
transformers
5,543
--- datasets: pib widget: - text: "नमस्ते! मैं वासुदेव गुप्ता हूं" --- mBART (a pre-trained model by Facebook) is pre-trained to de-noise multiple languages simultaneously with the BART objective. The checkpoint in this repository was obtained by fine-tuning `facebook/mbart-large-cc25` on all samples (~260K) from the Bhasha (pib_v1.3) Hindi-English parallel corpus. This checkpoint gives decent results for Hindi-English translation.
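A minimal translation sketch (not from the original author): it assumes the tokenizer saved with this checkpoint already handles the mBART language codes, so Hindi text can simply be tokenized and passed to `generate`. Verify the expected source/target language setup before relying on it.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vasudevgupta/mbart-bhasha-hin-eng"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hindi sentence taken from the widget above
hindi_text = "नमस्ते! मैं वासुदेव गुप्ता हूं"
inputs = tokenizer(hindi_text, return_tensors="pt")
generated = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```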
lighteternal/fact-or-opinion-xlmr-el
bf405d95926471dbf248ab13af8441d8d87b3da7
2022-02-27T19:41:57.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "text-classification", "en", "el", "multilingual", "transformers", "fact-or-opinion", "license:apache-2.0" ]
text-classification
false
lighteternal
null
lighteternal/fact-or-opinion-xlmr-el
64
null
transformers
5,544
--- language: - en - el - multilingual tags: - text-classification - fact-or-opinion - transformers widget: - text: "Ξεχωρίζει η καθηλωτική ερμηνεία του πρωταγωνιστή." - text: "Η Ελλάδα είναι χώρα της Ευρώπης." - text: "Tolkien was an English writer" - text: "Tolkien is my favorite writer." pipeline_tag: text-classification license: apache-2.0 --- # Fact vs. opinion binary classifier, trained on a mixed EN-EL annotated corpus. ### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) This is an XLM-Roberta-base model with a binary classification head. Given a sentence, it can classify it either as a fact or an opinion based on its content. You can use this model in any of the XLM-R supported languages for the same task, taking advantage of its 0-shot learning capabilities. However, the model was trained only using English and Greek sentences. Legend of HuggingFace API labels: * Label 0: Opinion/Subjective sentence * Label 1: Fact/Objective sentence ## Dataset training info The original dataset (available here: https://github.com/1024er/cbert_aug/tree/crayon/datasets/subj) contained aprox. 9000 annotated sentences (classified as subjective or objective). It was translated to Greek using Google Translate. The Greek version was then concatenated with the original English one to create the mixed EN-EL dataset. The model was trained for 5 epochs, using batch size = 8. Detailed metrics and hyperparameters available on the "Metrics" tab. ## Evaluation Results on test set | accuracy | precision | recall | f1 | | ----------- | ----------- | ----------- | ----------- | |0.952 | 0.945 | 0.960 | 0.952 | ## Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
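## Example usage (illustrative)

A minimal classification sketch using the standard `transformers` pipeline; the label mapping follows the legend above (Label 0 = opinion/subjective, Label 1 = fact/objective), and the example sentences are taken from the widget.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lighteternal/fact-or-opinion-xlmr-el")

label_map = {"LABEL_0": "opinion/subjective", "LABEL_1": "fact/objective"}

for sentence in ["Tolkien was an English writer", "Tolkien is my favorite writer."]:
    pred = classifier(sentence)[0]
    print(sentence, "->", label_map.get(pred["label"], pred["label"]), round(pred["score"], 3))
```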
grayson124/chatbotwaifu
f5f60f481106748b56ced7af5b696b16249ba5dd
2022-03-01T02:49:54.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
grayson124
null
grayson124/chatbotwaifu
64
null
transformers
5,545
--- tags: - conversational --- #waifu bot
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_test_set_05_03_2022-05_56_32
1bee69d0d0d1f4fb6ecc8b62693ca3889d5b4f41
2022-03-05T04:58:11.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ali2066
null
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_test_set_05_03_2022-05_56_32
64
null
transformers
5,546
Entry not found
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_essays_05_03_2022-06_16_34
4d2afbc201073279ce4d2f56c02bb4c71b31f1f2
2022-03-05T05:18:59.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ali2066
null
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_essays_05_03_2022-06_16_34
64
null
transformers
5,547
Entry not found
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_editorials_TEST_editorials_05_03_2022-06_24_13
d23e9f1ed74b383fc7deb0e5e9eec9b83a913cc4
2022-03-05T05:26:43.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ali2066
null
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_editorials_TEST_editorials_05_03_2022-06_24_13
64
null
transformers
5,548
Entry not found
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_editorials_TEST_webDiscourse_05_03_2022-06_29_29
e34093958db3a32457de4358d97a2f13c4adc513
2022-03-05T05:31:06.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ali2066
null
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_editorials_TEST_webDiscourse_05_03_2022-06_29_29
64
null
transformers
5,549
Entry not found
meedan/paraphrase-filipino-mpnet-base-v2
20d8b8d26840bc2f9d47e06ad9f576fa9ef86af3
2022-05-11T09:50:47.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
meedan
null
meedan/paraphrase-filipino-mpnet-base-v2
64
null
sentence-transformers
5,550
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # paraphrase-filipino-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model was trained using the student--teacher approach outlined in [Reimers and Gurevych (2020)](https://aclanthology.org/2020.emnlp-main.365/). The teacher model was [sentence-transformers/paraphrase-mpnet-base-v2](), and the student model was [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](), which is based on XLM-R. We trained the model for 2 epoch using a batch size of 64 on parallel data English--Tagalog and English--Filipino data from OPUS. We found the data to be of variable quality and filtered it to only include sentence pairs that the Compact Language Detection kit (CLDv3) identified reliably as being in Tagalog or Filipino. Other parameters were left unchanged from the example [make_multilingual_sys.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/multilingual/make_multilingual_sys.py) code in the sentence-transformers code base. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer from scipy.spatial import distance import itertools model = SentenceTransformer('meedan/paraphrase-filipino-mpnet-base-v2') sentences = ["saan pong mga lugar available ang pfizer vaccine? Thank you!","Ask ko lang po saan meron available na vaccine","Where is the vaccine available?"] embeddings = model.encode(sentences) dist=[distance.cosine(i,j) for i,j in itertools.combinations(embeddings,2)] print(dist) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results We machine translated the STS data from [SentEval](https://github.com/facebookresearch/SentEval) to Filipino using the Google Translation API and used this for evaluation alongside the original English-language STS data. We used Spearman's rank correlation coefficient. We found roughly the same performance as the original base model (sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on English while substantial gains were made for Filipino. For English, the average correlation is 0.80. For Filipino, it is 0.75. For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 79097 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
avichr/Legal-heBERT
dd775034faa5d4668da364d21a396e1c51773ec1
2022-07-07T07:31:39.000Z
[ "pytorch", "bert", "fill-mask", "arxiv:1911.03090", "arxiv:2010.02559", "transformers", "autotrain_compatible" ]
fill-mask
false
avichr
null
avichr/Legal-heBERT
64
null
transformers
5,551
# Legal-HeBERT Legal-HeBERT is a BERT model for Hebrew legal and legislative domains. It is intended to improve legal NLP research and tool development in Hebrew. We release two versions of Legal-HeBERT. The first version is a fine-tuned model of [HeBERT](https://github.com/avichaychriqui/HeBERT) applied to legal and legislative documents. The second version uses [HeBERT](https://github.com/avichaychriqui/HeBERT)'s architecture guidelines to train a BERT model from scratch. <br> We continue to collect legal data, examine different architectural designs, and build tagged datasets and legal tasks for evaluating and developing Hebrew legal tools. ## Training Data Our training datasets are: | Name | Hebrew Description | Size (GB) | Documents | Sentences | Words | Notes | |-------------------------------------------------------------------------------------------------------------------------------- |-------------------------------------------------------------------------- |----------- |----------- |------------ |------------- |----------------------------------------- | | The Israeli Law Book | ספר החוקים הישראלי | 0.05 | 2338 | 293352 | 4851063 | | | Judgments of the Supreme Court | מאגר פסקי הדין של בית המשפט העליון | 0.7 | 212348 | 5790138 | 79672415 | | | custody courts | החלטות בתי הדין למשמורת | 2.46 | 169,708 | 8,555,893 | 213,050,492 | | | Law memoranda, drafts of secondary legislation and drafts of support tests that have been distributed to the public for comment | תזכירי חוק, טיוטות חקיקת משנה וטיוטות מבחני תמיכה שהופצו להערות הציבור | 0.4 | 3,291 | 294,752 | 7,218,960 | | | Supervisors of Land Registration judgments | מאגר פסקי דין של המפקחים על רישום המקרקעין | 0.02 | 559 | 67,639 | 1,785,446 | | | Decisions of the Labor Court - Corona | מאגר החלטות בית הדין לעניין שירות התעסוקה – קורונה | 0.001 | 146 | 3505 | 60195 | | | Decisions of the Israel Lands Council | החלטות מועצת מקרקעי ישראל | | 118 | 11283 | 162692 | aggregate file | | Judgments of the Disciplinary Tribunal and the Israel Police Appeals Tribunal | פסקי דין של בית הדין למשמעת ובית הדין לערעורים של משטרת ישראל | 0.02 | 54 | 83724 | 1743419 | aggregate files | | Disciplinary Appeals Committee in the Ministry of Health | ועדת ערר לדין משמעתי במשרד הבריאות | 0.004 | 252 | 21010 | 429807 | 465 scanned files could not be parsed | | Attorney General's Positions | מאגר התייצבויות היועץ המשפטי לממשלה | 0.008 | 281 | 32724 | 813877 | | | Legal-Opinion of the Attorney General | מאגר חוות דעת היועץ המשפטי לממשלה | 0.002 | 44 | 7132 | 188053 | | | | | | | | | | | total | | 3.665 | 389,139 | 15,161,152 | 309,976,419 | | We thank <b>Yair Gardin</b> for referring us to the governance data, <b>Elhanan Schwarts</b> for collecting and parsing The Israeli law book, and <b>Jonathan Schler</b> for collecting the judgments of the supreme court. ## Training process * Vocabulary size: 50,000 tokens * 4 epochs (1M steps±) * lr=5e-5 * mlm_probability=0.15 * batch size = 32 (for each gpu) * NVIDIA GeForce RTX 2080 TI + NVIDIA GeForce RTX 3090 (1 week training) ### Additional training settings: <b>Fine-tuned [HeBERT](https://github.com/avichaychriqui/HeBERT) model:</b> The first eight layers were frozen (as [Lee et al. (2019)](https://arxiv.org/abs/1911.03090) suggest)<br> <b>Legal-HeBERT trained from scratch:</b> The training process is similar to [HeBERT](https://github.com/avichaychriqui/HeBERT) and inspired by [Chalkidis et al.
(2020)](https://arxiv.org/abs/2010.02559) <br> ## How to use The models can be found on the Hugging Face Hub and can be fine-tuned for any downstream task: ``` # !pip install transformers==4.14.1 from transformers import AutoTokenizer, AutoModel # choose one of the following checkpoints: model_name = 'avichr/Legal-heBERT_ft' # for the fine-tuned HeBERT model model_name = 'avichr/Legal-heBERT' # for legal HeBERT model trained from scratch tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) from transformers import pipeline fill_mask = pipeline( "fill-mask", model=model_name, ) fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.") ``` ## Stay tuned! We are still working on our models and the datasets. We will edit this page as we progress. We are open to collaborations. ## If you use this model, please cite us as: Chriqui, Avihay, Yahav, Inbal and Bar-Siman-Tov, Ittai, Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts (June 27, 2022). Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4147127 ``` @article{chriqui2021hebert, title={Legal HeBERT: A BERT-based NLP Model for Hebrew Legal, Judicial and Legislative Texts}, author={Chriqui, Avihay, Yahav, Inbal and Bar-Siman-Tov, Ittai}, journal={SSRN preprint:4147127}, year={2022} } ``` ## Contact us [Avichay Chriqui](mailto:[email protected]), The Coller AI Lab <br> [Inbal yahav](mailto:[email protected]), The Coller AI Lab <br> [Ittai Bar-Siman-Tov](mailto:[email protected]), the BIU Innovation Lab for Law, Data-Science and Digital Ethics <br> Thank you, תודה, شكرا <br>
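## Example: freezing the first eight layers (sketch)

As a minimal sketch of the layer-freezing setting described in the training section above (not the authors' actual training code), the snippet below freezes encoder layers 0-7 of the fine-tuned checkpoint before any further fine-tuning. The choice of a masked-LM head and the `model.bert.encoder.layer` module path assume a standard Hugging Face BERT checkpoint.

```python
from transformers import AutoModelForMaskedLM

# Sketch only: freeze the first eight encoder layers before further fine-tuning.
model = AutoModelForMaskedLM.from_pretrained("avichr/Legal-heBERT_ft")

for layer in model.bert.encoder.layer[:8]:   # encoder layers 0-7
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters after freezing: {trainable:,}")
```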
ml6team/keyphrase-extraction-kbir-kpcrowd
04963560cd70e18a161b3f70e3d90eb97e13fcdc
2022-06-16T14:19:57.000Z
[ "pytorch", "roberta", "token-classification", "en", "dataset:midas/kpcrowd", "arxiv:2112.08547", "transformers", "keyphrase-extraction", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ml6team
null
ml6team/keyphrase-extraction-kbir-kpcrowd
64
null
transformers
5,552
--- language: en license: mit tags: - keyphrase-extraction datasets: - midas/kpcrowd metrics: - seqeval widget: - text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text." example_title: "Example 1" - text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks." example_title: "Example 2" model-index: - name: DeDeckerThomas/keyphrase-extraction-kbir-kpcrowd results: - task: type: keyphrase-extraction name: Keyphrase Extraction dataset: type: midas/kpcrowd name: kpcrowd metrics: - type: F1 (Seqeval) value: 0.427 name: F1 (Seqeval) - type: F1@M value: 0.335 name: F1@M --- # 🔑 Keyphrase Extraction Model: KBIR-KPCrowd Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳. Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. 
Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. ## 📓 Model Description This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [KPCrowd dataset](https://huggingface.co/datasets/midas/kpcrowd). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC). You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547). Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not. | Label | Description | | ----- | ------------------------------- | | B-KEY | At the beginning of a keyphrase | | I-KEY | Inside a keyphrase | | O | Outside a keyphrase | Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021). Sahrawat, Dhruva, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. "Keyphrase extraction as sequence labeling using contextualized embeddings." In European Conference on Information Retrieval, pp. 328-335. Springer, Cham, 2020. ## ✋ Intended Uses & Limitations ### 🛑 Limitations * This keyphrase extraction model is very dataset-specific. It's not recommended to use this model for other domains, but you are free to test it out. * Only works for English documents. * Large number of annotated keyphrases. * For a custom model, please consult the [training notebook]() for more information. ### ❓ How To Use ```python from transformers import ( TokenClassificationPipeline, AutoModelForTokenClassification, AutoTokenizer, ) from transformers.pipelines import AggregationStrategy import numpy as np # Define keyphrase extraction pipeline class KeyphraseExtractionPipeline(TokenClassificationPipeline): def __init__(self, model, *args, **kwargs): super().__init__( model=AutoModelForTokenClassification.from_pretrained(model), tokenizer=AutoTokenizer.from_pretrained(model), *args, **kwargs ) def postprocess(self, model_outputs): results = super().postprocess( model_outputs=model_outputs, aggregation_strategy=AggregationStrategy.SIMPLE, ) return np.unique([result.get("word").strip() for result in results]) ``` ```python # Load pipeline model_name = "ml6team/keyphrase-extraction-kbir-kpcrowd" extractor = KeyphraseExtractionPipeline(model=model_name) ``` ```python # Inference text = """ Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. 
Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. """.replace("\n", " ") keyphrases = extractor(text) print(keyphrases) ``` ``` # Output ['Artificial Intelligence' 'Classical' 'Keyphrase' 'Keyphrase extraction' 'classical' 'content' 'context' 'disadvantage' 'document' 'documents' 'extract' 'extraction' 'extraction process' 'frequency' 'human' 'humans' 'important' 'keyphrases' 'learning' 'linguistic' 'long-term' 'machine learning' 'meaning' 'methods' 'neural approaches' 'occurrence' 'process' 'quickly' 'semantic' 'statistical' 'technique' 'text' 'text analysis' 'understand' 'widely' 'words' 'work'] ``` ## 📚 Training Dataset [KPCrowd](https://huggingface.co/datasets/midas/kpcrowd) is a broadcast news transcription dataset consisting of 500 English broadcast news stories from 10 different categories (art and culture, business, crime, fashion, health, politics us, politics world, science, sports, technology) with 50 docs per category. This dataset is annotated by multiple annotators that were required to look at the same news story and assign a set of keyphrases from the text itself. You can find more information in the [paper](https://arxiv.org/abs/1306.4606). ## 👷‍♂️ Training Procedure For more in detail information, you can take a look at the [training notebook](). ### Training Parameters | Parameter | Value | | --------- | ------| | Learning Rate | 1e-4 | | Epochs | 50 | | Early Stopping Patience | 3 | ### Preprocessing The documents in the dataset are already preprocessed into list of words with the corresponding labels. The only thing that must be done is tokenization and the realignment of the labels so that they correspond with the right subword tokens. 
```python from datasets import load_dataset from transformers import AutoTokenizer # Labels label_list = ["B", "I", "O"] lbl2idx = {"B": 0, "I": 1, "O": 2} idx2label = {0: "B", 1: "I", 2: "O"} # Tokenizer tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR", add_prefix_space=True) max_length = 512 # Dataset parameters dataset_full_name = "midas/kpcrowd" dataset_subset = "raw" dataset_document_column = "document" dataset_biotags_column = "doc_bio_tags" def preprocess_fuction(all_samples_per_split): tokenized_samples = tokenizer.batch_encode_plus( all_samples_per_split[dataset_document_column], padding="max_length", truncation=True, is_split_into_words=True, max_length=max_length, ) total_adjusted_labels = [] for k in range(0, len(tokenized_samples["input_ids"])): prev_wid = -1 word_ids_list = tokenized_samples.word_ids(batch_index=k) existing_label_ids = all_samples_per_split[dataset_biotags_column][k] i = -1 adjusted_label_ids = [] for wid in word_ids_list: if wid is None: adjusted_label_ids.append(lbl2idx["O"]) elif wid != prev_wid: i = i + 1 adjusted_label_ids.append(lbl2idx[existing_label_ids[i]]) prev_wid = wid else: adjusted_label_ids.append( lbl2idx[ f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}" ] ) total_adjusted_labels.append(adjusted_label_ids) tokenized_samples["labels"] = total_adjusted_labels return tokenized_samples # Load dataset dataset = load_dataset(dataset_full_name, dataset_subset) # Preprocess dataset tokenized_dataset = dataset.map(preprocess_fuction, batched=True) ``` ### Postprocessing (Without Pipeline Function) If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed. ```python # Define post_process functions def concat_tokens_by_tag(keyphrases): keyphrase_tokens = [] for id, label in keyphrases: if label == "B": keyphrase_tokens.append([id]) elif label == "I": if len(keyphrase_tokens) > 0: keyphrase_tokens[len(keyphrase_tokens) - 1].append(id) return keyphrase_tokens def extract_keyphrases(example, predictions, tokenizer, index=0): keyphrases_list = [ (id, idx2label[label]) for id, label in zip( np.array(example["input_ids"]).squeeze().tolist(), predictions[index] ) if idx2label[label] in ["B", "I"] ] processed_keyphrases = concat_tokens_by_tag(keyphrases_list) extracted_kps = tokenizer.batch_decode( processed_keyphrases, skip_special_tokens=True, clean_up_tokenization_spaces=True, ) return np.unique([kp.strip() for kp in extracted_kps]) ``` ## 📝 Evaluation results Traditional evaluation methods are the precision, recall and F1-score @k,m where k is the number that stands for the first k predicted keyphrases and m for the average amount of predicted keyphrases. The model achieves the following results on the Inspec test set: | Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | |:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:| | Inspec Test Set | 0.47 | 0.07 | 0.12 | 0.46 | 0.13 | 0.20 | 0.37 | 0.33 | 0.33 | For more information on the evaluation process, you can take a look at the keyphrase extraction [evaluation notebook](). ## 🚨 Issues Please feel free to start discussions in the Community Tab.
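## Post-Processing Usage (Sketch)

The post-processing section above defines `concat_tokens_by_tag` and `extract_keyphrases` but does not show them in action. The sketch below is an illustration, not part of the original card: it assumes those helpers and the `idx2label` mapping from the preprocessing section are already defined in scope, and that the model's label ids follow the same 0=B, 1=I, 2=O order used during training.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Sketch: feed raw model predictions to the post-processing helpers defined above.
model_name = "ml6team/keyphrase-extraction-kbir-kpcrowd"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

text = "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document."
encoded = tokenizer(text, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoded).logits          # shape: (1, seq_len, num_labels)

predictions = logits.argmax(dim=-1).tolist()  # per-token label ids, one list per example
example = {"input_ids": encoded["input_ids"]}

print(extract_keyphrases(example, predictions, tokenizer, index=0))
```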
RonEliav/QA_discourse
b892506b6a932bbfc88437e67391895d1a3ca937
2022-07-07T20:14:43.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
RonEliav
null
RonEliav/QA_discourse
64
1
transformers
5,553
--- license: afl-3.0 ---
Gunulhona/tbsentmodel_v1
03cbb401984398fc6ec2021621e66cefd123ef39
2022-06-22T07:13:09.000Z
[ "pytorch", "bart", "feature-extraction", "transformers" ]
feature-extraction
false
Gunulhona
null
Gunulhona/tbsentmodel_v1
64
null
transformers
5,554
Entry not found
cynthiachan/CTI_bert_base_cased
340feea82746c8b0985c0836359ec9ab0d0afa0e
2022-07-14T06:57:23.000Z
[ "pytorch", "bert", "token-classification", "dataset:cynthiachan/FeedRef_10pct", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
cynthiachan
null
cynthiachan/CTI_bert_base_cased
64
null
transformers
5,555
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cynthiachan/FeedRef_10pct metrics: - precision - recall - f1 - accuracy model-index: - name: training results: - task: name: Token Classification type: token-classification dataset: name: cynthiachan/FeedRef_10pct type: cynthiachan/FeedRef_10pct args: FeedRef_10pct metrics: - name: Precision type: precision value: 0.8222222222222222 - name: Recall type: recall value: 0.7662721893491125 - name: F1 type: f1 value: 0.7932618683001531 - name: Accuracy type: accuracy value: 0.9940363242071022 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # training This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the cynthiachan/FeedRef_10pct dataset. It achieves the following results on the evaluation set: - Loss: 0.0447 - Precision: 0.8222 - Recall: 0.7663 - F1: 0.7933 - Accuracy: 0.9940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1445 | 0.37 | 500 | 0.1098 | 0.2707 | 0.4556 | 0.3396 | 0.9747 | | 0.0817 | 0.75 | 1000 | 0.0691 | 0.5357 | 0.4882 | 0.5108 | 0.9877 | | 0.0729 | 1.12 | 1500 | 0.0780 | 0.7853 | 0.4112 | 0.5398 | 0.9887 | | 0.0512 | 1.5 | 2000 | 0.0538 | 0.7311 | 0.6598 | 0.6936 | 0.9915 | | 0.0564 | 1.87 | 2500 | 0.0624 | 0.7581 | 0.5562 | 0.6416 | 0.9909 | | 0.0459 | 2.25 | 3000 | 0.0522 | 0.7841 | 0.6982 | 0.7387 | 0.9929 | | 0.0408 | 2.62 | 3500 | 0.0482 | 0.8098 | 0.7308 | 0.7683 | 0.9935 | | 0.0378 | 3.0 | 4000 | 0.0447 | 0.8222 | 0.7663 | 0.7933 | 0.9940 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
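## Example usage (sketch)

The card above lists training details but no usage snippet. A minimal sketch for trying the fine-tuned token-classification model is shown below; the example sentence and the aggregation strategy are illustrative assumptions, since the entity label set is not documented here.

```python
from transformers import pipeline

# Sketch: tag a sentence with the fine-tuned token-classification model.
tagger = pipeline(
    "token-classification",
    model="cynthiachan/CTI_bert_base_cased",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "The malware contacts 192.168.1.10 and drops update.exe into the temp folder."
for entity in tagger(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```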
chizhikchi/Spanish_disease_finder
5634b0c39174d2f41c29aec1fe28b31e60182e37
2022-07-15T11:19:42.000Z
[ "pytorch", "roberta", "token-classification", "es", "transformers", "biomedical", "clinical", "ner", "license:cc-by-4.0", "autotrain_compatible" ]
token-classification
false
chizhikchi
null
chizhikchi/Spanish_disease_finder
64
1
transformers
5,556
--- license: cc-by-4.0 language: - es tags: - biomedical - clinical - ner metrics: - f1 widget: - text: "Se realizó angiotomografía urgente de arterias pulmonares, que mostró tromboembolia pulmonar bilateral con dilatación ventricular derecha, además de opacidades periféricas parcheadas compatibles con neumonía por SARS-CoV-2, que se confirmó en la PCR." example_title: "COVID-19" - text: "El paciente presenta HTA en tratamiento con IECA y alfa-bloqueante, artritis reumatoide en tratamiento con corticoesteroide oral." example_title: "Oncology" - text: "Otros antecedentes de importancia son la captura de 30 insectos dentro de la vivienda, de los cuales tres fueron positivos a la infección por Trypanosoma cruzi y las características de la vivienda con materiales de construcción considerados de riesgo para la presencia del transmisor" example_title: "Tropical medicine" - text: "Tras la evaluación de la paciente por medio de exploración psicopatológica, la orientación diagnóstica es de trastorno adaptativo tipo mixto." example_title: "Psychiatry" - text: "Los hallazgos descritos son compatibles con quiste braquial del segundo arco complicado con proceso inflamatorio - infeccioso, sin poder descartar proceso maligno subyacente." example_title: "Otorhinolaryngology" --- # Disease mention recognizer for Spanish clinical texts 🦠🔬 This model derives from participation of SINAI team in [DISease TExt Mining Shared Task (DISTEMIST)](https://temu.bsc.es/distemist/). The DISTEMIST-entities subtrack required automatically finding disease mentions in clinical cases. Taking into account the length of clinical texts in the dataset, we opted for a sentence-level NER approach based on fine-tuning of a [RoBERTa model pre-trained on Spanish biomedical corpora](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-es). # Evaluation and results Using the biomedical model on EHRs can be considered as cross-domain experiment and the fact that our biomedical system exhibits encouraging results on the NER task highlights the existence of domain transfer potential between biomedical and clinical fields. Table below summarizes the official micro-average scores obtained by this model during the official evaluation. Team standings are available [here](http://participants-area.bioasq.org/results/DisTEMIST/). | Precision | Recall | F1-score | |-----------|--------|----------| | 0.7520 | 0.7259 | 0.7387 | # System description paper and citation System description [paper](http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2022/paper-17.pdf) is published in proceedings of 10th BioASQ Workshop, which will be held as a Lab in CLEF 2022 on September 5-8, 2022: ```bibtex: @inproceedings{ChizhikovaEtAl:CLEF2022, title = {SINAI at CLEF 2022: Leveraging biomedical transformers to detect and normalize disease mentions}, author = {Mariia Chizhikova and Jaime Collado-Montañéz and Pilar López-Úbeda and Manuel C. Díaz-Galiano and L. Alfonso Ureña-López and M. Teresa Martín-Valdivia}, pages = {265--273}, url = {http://ceur-ws.org/Vol-XXX/#paper-17}, crossref = {CLEF2022}} ```
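# Example usage (sketch)

The card does not include a code snippet, so here is a minimal sketch using one of the widget examples above; the `aggregation_strategy` setting is an assumption chosen to merge subword pieces into full disease mentions.

```python
from transformers import pipeline

# Sketch: detect disease mentions in a Spanish clinical sentence.
ner = pipeline(
    "token-classification",
    model="chizhikchi/Spanish_disease_finder",
    aggregation_strategy="simple",
)

text = ("El paciente presenta HTA en tratamiento con IECA y alfa-bloqueante, "
        "artritis reumatoide en tratamiento con corticoesteroide oral.")
for mention in ner(text):
    print(mention["entity_group"], "->", mention["word"])
```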
ZakaryaRouzki/t5-punctuation
20efe8343037e45c47b4656e2d4f26ef0a771cda
2022-07-04T09:48:53.000Z
[ "pytorch", "t5", "text2text-generation", "fr", "dataset:orange_sum", "dataset:mlsum", "transformers", "french", "punctuation", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
ZakaryaRouzki
null
ZakaryaRouzki/t5-punctuation
64
0
transformers
5,557
--- language: - "fr" tags: - t5 - french - punctuation license: apache-2.0 datasets: - orange_sum - mlsum --- # 🚀 Text Punctuator Based on Transformers model T5. T5 model fine-tuned for punctuation restoration. Model currently supports only French Language. More language supports will be added later using mT5. Train Datasets : Model trained using 2 french datasets (around 500k records): - [orange_sum](https://huggingface.co/datasets/orange_sum) - [mlsum](https://huggingface.co/datasets/mlsum) (only french text) More info will be added later. ## 🚀 Usage **TextPunctuator as a wrapper of the model.** 1. Install the package. ```bash pip install TextPunctuator ``` 2. Simple example ```python from Punctuator import TextPunctuator punctuator = TextPunctuator(use_gpu=False) # text input text = "Sur la base de ces échanges Blake Lemoine a donc jugé que le système avait atteint \ un niveau de conscience lui permettant d'être sensible Ce dernier a ensuite envoyé \ par email un rapport sur la sensibilité supposée de LaMDA à deux cents employés de \ Google Très vite les dirigeants de l’entreprise ont rejeté les allégations" text_punctuated = punctuator.punctuate(text, lang='fr') text_punctuated # output : """ Sur la base de ces échanges, Blake Lemoine a donc jugé que le système avait atteint un niveau de conscience lui permettant d’être sensible. Ce dernier a ensuite envoyé par email un rapport sur la sensibilité supposée de LaMDA à deux cents employés de Google. Très vite, les dirigeants de l’entreprise ont rejeté les allégations. """ ``` ## ☕ Contact Contact [Zakarya ROUZKI ](mailto:[email protected]) or at [Linkedin](https://linkedin.com/in/rouzki).
adamnik/electra-event-detection
c20cddf32a6c2f5fef7f2da87f2f08dce3771e4d
2022-07-20T01:27:14.000Z
[ "pytorch", "electra", "text-classification", "transformers", "license:mit" ]
text-classification
false
adamnik
null
adamnik/electra-event-detection
64
null
transformers
5,558
--- license: mit ---
sudo-s/modeversion1_m7_e4
eacd70d56ffdc0559d8384e642ed4b1fe0964610
2022-07-23T22:44:11.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
sudo-s
null
sudo-s/modeversion1_m7_e4
64
null
transformers
5,559
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: modeversion1_m7_e4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modeversion1_m7_e4 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem7 dataset. It achieves the following results on the evaluation set: - Loss: 0.0902 - Accuracy: 0.9731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.073 | 0.06 | 100 | 3.9370 | 0.1768 | | 3.4186 | 0.12 | 200 | 3.2721 | 0.2590 | | 2.6745 | 0.18 | 300 | 2.6465 | 0.3856 | | 2.2806 | 0.23 | 400 | 2.2600 | 0.4523 | | 1.9275 | 0.29 | 500 | 1.9653 | 0.5109 | | 1.6958 | 0.35 | 600 | 1.6815 | 0.6078 | | 1.2797 | 0.41 | 700 | 1.4514 | 0.6419 | | 1.3772 | 0.47 | 800 | 1.3212 | 0.6762 | | 1.1765 | 0.53 | 900 | 1.1476 | 0.7028 | | 1.0152 | 0.59 | 1000 | 1.0357 | 0.7313 | | 0.7861 | 0.64 | 1100 | 1.0230 | 0.7184 | | 1.0262 | 0.7 | 1200 | 0.9469 | 0.7386 | | 0.8905 | 0.76 | 1300 | 0.8184 | 0.7756 | | 0.6919 | 0.82 | 1400 | 0.8083 | 0.7711 | | 0.7494 | 0.88 | 1500 | 0.7601 | 0.7825 | | 0.5078 | 0.94 | 1600 | 0.6884 | 0.8056 | | 0.7134 | 1.0 | 1700 | 0.6311 | 0.8160 | | 0.4328 | 1.06 | 1800 | 0.5740 | 0.8252 | | 0.4971 | 1.11 | 1900 | 0.5856 | 0.8290 | | 0.5207 | 1.17 | 2000 | 0.6219 | 0.8167 | | 0.4027 | 1.23 | 2100 | 0.5703 | 0.8266 | | 0.5605 | 1.29 | 2200 | 0.5217 | 0.8372 | | 0.2723 | 1.35 | 2300 | 0.4805 | 0.8565 | | 0.401 | 1.41 | 2400 | 0.4811 | 0.8490 | | 0.3419 | 1.47 | 2500 | 0.4619 | 0.8608 | | 0.301 | 1.52 | 2600 | 0.4318 | 0.8712 | | 0.2872 | 1.58 | 2700 | 0.4698 | 0.8573 | | 0.2451 | 1.64 | 2800 | 0.4210 | 0.8729 | | 0.2211 | 1.7 | 2900 | 0.3645 | 0.8851 | | 0.3145 | 1.76 | 3000 | 0.4139 | 0.8715 | | 0.2001 | 1.82 | 3100 | 0.3605 | 0.8864 | | 0.3095 | 1.88 | 3200 | 0.4274 | 0.8675 | | 0.1915 | 1.93 | 3300 | 0.2910 | 0.9101 | | 0.2465 | 1.99 | 3400 | 0.2726 | 0.9103 | | 0.1218 | 2.05 | 3500 | 0.2742 | 0.9129 | | 0.0752 | 2.11 | 3600 | 0.2572 | 0.9183 | | 0.1067 | 2.17 | 3700 | 0.2584 | 0.9203 | | 0.0838 | 2.23 | 3800 | 0.2458 | 0.9212 | | 0.1106 | 2.29 | 3900 | 0.2412 | 0.9237 | | 0.092 | 2.34 | 4000 | 0.2232 | 0.9277 | | 0.1056 | 2.4 | 4100 | 0.2817 | 0.9077 | | 0.0696 | 2.46 | 4200 | 0.2334 | 0.9285 | | 0.0444 | 2.52 | 4300 | 0.2142 | 0.9363 | | 0.1046 | 2.58 | 4400 | 0.2036 | 0.9352 | | 0.066 | 2.64 | 4500 | 0.2115 | 0.9365 | | 0.0649 | 2.7 | 4600 | 0.1730 | 0.9448 | | 0.0513 | 2.75 | 4700 | 0.2148 | 0.9339 | | 0.0917 | 2.81 | 4800 | 0.1810 | 0.9438 | | 0.0879 | 2.87 | 4900 | 0.1971 | 0.9388 | | 0.1052 | 2.93 | 5000 | 0.1602 | 0.9508 | | 0.0362 | 2.99 | 5100 | 0.1475 | 0.9556 | | 0.041 | 3.05 | 5200 | 0.1328 | 0.9585 | | 0.0156 | 3.11 | 5300 | 0.1389 | 0.9571 | | 0.0047 | 3.17 | 5400 | 0.1224 | 0.9638 
| | 0.0174 | 3.22 | 5500 | 0.1193 | 0.9651 | | 0.0087 | 3.28 | 5600 | 0.1276 | 0.9622 | | 0.0084 | 3.34 | 5700 | 0.1134 | 0.9662 | | 0.0141 | 3.4 | 5800 | 0.1239 | 0.9631 | | 0.0291 | 3.46 | 5900 | 0.1199 | 0.9645 | | 0.0049 | 3.52 | 6000 | 0.1103 | 0.9679 | | 0.0055 | 3.58 | 6100 | 0.1120 | 0.9662 | | 0.0061 | 3.63 | 6200 | 0.1071 | 0.9668 | | 0.0054 | 3.69 | 6300 | 0.1032 | 0.9697 | | 0.0041 | 3.75 | 6400 | 0.0961 | 0.9711 | | 0.0018 | 3.81 | 6500 | 0.0930 | 0.9718 | | 0.0032 | 3.87 | 6600 | 0.0918 | 0.9730 | | 0.0048 | 3.93 | 6700 | 0.0906 | 0.9732 | | 0.002 | 3.99 | 6800 | 0.0902 | 0.9731 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.3.2 - Tokenizers 0.12.1
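## Example usage (sketch)

A minimal usage sketch, since the card does not include one. `specimen.jpg` is a hypothetical placeholder path; substitute any local image or URL of a specimen from the herbarium dataset.

```python
from transformers import pipeline

# Sketch: classify a specimen image with the fine-tuned ViT model.
classifier = pipeline("image-classification", model="sudo-s/modeversion1_m7_e4")

predictions = classifier("specimen.jpg", top_k=5)  # placeholder image path
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```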
SIMAS-UN/blaming_migrants
4d36a7b17fe31507d758d6a8d2cc872ebe1a6c88
2022-07-24T03:56:22.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
SIMAS-UN
null
SIMAS-UN/blaming_migrants
64
null
transformers
5,560
Entry not found
AI-Lab-Makerere/en_lg
d74d872401475fa5fce5768f44aaa8f4bfdf41e0
2022-06-28T08:38:46.000Z
[ "pytorch", "marian", "text2text-generation", "unk", "dataset:Eric Peter/autonlp-data-EN-LUG", "transformers", "autonlp", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
AI-Lab-Makerere
null
AI-Lab-Makerere/en_lg
63
null
transformers
5,561
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - Eric Peter/autonlp-data-EN-LUG co2_eq_emissions: 133.0219882109991 --- # Model Trained Using AutoNLP - Problem type: Machine Translation - Model ID: 474612462 - CO2 Emissions (in grams): 133.0219882109991 ## Validation Metrics - Loss: 1.336498737335205 - Rouge1: 52.5404 - Rouge2: 31.6639 - RougeL: 50.1696 - RougeLsum: 50.3398 - Gen Len: 39.046 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/EricPeter/autonlp-EN-LUG-474612462 ```
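## Usage with 🤗 Transformers (sketch)

Besides the cURL call above, the checkpoint can also be loaded directly with 🤗 Transformers. The sketch below is illustrative; the example sentence and generation settings are assumptions.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch: translate English to Luganda with the fine-tuned MarianMT model.
model_name = "AI-Lab-Makerere/en_lg"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```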
Helsinki-NLP/opus-mt-hu-fr
c2885a2c79baeea474b021a129a25b561498e1dd
2021-09-09T22:10:59.000Z
[ "pytorch", "marian", "text2text-generation", "hu", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-hu-fr
63
null
transformers
5,562
--- tags: - translation license: apache-2.0 --- ### opus-mt-hu-fr * source languages: hu * target languages: fr * OPUS readme: [hu-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.hu.fr | 50.3 | 0.660 |
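## Example usage (sketch)

A minimal usage sketch with the 🤗 Transformers translation pipeline, since the card lists only benchmark scores; the Hungarian example sentence is illustrative.

```python
from transformers import pipeline

# Sketch: Hungarian -> French translation with the pipeline API.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hu-fr")

result = translator("Szeretem a nyelveket.", max_length=64)
print(result[0]["translation_text"])
```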
Helsinki-NLP/opus-mt-sl-fi
db1452a4a0ec2dfaa1eee5ed4b9881298f84cd50
2021-09-10T14:03:42.000Z
[ "pytorch", "marian", "text2text-generation", "sl", "fi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sl-fi
63
null
transformers
5,563
--- tags: - translation license: apache-2.0 --- ### opus-mt-sl-fi * source languages: sl * target languages: fi * OPUS readme: [sl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sl.fi | 23.4 | 0.517 |
Kowsher/bangla-bert
f82455bb9aad640c5a1ce8f259e3d0952079fa27
2022-03-06T15:42:48.000Z
[ "pytorch", "bert", "fill-mask", "bn", "dataset:BanglaLM dataset", "arxiv:1810.04805", "transformers", "Bert base Bangla", "Bengali Bert", "Bengali lm", "Bangla Base Bert", "Bangla Bert language model", "Bangla Bert", "autotrain_compatible" ]
fill-mask
false
Kowsher
null
Kowsher/bangla-bert
63
1
transformers
5,564
--- language: bn tags: - Bert base Bangla - Bengali Bert - Bengali lm - Bangla Base Bert - Bangla Bert language model - Bangla Bert datasets: - BanglaLM dataset --- # Bangla BERT Base Here we published a pretrained Bangla bert language model as **bangla-bert**! which is now available in huggingface model hub. Here we described [bangla-bert](https://github.com/Kowsher/bert-base-bangla) which is a pretrained Bangla language model based on mask language modeling described in [BERT](https://arxiv.org/abs/1810.04805) and the GitHub [repository](https://github.com/google-research/bert) ## Corpus Details We trained the Bangla bert language model using BanglaLM dataset from kaggle [BanglaLM](https://www.kaggle.com/gakowsher/bangla-language-model-dataset). There is 3 version of dataset which is almost 40GB. After downloading the dataset, we went on the way to mask LM. **bangla-bert Tokenizer** ```py from transformers import AutoTokenizer, AutoModel bnbert_tokenizer = AutoTokenizer.from_pretrained("Kowsher/bangla-bert") text = "খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি" bnbert_tokenizer.tokenize(text) # output: ['খাটি', 'সে', '##ানার', 'চাইতে', 'খাটি', 'আমার', 'দেশের', 'মাটি'] ``` **MASK Generation** here, we can use bert base bangla model as for masked language modeling: ```py from transformers import BertForMaskedLM, BertTokenizer, pipeline model = BertForMaskedLM.from_pretrained("Kowsher/bangla-bert") tokenizer = BertTokenizer.from_pretrained("Kowsher/bangla-bert") nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer) for pred in nlp(f"আমি বাংলার গান {nlp.tokenizer.mask_token}"): print(pred) # {'sequence': 'আমি বাংলার গান লিখি', 'score': 0.17955434322357178, 'token': 24749, 'token_str': 'লিখি'} nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer) for pred in nlp(f"তুই রাজাকার তুই {nlp.tokenizer.mask_token}"): print(pred) # {'sequence': 'তুই রাজাকার তুই রাজাকার', 'score': 0.9975168704986572, 'token': 13401, 'token_str': 'রাজাকার'} nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer) for pred in nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}"): print(pred) # {'sequence': 'বাংলা আমার অহংকার', 'score': 0.5679506063461304, 'token': 19009, 'token_str': 'অহংকার'} ``` **Cite this work** Kowsher, Md., BERT Base Bangla: A Pretrained Transformer Based Bangla Bert Model (September 15, 2021). Research on Computational Language, Available at SSRN: https://ssrn.com/abstract= ## Author [Kowsher](http://kowsher.org/)
Maunish/ecomm-sbert
f4fc1e544a76a2bc66970a4dad3b60d6fdc69855
2022-02-09T17:47:29.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Maunish
null
Maunish/ecomm-sbert
63
null
transformers
5,565
--- license: apache-2.0 ---
Milos/slovak-gpt-j-162M
3ebb80b851786d0823af43ca854ce619f62d8aa4
2022-02-18T14:02:12.000Z
[ "pytorch", "gptj", "text-generation", "sk", "arxiv:2104.09864", "transformers", "Slovak GPT-J", "causal-lm", "license:gpl-3.0" ]
text-generation
false
Milos
null
Milos/slovak-gpt-j-162M
63
null
transformers
5,566
--- language: - sk tags: - Slovak GPT-J - pytorch - causal-lm license: gpl-3.0 --- # Slovak GPT-J-162M Slovak GPT-J-162M is the first model released in Slovak GPT-J series and the very first publicly available transformer trained predominantly on Slovak corpus. Since the initial release two other models were made public, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and the largest [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B). ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 162M trainable parameters. <figure> | Hyperparameter | Value | |----------------------|-------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 162,454,608 | | \\(n_{layers}\\) | 12 | | \\(d_{model}\\) | 768 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J-162M was trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate parts of the corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for almost 37 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was 3.065. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-162M") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-162M") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Moje najobľubenejšie mesto na severe Slovenska je" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Moje najobľubenejšie mesto na severe Slovenska je Žilina.\n\nV Žiline sa nachádza množstvo zaujímavých miest' ``` ### Capabilities, Limitations, and Biases First and foremost, the capability of this particular model is very limited due to its relatively small size totalling only 162M parameters, hence the intended use of this particular model is to educate and have fun! :) Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela věrná.' ``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now. Based on the popularity and interest in this model I might release _substantially_ larger versions of Slovak GPT-J models that are way more capable. If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. ### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-162m, author = {Kondela, Milos}, title = {{Slovak GPT-J-162M}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-162M}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
Nicki/gpt3-base
656bb22aff7fd5afb1f92dfd7a940a9e0592fed0
2021-07-29T05:53:53.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Nicki
null
Nicki/gpt3-base
63
null
transformers
5,567
Entry not found
Prompsit/paraphrase-bert-en
75a238bbb26c0f0dca7ec37b5e16b6381f7cdc56
2021-12-23T12:03:17.000Z
[ "pytorch", "bert", "text-classification", "en", "transformers" ]
text-classification
false
Prompsit
null
Prompsit/paraphrase-bert-en
63
2
transformers
5,568
--- pipeline_tag: text-classification inference: false language: en tags: - transformers --- # Prompsit/paraphrase-bert-en This model allows to evaluate paraphrases for a given phrase. We have fine-tuned this model from pretrained "bert-base-uncased". Model built under a TSI-100905-2019-4 project, co-financed by Ministry of Economic Affairs and Digital Transformation from the Government of Spain. # How to use it The model answer the following question: Is "phrase B" a paraphrase of "phrase A". Please note that we're considering phrases instead of sentences. Therefore, we must take into account that the model doesn't expect to find punctuation marks or long pieces of text. Resulting probabilities correspond to classes: * 0: Not a paraphrase * 1: It's a paraphrase So, considering the phrase "may be addressed" and a candidate paraphrase like "could be included", you can use the model like this: ``` import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("Prompsit/paraphrase-bert-en") model = AutoModelForSequenceClassification.from_pretrained("Prompsit/paraphrase-bert-en") input = tokenizer('may be addressed','could be included',return_tensors='pt') logits = model(**input).logits soft = torch.nn.Softmax(dim=1) print(soft(logits)) ``` Code output is: ``` tensor([[0.1592, 0.8408]], grad_fn=<SoftmaxBackward>) ``` As the probability of 1 (=It's a paraphrase) is 0.84 and the probability of 0 (=It is not a paraphrase) is 0.15, we can conclude, for our previous example, that "could be included" is a paraphrase of "may be addressed". # Evaluation results We have used as test dataset 16500 pairs of phrases human tagged. Metrics obtained are: ``` metrics={ 'test_loss': 0.5660144090652466, 'test_accuracy': 0.8170742794799527, 'test_precision': 0.7043977055449331, 'test_recall': 0.5978578383641675, 'test_f1': 0.6467696629213483, 'test_matthews_correlation': 0.5276716223607356, 'test_runtime': 19.3345, 'test_samples_per_second': 568.88, 'test_steps_per_second': 17.792 } ```
SkolkovoInstitute/ruT5-base-detox
f986f748295a7e9ba6427ff7e1be6b348649c5a0
2021-12-29T09:13:41.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
SkolkovoInstitute
null
SkolkovoInstitute/ruT5-base-detox
63
null
transformers
5,569
This is the detoxification baseline model trained on the [train](https://github.com/skoltech-nlp/russe_detox_2022/blob/main/data/input/train.tsv) part of the "RUSSE 2022: Russian Text Detoxification Based on Parallel Corpora" competition. The source sentences are Russian toxic messages from the Odnoklassniki, Pikabu, and Twitter platforms. The base model is [ruT5](https://huggingface.co/sberbank-ai/ruT5-base), provided by Sber. **How to use** ```python from transformers import T5ForConditionalGeneration, AutoTokenizer base_model_name = 'sberbank-ai/ruT5-base' model_name = 'SkolkovoInstitute/ruT5-base-detox' tokenizer = AutoTokenizer.from_pretrained(base_model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) ``` ## Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
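**Generation example (sketch)**

The snippet above only loads the model; a minimal generation sketch follows. The input sentence is a placeholder and the beam-search settings are assumptions, not the values used in the competition baseline.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

base_model_name = "sberbank-ai/ruT5-base"
model_name = "SkolkovoInstitute/ruT5-base-detox"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Illustrative placeholder: any informal Russian sentence you want rewritten politely.
toxic_text = "Это предложение - просто пример для демонстрации."
inputs = tokenizer(toxic_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```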
ainize/gpt2-spongebob-script-large
698090c3da4ad7e39a7d6d0ba71ecc7bd9046729
2021-05-21T12:18:42.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
ainize
null
ainize/gpt2-spongebob-script-large
63
null
transformers
5,570
### Model information Fine-tuning data: https://www.kaggle.com/mikhailgaerlan/spongebob-squarepants-completed-transcripts License: CC-BY-SA Base model: gpt-2 large Epoch: 50 Train runtime: 14723.0716 secs Loss: 0.0268 API page: [Ainize](https://ainize.ai/fpem123/GPT2-Spongebob?branch=master) Demo page: [End-point](https://master-gpt2-spongebob-fpem123.endpoint.ainize.ai/) ### ===Teachable NLP=== ### Training a GPT-2 model normally requires writing code and access to GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free. Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
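### Usage (sketch)

The card above does not show how to query the model locally, so here is a minimal generation sketch; the "Character: line" prompt format and the sampling settings are assumptions based on the transcript-style training data.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: generate a SpongeBob-style script continuation.
model_name = "ainize/gpt2-spongebob-script-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "SpongeBob: Good morning, Patrick!"  # assumed transcript-style prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```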
deepset/xlm-roberta-base-squad2-distilled
350a9ad58048570793a7f0899dd988c69c641a51
2022-07-26T09:06:02.000Z
[ "pytorch", "xlm-roberta", "question-answering", "multilingual", "dataset:squad_v2", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/xlm-roberta-base-squad2-distilled
63
3
transformers
5,571
--- language: multilingual datasets: - squad_v2 license: mit thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg tags: - exbert --- # deepset/xlm-roberta-base-squad2-distilled - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model. ## Overview **Language model:** deepset/xlm-roberta-base-squad2-distilled **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` batch_size = 56 n_epochs = 4 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 3 distillation_loss_weight = 0.75 ``` ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled") # or reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled") ``` For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for [question answering], check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system) ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/xlm-roberta-base-squad2-distilled" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set ``` "exact": 74.06721131980123% "f1": 76.39919553344667% ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. 
Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")]([https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q
f74da44e63bbe972163212b46ce195eca92eb35a
2021-07-25T21:32:52.000Z
[ "pytorch", "mpnet", "fill-mask", "arxiv:2102.07033", "arxiv:2104.08727", "sentence-transformers", "feature-extraction", "sentence-similarity" ]
sentence-similarity
false
flax-sentence-embeddings
null
flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q
63
null
sentence-transformers
5,572
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # multi-QA_v1-mpnet-asymmetric-Q ## Model Description SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for Clustering, Semantic Search and other tasks. We used two separate pretrained [mpnet-base](https://huggingface.co/microsoft/mpnet-base) models and trained them using a contrastive learning objective. Question and answer pairs from StackExchange and other datasets were used as training data to make the model robust to Question / Answer embedding similarity. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks. ## Intended uses This model set is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks. Two models should be used in conjunction for Semantic Search purposes. 1. [multi-QA_v1-mpnet-asymmetric-Q](https://huggingface.co/flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q) - Model to encode Questions 1. [multi-QA_v1-mpnet-asymmetric-A](https://huggingface.co/flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A) - Model to encode Answers ## How to use Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library: ```python from sentence_transformers import SentenceTransformer, util model_Q = SentenceTransformer('flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q') model_A = SentenceTransformer('flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A') question = "Replace me by any question you'd like." question_embedding = model_Q.encode(question) answer = "Replace me by any answer you'd like." answer_embedding = model_A.encode(answer) answer_likeliness = util.cos_sim(question_embedding, answer_embedding) ``` # Training procedure ## Pre-training We use the pretrained [`Mpnet-base`](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure. ## Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross entropy loss by comparing with true pairs. ### Hyper parameters We trained the model on a TPU v3-8. We train the model for 80k steps using a batch size of 1024 (128 per TPU core). We use a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data We used the concatenation from multiple Stackexchange Question-Answer datasets to fine-tune our model. MSMARCO, NQ & other question-answer datasets were also used. | Dataset | Paper | Number of training tuples | |:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:| | [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 | | [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | SearchQA | - | 582,261 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
jasminejwebb/KeywordIdentifier
ff14521a2d83c63f90b8d9cd7a0c1710a06c46f0
2022-02-11T21:48:06.000Z
[ "pytorch", "xlnet", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
jasminejwebb
null
jasminejwebb/KeywordIdentifier
63
4
transformers
5,573
Entry not found
jcblaise/roberta-tagalog-large
acb6b204dfb1afdd7476eae5da234cbcf8899846
2021-11-12T03:25:48.000Z
[ "pytorch", "tf", "roberta", "fill-mask", "tl", "transformers", "tagalog", "filipino", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
jcblaise
null
jcblaise/roberta-tagalog-large
63
null
transformers
5,574
--- language: tl tags: - roberta - tagalog - filipino license: cc-by-sa-4.0 inference: false --- # RoBERTa Tagalog Large Tagalog RoBERTa trained as an improvement over our previous Tagalog pretrained Transformers. Trained with TLUnified, a newer, larger, more topically-varied pretraining corpus for Filipino. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This model is a cased model. We do not release uncased RoBERTa models. ## Citations All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: ``` @article{cruz2021improving, title={Improving Large-scale Language Models and Resources for Filipino}, author={Jan Christian Blaise Cruz and Charibeth Cheng}, journal={arXiv preprint arXiv:2111.06053}, year={2021} } ``` ## Data and Other Resources Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com ## Contact If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
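The card above stops short of an inference example; below is a minimal, hedged sketch of masked-token prediction with this checkpoint, assuming the standard RoBERTa `<mask>` token. The Tagalog example sentence is ours, not from the model card.

```python
from transformers import pipeline

# Hypothetical usage sketch for this Tagalog fill-mask checkpoint.
fill_mask = pipeline("fill-mask", model="jcblaise/roberta-tagalog-large")

# Example sentence (ours): "Magandang <mask> sa inyong lahat."
for prediction in fill_mask("Magandang <mask> sa inyong lahat."):
    print(prediction["token_str"], round(prediction["score"], 4))
```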
jshu/gpt2-medium-ontapdoc-gen-2
bf64adef84bb910d69b9630a9894ff189f254ff3
2021-11-19T18:29:36.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
jshu
null
jshu/gpt2-medium-ontapdoc-gen-2
63
null
transformers
5,575
Entry not found
mrm8488/mbart-large-finetuned-opus-it-en-translation
040c4a41cf5b171be94ffc43d22e174da8eb69a8
2021-01-27T13:19:19.000Z
[ "pytorch", "mbart", "text2text-generation", "it", "en", "dataset:opus100", "transformers", "translation", "autotrain_compatible" ]
translation
false
mrm8488
null
mrm8488/mbart-large-finetuned-opus-it-en-translation
63
1
transformers
5,576
--- tags: - translation language: - it - en datasets: - opus100 --- ### mbart-large-it-en This is mbart-large-cc25, finetuned on opus100 for Italian to English translation. It scores BLEU **25.82** on the test set.
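The card gives no usage snippet, so here is a minimal hedged sketch of loading the checkpoint for inference. The example sentence, the generation settings, and the explicit `it_IT` source-language code are our assumptions rather than the author's recipe (the saved tokenizer may already set the language code).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical usage sketch: translate one Italian sentence into English.
model_id = "mrm8488/mbart-large-finetuned-opus-it-en-translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# mBART tokenizers mark the source language with a code; "it_IT" is our
# assumption for Italian input and may already be configured in the repo.
tokenizer.src_lang = "it_IT"

text = "La vita è bella."  # example sentence, not from the model card
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```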
philippelaban/keep_it_simple
e8fe51874a3c97787bbfe6b2ef115b4b409ccdad
2022-02-09T22:42:47.000Z
[ "pytorch", "gpt2", "text-generation", "en", "dataset:cnn_dailymail", "transformers", "simplification", "license:apache-2.0" ]
text-generation
false
philippelaban
null
philippelaban/keep_it_simple
63
1
transformers
5,577
--- language: - en tags: - simplification license: apache-2.0 datasets: - cnn_dailymail widget: - text: "A capsule containing asteroid soil samples landed in the Australian Outback. The precision required to carry out the mission thrilled many.<|endoftext|>" example_title: "Example 1" --- # Try out in the Hosted inference API In the right panel, you can try the model (although it only handles a short sequence length). Feel free to try Example 1, and modify it to inspect model ability. # Model Loading The model can be loaded in the following way: ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("philippelaban/keep_it_simple") kis_model = AutoModelForCausalLM.from_pretrained("philippelaban/keep_it_simple") ``` # Example use And then used by first inputting a paragraph for simplification, followed by a `bos_token` to indicate to the model to start simplifying. Imagine we want to simplify the following paragraph: ``` A small capsule containing asteroid soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 spacecraft landed as planned in the Australian Outback on December 6. The extremely high precision required to carry out the mission thrilled many in Japan, who said they took pride in its success. ``` The following code can be run: ``` paragraph = """A small capsule containing asteroid soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 spacecraft landed as planned in the Australian Outback on December 6. The extremely high precision required to carry out the mission thrilled many in Japan, who said they took pride in its success.""" start_id = tokenizer.bos_token_id tokenized_paragraph = [(tokenizer.encode(text=paragraph) + [start_id])] input_ids = torch.LongTensor(tokenized_paragraph) output_ids = kis_model.generate(input_ids, max_length=150, num_beams=4, do_sample=True, num_return_sequences=8) output_ids = output_ids[:, input_ids.shape[1]:] output = tokenizer.batch_decode(output_ids) output = [o.replace(tokenizer.eos_token, "") for o in output] for o in output: print("----") print(o) ``` # Example output When run, an output similar to the following should be obtained: A small capsule containing samples of asteroid soil that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was extremely precise, said many in Japan, and they took pride in its success. A small capsule containing samples of asteroid soil that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was extremely precise and well thought-out, said many in Japan, who took pride in the mission. A small capsule containing soil samples that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was designed to test the performance of the country's space fleet, which many said took pride in its success. A small capsule containing soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 probe was followed by a landing on the Outback. The precise timing of the mission thrilled many in Japan, who said they took pride in its success. # Github repo You can access more information, access to the scoring function, the training script, or an example training log on the Github repo: https://github.com/tingofurro/keep_it_simple
projecte-aina/roberta-base-ca-cased-sts
1e644c69274edfbb239eccbf565dd6a38d2cef27
2022-06-16T08:04:23.000Z
[ "pytorch", "roberta", "text-classification", "ca", "dataset:projecte-aina/sts-ca", "arxiv:1907.11692", "transformers", "catalan", "semantic textual similarity", "sts-ca", "CaText", "Catalan Textual Corpus", "license:apache-2.0", "model-index" ]
text-classification
false
projecte-aina
null
projecte-aina/roberta-base-ca-cased-sts
63
null
transformers
5,578
--- language: - ca pipeline_tag: text-classification license: apache-2.0 tags: - "catalan" - "semantic textual similarity" - "sts-ca" - "CaText" - "Catalan Textual Corpus" datasets: - "projecte-aina/sts-ca" metrics: - "pearson" model-index: - name: roberta-base-ca-cased-sts results: - task: type: text-classification dataset: type: projecte-aina/sts-ca name: sts-ca metrics: - type: pearson value: 0.7973 --- # Catalan BERTa (RoBERTa-base) finetuned for Semantic Textual Similarity. The **roberta-base-ca-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details). ## Datasets We used the STS dataset in Catalan called [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) for training and evaluation. ## Evaluation and results We evaluated the _roberta-base-ca-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines: | Model | STS-ca (Pearson) | |:------------|:----| | roberta-base-ca-cased-sts | **79.73** | | mBERT | 76.34 | | XLM-RoBERTa | 75.40 | | WikiBERT-ca | 77.18 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club). ## How to use To get the correct<sup>1</sup> model's prediction scores with values between 0.0 and 5.0, use the following code: ```python from transformers import pipeline, AutoTokenizer from scipy.special import logit model = 'projecte-aina/roberta-base-ca-cased-sts' tokenizer = AutoTokenizer.from_pretrained(model) pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) def prepare(sentence_pairs): sentence_pairs_prep = [] for s1, s2 in sentence_pairs: sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}") return sentence_pairs_prep sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."), ("M'agrades.", "T'estimo."), ("M'agrada el sol i la calor", "A la Garrotxa plou molt.")] predictions = pipe(prepare(sentence_pairs), add_special_tokens=False) # convert back to scores to the original 1 and 5 interval for prediction in predictions: prediction['score'] = logit(prediction['score']) print(predictions) ``` Expected output: ``` [{'label': 'SIMILARITY', 'score': 2.4280577200108384}, {'label': 'SIMILARITY', 'score': 2.132843521240822}, {'label': 'SIMILARITY', 'score': 1.615101695426227}] ``` <sup>1</sup> _**avoid using the widget** scores since they are normalized and do not reflect the original annotation values._ ## Citing If you use any of these resources (datasets or models) in your work, please cite our latest paper: ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? 
{A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ```
razent/cotext-2-cc
c44d0273e79da8a1ec3273c80b300ade6277a493
2022-03-15T03:03:51.000Z
[ "pytorch", "tf", "jax", "t5", "feature-extraction", "code", "dataset:code_search_net", "transformers" ]
feature-extraction
false
razent
null
razent/cotext-2-cc
63
null
transformers
5,579
--- language: code datasets: - code_search_net --- # CoText (2-CC) ## Introduction Paper: [CoTexT: Multi-task Learning with Code-Text Transformer](https://aclanthology.org/2021.nlp4prog-1.5.pdf) Authors: _Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Anibal, Alec Peltekian, Yanfang Ye_ ## How to use Supported languages: ```shell "go" "java" "javascript" "php" "python" "ruby" ``` For more details, do check out [our Github repo](https://github.com/justinphan3110/CoTexT). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("razent/cotext-2-cc") model = AutoModelForSeq2SeqLM.from_pretrained("razent/cotext-2-cc").to("cuda") sentence = "def add(a, b): return a + b" text = "python: " + sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ``` ## Citation ``` @inproceedings{phan-etal-2021-cotext, title = "{C}o{T}ex{T}: Multi-task Learning with Code-Text Transformer", author = "Phan, Long and Tran, Hieu and Le, Daniel and Nguyen, Hieu and Annibal, James and Peltekian, Alec and Ye, Yanfang", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.nlp4prog-1.5", doi = "10.18653/v1/2021.nlp4prog-1.5", pages = "40--47" } ```
wietsedv/xlm-roberta-base-ft-udpos28-tr
86a24b17a2c653386b580c843d3e38e3791c0c44
2022-02-25T09:59:31.000Z
[ "pytorch", "xlm-roberta", "token-classification", "tr", "dataset:universal_dependencies", "transformers", "part-of-speech", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
wietsedv
null
wietsedv/xlm-roberta-base-ft-udpos28-tr
63
null
transformers
5,580
--- language: - tr license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-tr results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 74.4 - type: accuracy name: Dutch Test accuracy value: 73.7 - type: accuracy name: German Test accuracy value: 73.5 - type: accuracy name: Italian Test accuracy value: 73.2 - type: accuracy name: French Test accuracy value: 71.4 - type: accuracy name: Spanish Test accuracy value: 71.1 - type: accuracy name: Russian Test accuracy value: 77.9 - type: accuracy name: Swedish Test accuracy value: 74.5 - type: accuracy name: Norwegian Test accuracy value: 69.2 - type: accuracy name: Danish Test accuracy value: 73.8 - type: accuracy name: Low Saxon Test accuracy value: 45.8 - type: accuracy name: Akkadian Test accuracy value: 39.8 - type: accuracy name: Armenian Test accuracy value: 80.9 - type: accuracy name: Welsh Test accuracy value: 62.9 - type: accuracy name: Old East Slavic Test accuracy value: 63.7 - type: accuracy name: Albanian Test accuracy value: 71.5 - type: accuracy name: Slovenian Test accuracy value: 62.3 - type: accuracy name: Guajajara Test accuracy value: 41.3 - type: accuracy name: Kurmanji Test accuracy value: 68.0 - type: accuracy name: Turkish Test accuracy value: 88.4 - type: accuracy name: Finnish Test accuracy value: 81.1 - type: accuracy name: Indonesian Test accuracy value: 71.5 - type: accuracy name: Ukrainian Test accuracy value: 76.8 - type: accuracy name: Polish Test accuracy value: 74.3 - type: accuracy name: Portuguese Test accuracy value: 76.7 - type: accuracy name: Kazakh Test accuracy value: 81.1 - type: accuracy name: Latin Test accuracy value: 68.2 - type: accuracy name: Old French Test accuracy value: 47.5 - type: accuracy name: Buryat Test accuracy value: 62.6 - type: accuracy name: Kaapor Test accuracy value: 24.6 - type: accuracy name: Korean Test accuracy value: 63.7 - type: accuracy name: Estonian Test accuracy value: 82.0 - type: accuracy name: Croatian Test accuracy value: 72.3 - type: accuracy name: Gothic Test accuracy value: 24.1 - type: accuracy name: Swiss German Test accuracy value: 41.1 - type: accuracy name: Assyrian Test accuracy value: 23.0 - type: accuracy name: North Sami Test accuracy value: 45.2 - type: accuracy name: Naija Test accuracy value: 36.0 - type: accuracy name: Latvian Test accuracy value: 80.0 - type: accuracy name: Chinese Test accuracy value: 55.9 - type: accuracy name: Tagalog Test accuracy value: 56.2 - type: accuracy name: Bambara Test accuracy value: 30.0 - type: accuracy name: Lithuanian Test accuracy value: 81.2 - type: accuracy name: Galician Test accuracy value: 72.4 - type: accuracy name: Vietnamese Test accuracy value: 57.0 - type: accuracy name: Greek Test accuracy value: 80.2 - type: accuracy name: Catalan Test accuracy value: 69.1 - type: accuracy name: Czech Test accuracy value: 75.8 - type: accuracy name: Erzya Test accuracy value: 52.7 - type: accuracy name: Bhojpuri Test accuracy value: 50.8 - type: accuracy name: Thai Test accuracy value: 49.0 - type: accuracy name: Marathi Test accuracy value: 77.9 - type: accuracy name: Basque Test accuracy value: 66.8 - type: accuracy name: Slovak Test accuracy value: 75.1 - type: accuracy name: Kiche Test accuracy value: 43.1 - type: accuracy name: 
Yoruba Test accuracy value: 31.7 - type: accuracy name: Warlpiri Test accuracy value: 48.6 - type: accuracy name: Tamil Test accuracy value: 79.5 - type: accuracy name: Maltese Test accuracy value: 34.1 - type: accuracy name: Ancient Greek Test accuracy value: 58.5 - type: accuracy name: Icelandic Test accuracy value: 68.9 - type: accuracy name: Mbya Guarani Test accuracy value: 33.6 - type: accuracy name: Urdu Test accuracy value: 60.5 - type: accuracy name: Romanian Test accuracy value: 69.6 - type: accuracy name: Persian Test accuracy value: 71.3 - type: accuracy name: Apurina Test accuracy value: 50.2 - type: accuracy name: Japanese Test accuracy value: 44.4 - type: accuracy name: Hungarian Test accuracy value: 86.4 - type: accuracy name: Hindi Test accuracy value: 63.2 - type: accuracy name: Classical Chinese Test accuracy value: 36.3 - type: accuracy name: Komi Permyak Test accuracy value: 51.0 - type: accuracy name: Faroese Test accuracy value: 59.5 - type: accuracy name: Sanskrit Test accuracy value: 38.3 - type: accuracy name: Livvi Test accuracy value: 65.4 - type: accuracy name: Arabic Test accuracy value: 64.4 - type: accuracy name: Wolof Test accuracy value: 38.9 - type: accuracy name: Bulgarian Test accuracy value: 72.4 - type: accuracy name: Akuntsu Test accuracy value: 49.1 - type: accuracy name: Makurap Test accuracy value: 23.3 - type: accuracy name: Kangri Test accuracy value: 46.5 - type: accuracy name: Breton Test accuracy value: 55.4 - type: accuracy name: Telugu Test accuracy value: 80.7 - type: accuracy name: Cantonese Test accuracy value: 54.3 - type: accuracy name: Old Church Slavonic Test accuracy value: 42.9 - type: accuracy name: Karelian Test accuracy value: 70.5 - type: accuracy name: Upper Sorbian Test accuracy value: 67.1 - type: accuracy name: South Levantine Arabic Test accuracy value: 58.3 - type: accuracy name: Komi Zyrian Test accuracy value: 47.6 - type: accuracy name: Irish Test accuracy value: 60.3 - type: accuracy name: Nayini Test accuracy value: 50.0 - type: accuracy name: Munduruku Test accuracy value: 41.9 - type: accuracy name: Manx Test accuracy value: 37.5 - type: accuracy name: Skolt Sami Test accuracy value: 47.4 - type: accuracy name: Afrikaans Test accuracy value: 71.3 - type: accuracy name: Old Turkish Test accuracy value: 53.4 - type: accuracy name: Tupinamba Test accuracy value: 53.6 - type: accuracy name: Belarusian Test accuracy value: 76.9 - type: accuracy name: Serbian Test accuracy value: 72.2 - type: accuracy name: Moksha Test accuracy value: 50.0 - type: accuracy name: Western Armenian Test accuracy value: 70.5 - type: accuracy name: Scottish Gaelic Test accuracy value: 54.1 - type: accuracy name: Khunsari Test accuracy value: 50.0 - type: accuracy name: Hebrew Test accuracy value: 79.2 - type: accuracy name: Uyghur Test accuracy value: 70.8 - type: accuracy name: Chukchi Test accuracy value: 40.8 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Turkish This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-tr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-tr") ```
GermanT5/t5-efficient-gc4-german-base-nl36
756f657882367c9929e220d9bfba87641cf4a641
2022-07-22T05:27:06.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "de", "transformers", "german", "deutsch", "license:mit", "autotrain_compatible" ]
text2text-generation
false
GermanT5
null
GermanT5/t5-efficient-gc4-german-base-nl36
63
3
transformers
5,581
--- language: de license: mit tags: - german - deutsch --- # Creators - [Stefan Schweter](https://github.com/stefan-it) ([Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/) / [Open Source @ DBMDZ](https://github.com/dbmdz)) - [Philip May](https://may.la) ([T-Systems onsite](https://www.t-systems-onsite.de/)) - [Philipp Schmid ](https://www.philschmid.de/) ([Hugging Face](https://huggingface.co/)) # Evaluation Evaluation was done on a summarization task with: - train data: [Swisstext](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) - test data: [MLSUM](https://huggingface.co/datasets/mlsum) - GPUs: 4 (V100) for details see: <https://github.com/GermanT5/german-t5-eval> # Tips for training on GPUs This model is too big to fit on a normal 16GB GPU in FP32 mode. For various reasons, T5 models cannot be trained in FP16 mode. However, mixed precision training is not yet supported on many GPUs. For example, it does not work on V100 GPUs. On A100, however, it does. That is why we suggest to use [DeepSpeed](https://github.com/microsoft/DeepSpeed) for training. In particular, we recommend the [ZeRO-3 Example](https://huggingface.co/docs/transformers/main_classes/deepspeed#zero3-example) `auto` configuration. > ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs. see [ZeRO-Offload](https://www.deepspeed.ai/features/#zero-offload) ## License - The MIT License Copyright 2022 Stefan Schweter<br> Copyright 2022 Philip May, T-Systems onsite<br> Copyright 2022 P. S. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
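To make the DeepSpeed recommendation above concrete, the sketch below writes out a minimal ZeRO-3 configuration using the `auto` placeholders that the Hugging Face `Trainer` resolves from its own arguments. The file name and the exact set of keys are illustrative assumptions, not settings taken from this card or from the linked evaluation repository.

```python
import json

# Illustrative ZeRO-3 configuration sketch with "auto" placeholders that the
# Hugging Face Trainer fills in from its TrainingArguments; values are
# assumptions, not a tested recipe from this model card.
ds_config = {
    "bf16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

with open("ds_config_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)

# The file can then be referenced via
# TrainingArguments(..., deepspeed="ds_config_zero3.json")
# and the run started with the `deepspeed` launcher.
```

The intent is only to show where the ZeRO-3 config plugs into the Trainer; for a complete `auto` configuration, follow the ZeRO-3 example linked in the card.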
microsoft/tapex-large-finetuned-wikisql
aff5c522ffb429d0ad290e900a1863ce63a8325f
2022-07-14T10:10:52.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:wikisql", "arxiv:2107.07653", "transformers", "tapex", "table-question-answering", "license:mit", "autotrain_compatible" ]
table-question-answering
false
microsoft
null
microsoft/tapex-large-finetuned-wikisql
63
1
transformers
5,582
--- language: en tags: - tapex - table-question-answering datasets: - wikisql license: mit --- # TAPEX (large-sized model) TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining). ## Model description TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries. TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. This model is the `tapex-large` model fine-tuned on the [WikiSQL](https://huggingface.co/datasets/wikisql) dataset. ## Intended Uses You can use the model for table question answering on relatively simple questions. Some **solvable** questions are shown below (corresponding tables not shown): | Question | Answer | |:---: |:---:| | tell me what the notes are for south australia | no slogan on current series | | what position does the player who played for butler cc (ks) play? | guard-forward | | how many schools did player number 3 play at? | 1.0 | | how many winning drivers in the kraco twin 125 (r2) race were there? | 1.0 | | for the episode(s) aired in the u.s. on 4 april 2008, what were the names? | "bust a move" part one, "bust a move" part two | ### How to Use Here is how to use this model in transformers: ```python from transformers import TapexTokenizer, BartForConditionalGeneration import pandas as pd tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wikisql") model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large-finetuned-wikisql") data = { "year": [1896, 1900, 1904, 2004, 2008, 2012], "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"] } table = pd.DataFrame.from_dict(data) # tapex accepts uncased input since it is pre-trained on the uncased corpus query = "In which year did beijing host the Olympic Games?" encoding = tokenizer(table=table, query=query, return_tensors="pt") outputs = model.generate(**encoding) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # [' 2008.0'] ``` ### How to Eval Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex). ### BibTeX entry and citation info ```bibtex @inproceedings{ liu2022tapex, title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor}, author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=O50443AsCP} } ```
adnankhawaja/RomanUrdu-RoBERTa-AFT
71f73fefec1f583a7e23874fbdf3a6ea6a6472f8
2022-04-07T06:05:31.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
adnankhawaja
null
adnankhawaja/RomanUrdu-RoBERTa-AFT
63
null
transformers
5,583
Entry not found
speechbrain/asr-wav2vec2-librispeech
cab1aa4c274467daba8e395b6832f88da8684576
2022-06-08T14:40:42.000Z
[ "wav2vec2", "feature-extraction", "en", "dataset:librispeech", "arxiv:2106.04624", "speechbrain", "automatic-speech-recognition", "CTC", "Attention", "Transformer", "pytorch", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
speechbrain
null
speechbrain/asr-wav2vec2-librispeech
63
null
speechbrain
5,584
--- language: - en thumbnail: null pipeline_tag: automatic-speech-recognition tags: - automatic-speech-recognition - CTC - Attention - Transformer - pytorch - speechbrain - hf-asr-leaderboard license: apache-2.0 datasets: - librispeech metrics: - wer - cer model-index: - name: wav2vec2+CTC by SpeechBrain results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.90 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.96 --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # wav2vec 2.0 with CTC trained on LibriSpeech This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (English Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test clean WER | Test other WER | GPUs | |:-------------:|:--------------:|:--------------:|:--------:| | 24-03-22 | 1.90 | 3.96 | 1xA100 40GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into characters and trained with the train transcriptions (EN). - Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self)) is combined with two DNN layers and finetuned on LibriSpeech. The obtained final acoustic representation is given to the CTC. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. ## Install SpeechBrain First of all, please install tranformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderASR asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-librispeech", savedir="pretrained_models/asr-wav2vec2-librispeech") asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-en/example.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ## Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. 
Run Training: ```bash cd recipes/LibriSpeech/ASR/CTC python train_with_wav2vec.py hparams/train_en_with_wav2vec.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1pg0QzW-LqAISG8Viw_lUTGjXwOqh7gkl?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
shengnan/visualize-v0-pre10w-preseed1
06b5ab7354aeff0486a62b88161e304f36171582
2022-07-18T02:36:21.000Z
[ "pytorch", "t5", "transformers" ]
null
false
shengnan
null
shengnan/visualize-v0-pre10w-preseed1
63
null
transformers
5,585
Entry not found
SharpAI/mal_tls-bert-base
dd28f9eb446b6506df6ae5caed873ab5d9534df6
2022-07-27T20:51:25.000Z
[ "pytorch", "tf", "bert", "text-classification", "transformers", "generated_from_keras_callback", "model-index" ]
text-classification
false
SharpAI
null
SharpAI/mal_tls-bert-base
63
null
transformers
5,586
--- tags: - generated_from_keras_callback model-index: - name: mal_tls-bert-base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mal_tls-bert-base This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
Helsinki-NLP/opus-mt-en-grk
aaa456e34dfc48aca1e9e13a8e4b0ee357464e03
2021-01-18T08:08:31.000Z
[ "pytorch", "marian", "text2text-generation", "en", "el", "grk", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-grk
62
null
transformers
5,587
--- language: - en - el - grk tags: - translation license: apache-2.0 --- ### eng-grk * source group: English * target group: Greek languages * OPUS readme: [eng-grk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md) * model: transformer * source language(s): eng * target language(s): ell grc_Grek * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-ell.eng.ell | 53.8 | 0.723 | | Tatoeba-test.eng-grc.eng.grc | 0.1 | 0.102 | | Tatoeba-test.eng.multi | 45.6 | 0.677 | ### System Info: - hf_name: eng-grk - source_languages: eng - target_languages: grk - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'el', 'grk'] - src_constituents: {'eng'} - tgt_constituents: {'grc_Grek', 'ell'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: grk - short_pair: en-grk - chrF2_score: 0.677 - bleu: 45.6 - brevity_penalty: 1.0 - ref_len: 59951.0 - src_name: English - tgt_name: Greek languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: grk - prefer_old: False - long_pair: eng-grk - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
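Since the card notes that a sentence-initial `>>id<<` token selects the target language but gives no code, here is a minimal hedged sketch. The example sentence is ours, and the choice of `>>ell<<` for Modern Greek (with `>>grc_Grek<<` presumably selecting Ancient Greek) is inferred from the listed target constituents rather than stated usage instructions.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-en-grk"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# The target language is chosen with a sentence-initial token, e.g. >>ell<<;
# the English example sentence below is ours, not from the card.
src_text = [">>ell<< How are you today?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```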
Helsinki-NLP/opus-mt-ro-fr
a71843f60cb4f44315caf92df79fb079912c4adc
2021-09-10T14:02:10.000Z
[ "pytorch", "marian", "text2text-generation", "ro", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ro-fr
62
null
transformers
5,588
--- tags: - translation license: apache-2.0 --- ### opus-mt-ro-fr * source languages: ro * target languages: fr * OPUS readme: [ro-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ro.fr | 54.5 | 0.697 |
RecordedFuture/Swedish-Sentiment-Fear
7764733e64267aae40785babd36014a0163b250a
2021-05-18T22:00:42.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "sv", "transformers", "license:mit" ]
text-classification
false
RecordedFuture
null
RecordedFuture/Swedish-Sentiment-Fear
62
null
transformers
5,589
--- language: sv license: mit --- ## Swedish BERT models for sentiment analysis [Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task. The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes. The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums. The models are only trained on Swedish data and only support inference of Swedish input texts. The models' inference metrics for all non-Swedish inputs are not defined; these inputs are considered out-of-domain data. The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified. ### Swedish-Sentiment-Fear The model can be imported from the transformers library by running from transformers import BertForSequenceClassification, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear") classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear") When the model and tokenizer are initialized the model can be used for inference. #### Sentiment definitions The strong sentiment includes but is not limited to texts that: - Hold an expressive emphasis on fear and/or anxiety The weak sentiment includes but is not limited to texts that: - Express fear and/or anxiety in a neutral way #### Verification metrics During training, the model had maximized validation metrics at the following classification breakpoint. | Classification Breakpoint | F-score | Precision | Recall | |:-------------------------:|:-------:|:---------:|:------:| | 0.45 | 0.8754 | 0.8618 | 0.8895 | ### Swedish-Sentiment-Violence The model can be imported from the transformers library by running from transformers import BertForSequenceClassification, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence") classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence") When the model and tokenizer are initialized the model can be used for inference. #### Sentiment definitions The strong sentiment includes but is not limited to texts that: - Reference highly violent acts - Hold an aggressive tone The weak sentiment includes but is not limited to texts that: - Include general violent statements that do not fall under the strong sentiment #### Verification metrics During training, the model had maximized validation metrics at the following classification breakpoint. | Classification Breakpoint | F-score | Precision | Recall | |:-------------------------:|:-------:|:---------:|:------:| | 0.35 | 0.7677 | 0.7456 | 0.791 |
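The card describes the three output floats but shows no end-to-end call, so below is a minimal hedged sketch for the Fear model. The Swedish example sentence is ours, and no thresholding against the reported classification breakpoint is applied because the card does not state how the raw scores map onto it.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")

# Example Swedish sentence (ours, not from the card).
inputs = tokenizer("Jag är orolig och rädd för framtiden.", return_tensors="pt")

with torch.no_grad():
    outputs = classifier_fear(**inputs)

# Three floats at the indexes described above:
# 0 = "Negative", 1 = "Weak sentiment", 2 = "Strong Sentiment".
print(outputs.logits[0].tolist())
```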
TransQuest/monotransquest-da-any_en
386039f2027514b199054024102504cdb2a4d794
2021-06-03T19:01:07.000Z
[ "pytorch", "xlm-roberta", "text-classification", "multilingual-en", "transformers", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0" ]
text-classification
false
TransQuest
null
TransQuest/monotransquest-da-any_en
62
null
transformers
5,590
--- language: multilingual-en tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-any_en", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TuhinColumbia/russianpoetrymany
036331d815ebd4bec0c902ef01b0f02c9d15372f
2021-08-31T15:52:43.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
TuhinColumbia
null
TuhinColumbia/russianpoetrymany
62
1
transformers
5,591
Entry not found
TurkuNLP/wikibert-base-ja-cased
dcef59fb3e01803fdb4bc28b449f1fb0d5011e0d
2020-05-24T20:00:52.000Z
[ "pytorch", "transformers" ]
null
false
TurkuNLP
null
TurkuNLP/wikibert-base-ja-cased
62
null
transformers
5,592
Entry not found
VictorSanh/bart-base-finetuned-xsum
f7347d6b497512cb9997f14914ea6b309b8f6b7c
2020-08-17T15:02:57.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
VictorSanh
null
VictorSanh/bart-base-finetuned-xsum
62
null
transformers
5,593
Entry not found
aychang/distilbert-base-cased-trec-coarse
70316ea77d74b8e1e5c939fbf2499594091cab77
2021-01-24T20:14:42.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:trec", "transformers", "license:mit" ]
text-classification
false
aychang
null
aychang/distilbert-base-cased-trec-coarse
62
null
transformers
5,594
--- language: - en thumbnail: tags: - text-classification license: mit datasets: - trec metrics: --- # TREC 6-class Task: distilbert-base-cased ## Model description A simple base distilBERT model trained on the "trec" dataset. ## Intended uses & limitations #### How to use ##### Transformers ```python # Load model and tokenizer from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "aychang/distilbert-base-cased-trec-coarse" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Use pipeline from transformers import pipeline nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name) results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]) ``` ##### AdaptNLP ```python from adaptnlp import EasySequenceClassifier model_name = "aychang/distilbert-base-cased-trec-coarse" texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"] classifier = EasySequenceClassifier() results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2) ``` #### Limitations and bias This is a minimal language model trained on a benchmark dataset. ## Training data TREC https://huggingface.co/datasets/trec ## Training procedure Preprocessing, hardware used, hyperparameters... #### Hardware One V100 #### Hyperparameters and Training Args ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir='./models', overwrite_output_dir=False, num_train_epochs=2, per_device_train_batch_size=16, per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, evaluation_strategy="steps", logging_dir='./logs', fp16=False, eval_steps=500, save_steps=300000 ) ``` ## Eval results ``` {'epoch': 2.0, 'eval_accuracy': 0.97, 'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414, 0.97560976]), 'eval_loss': 0.14275787770748138, 'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614, 0.96385542]), 'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044, 0.98765432]), 'eval_runtime': 0.9731, 'eval_samples_per_second': 513.798} ```
cardiffnlp/twitter-roberta-base-dec2020
54381a7fb92f904744f8417a1904157260f0dafe
2022-02-09T11:15:03.000Z
[ "pytorch", "roberta", "fill-mask", "arxiv:2202.03829", "transformers", "autotrain_compatible" ]
fill-mask
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-dec2020
62
null
transformers
5,595
# Twitter December 2020 (RoBERTa-base, 107M) This is a RoBERTa-base model trained on 107.06M tweets until the end of December 2020. More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829). Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms). For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models). ## Preprocess Text Replace usernames and links for placeholders: "@user" and "http". If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data). ```python def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) ``` ## Example Masked Language Model ```python from transformers import pipeline, AutoTokenizer MODEL = "cardiffnlp/twitter-roberta-base-dec2020" fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) def print_candidates(): for i in range(5): token = tokenizer.decode(candidates[i]['token']) score = candidates[i]['score'] print("%d) %.5f %s" % (i+1, score, token)) texts = [ "So glad I'm <mask> vaccinated.", "I keep forgetting to bring a <mask>.", "Looking forward to watching <mask> Game tonight!", ] for text in texts: t = preprocess(text) print(f"{'-'*30}\n{t}") candidates = fill_mask(t) print_candidates() ``` Output: ``` ------------------------------ So glad I'm <mask> vaccinated. 1) 0.42239 not 2) 0.23834 getting 3) 0.10684 fully 4) 0.07550 being 5) 0.02097 already ------------------------------ I keep forgetting to bring a <mask>. 1) 0.08145 mask 2) 0.05051 laptop 3) 0.04620 book 4) 0.03910 bag 5) 0.03824 blanket ------------------------------ Looking forward to watching <mask> Game tonight! 
1) 0.57602 the 2) 0.25120 The 3) 0.02610 End 4) 0.02324 this 5) 0.00690 This ``` ## Example Tweet Embeddings ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np from scipy.spatial.distance import cosine from collections import Counter def get_embedding(text): text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() features_mean = np.mean(features[0], axis=0) return features_mean MODEL = "cardiffnlp/twitter-roberta-base-dec2020" tokenizer = AutoTokenizer.from_pretrained(MODEL) model = AutoModel.from_pretrained(MODEL) query = "The book was awesome" tweets = ["I just ordered fried chicken 🐣", "The movie was great", "What time is the next game?", "Just finished reading 'Embeddings in NLP'"] sims = Counter() for tweet in tweets: sim = 1 - cosine(get_embedding(query), get_embedding(tweet)) sims[tweet] = sim print('Most similar to: ', query) print(f"{'-'*30}") for idx, (tweet, sim) in enumerate(sims.most_common()): print("%d) %.5f %s" % (idx+1, sim, tweet)) ``` Output: ``` Most similar to: The book was awesome ------------------------------ 1) 0.99084 The movie was great 2) 0.96618 Just finished reading 'Embeddings in NLP' 3) 0.96127 I just ordered fried chicken 🐣 4) 0.95315 What time is the next game? ``` ## Example Feature Extraction ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np MODEL = "cardiffnlp/twitter-roberta-base-dec2020" tokenizer = AutoTokenizer.from_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) # Pytorch model = AutoModel.from_pretrained(MODEL) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() features_mean = np.mean(features[0], axis=0) #features_max = np.max(features[0], axis=0) # # Tensorflow # model = TFAutoModel.from_pretrained(MODEL) # encoded_input = tokenizer(text, return_tensors='tf') # features = model(encoded_input) # features = features[0].numpy() # features_mean = np.mean(features[0], axis=0) # #features_max = np.max(features[0], axis=0) ```
echarlaix/bart-base-cnn-r2-19.4-d35-hybrid
45d5d5098dcd6a347301af116f393e672f13cc2c
2021-08-20T09:56:33.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:cnn_dailymail", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
echarlaix
null
echarlaix/bart-base-cnn-r2-19.4-d35-hybrid
62
null
transformers
5,596
--- language: en license: apache-2.0 tags: - summarization datasets: - cnn_dailymail metrics: - R1 - R2 - RL --- ## facebook/bart-base model fine-tuned on CNN/DailyMail This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) Python library: the linear layers contain **35%** of the original weights. The model contains **53%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method). <div class="graph"><script src="/echarlaix/bart-base-cnn-r2-19.4-d35-hybrid/raw/main/model_card/density_info.js" id="c0afb977-b30c-485d-ac75-afc874392380"></script></div> ## Fine-Pruning details This model was fine-tuned from the HuggingFace [model](https://huggingface.co/facebook/bart-base). A side-effect of the block pruning is that some of the attention heads are completely removed: 38 heads were removed out of a total of 216 (17.6%). ## Details of the CNN/DailyMail dataset | Dataset | Split | # samples | | ------------- | ----- | --------- | | CNN/DailyMail | train | 287K | | CNN/DailyMail | eval | 13K | ### Results | Metric | # Value | | ----------- | --------- | | **Rouge 1** | **42.18** | | **Rouge 2** | **19.44** | | **Rouge L** | **39.17** |
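The card reports ROUGE scores but no usage snippet; a minimal sketch is shown below, assuming the pruned checkpoint loads like any BART summarization model through the standard pipeline. The example article and generation settings are ours, not from the card.

```python
from transformers import pipeline

# Hypothetical usage sketch: the pruned checkpoint keeps the regular BART
# seq2seq interface, so it can be driven through the summarization pipeline.
summarizer = pipeline("summarization", model="echarlaix/bart-base-cnn-r2-19.4-d35-hybrid")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and is the tallest structure in Paris."
)  # example text, not from the model card

print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```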
huggingartists/oxxxymiron
ff2682609172f5173182077e563b36a0cc057c69
2022-07-06T16:17:56.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "dataset:huggingartists/oxxxymiron", "transformers", "huggingartists", "lyrics", "lm-head", "causal-lm" ]
text-generation
false
huggingartists
null
huggingartists/oxxxymiron
62
null
transformers
5,597
---
language: en
datasets:
- huggingartists/oxxxymiron
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/57ecbbdaf70c671be2d8b7bd39112db0.1000x1000x1.jpg&#39;)">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">Oxxxymiron</div>
    <a href="https://genius.com/artists/oxxxymiron">
        <div style="text-align: center; font-size: 14px;">@oxxxymiron</div>
    </a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Oxxxymiron.

The dataset is available [here](https://huggingface.co/datasets/huggingartists/oxxxymiron) and can be used with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/oxxxymiron")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/e254c9iz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), fine-tuned on Oxxxymiron's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1ggk9c4z) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1ggk9c4z/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/oxxxymiron')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingartists/oxxxymiron")
model = AutoModelForCausalLM.from_pretrained("huggingartists/oxxxymiron")
```

A short generation sketch that reuses these objects appears at the end of this card.

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)

[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)

[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
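## Generation sketch

As referenced above, a minimal self-contained generation sketch; the sampling settings (`max_new_tokens`, `top_p`, `temperature`) are illustrative choices, not values recommended by the author:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingartists/oxxxymiron")
model = AutoModelForCausalLM.from_pretrained("huggingartists/oxxxymiron")

inputs = tokenizer("I am", return_tensors="pt")

# Sample one continuation of the prompt.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=50,
        top_p=0.95,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```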
lgris/bp_400h_xlsr2_300M
0cc8ef01e53c684cf94fdffc4b9933c44b3dd1fa
2022-04-01T20:32:02.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "transformers", "mozilla-foundation/common_voice_7_0", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
lgris
null
lgris/bp_400h_xlsr2_300M
62
1
transformers
5,598
---
language:
- pt
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- pt
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: bp_400h_xlsr2_300M
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: pt
    metrics:
    - name: Test WER
      type: wer
      value: 10.83
    - name: Test CER
      type: cer
      value: 3.11
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: sv
    metrics:
    - name: Test WER
      type: wer
      value: 22.48
    - name: Test CER
      type: cer
      value: 9.33
---

# bp_400h_xlsr2_300M
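## Usage sketch

A minimal transcription sketch, assuming this checkpoint is a standard `Wav2Vec2ForCTC` model with a bundled processor (as the repository tags suggest); `example_pt.wav` is a hypothetical 16 kHz mono recording in Portuguese:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL = "lgris/bp_400h_xlsr2_300M"
processor = Wav2Vec2Processor.from_pretrained(MODEL)
model = Wav2Vec2ForCTC.from_pretrained(MODEL)

# Hypothetical input file; wav2vec 2.0 expects 16 kHz mono audio.
speech, _ = librosa.load("example_pt.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```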
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli
fdb646ec000c7ec8076df9db55995839aec61b2d
2021-10-27T07:47:56.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "en", "dataset:mnli", "transformers", "textual-entailment", "nli", "license:mit" ]
text-classification
false
lighteternal
null
lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli
62
2
transformers
5,599
---
language: en
tags:
- textual-entailment
- nli
- pytorch
datasets:
- mnli
license: mit
widget:
- text: "EpCAM is overexpressed in breast cancer. </s></s> EpCAM is downregulated in breast cancer."
---

# BiomedNLP-PubMedBERT finetuned on textual entailment (NLI)

The [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene) model finetuned on the MNLI dataset. It should be useful in textual entailment tasks involving biomedical corpora.

## Usage

Given two sentences (a premise and a hypothesis), the model outputs the logits of entailment, neutral or contradiction.

You can test the model using the HuggingFace model widget on the side:
- Input two sentences (premise and hypothesis) one after the other.
- The model returns the probabilities of 3 labels: entailment (LABEL 0), neutral (LABEL 1) and contradiction (LABEL 2).

To use the model locally on your machine:

```python
# import torch
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")
model = AutoModelForSequenceClassification.from_pretrained("lighteternal/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-mnli")

premise = 'EpCAM is overexpressed in breast cancer'
hypothesis = 'EpCAM is downregulated in breast cancer.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation='only_first')
logits = model(x)[0]
probs = logits.softmax(dim=1)
print('Probabilities for entailment, neutral, contradiction \n', np.around(probs.detach().cpu().numpy(), 3))

# Probabilities for entailment, neutral, contradiction
# 0.001 0.001 0.998
```

A short sketch mapping these probabilities to label names follows the Metrics section.

## Metrics

Evaluation on classification accuracy (entailment, contradiction, neutral) on the MNLI test set:

| Metric   | Value  |
| -------- | ------ |
| Accuracy | 0.8338 |

See the Training Metrics tab for detailed info.
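## Mapping probabilities to labels (sketch)

A small follow-up sketch, reusing `probs` from the snippet above together with the label order stated in this card (0: entailment, 1: neutral, 2: contradiction); the `labels` list is introduced here purely for illustration:

```python
# Map the most probable class index to the label order stated above.
labels = ["entailment", "neutral", "contradiction"]
prediction = labels[int(probs.argmax(dim=1))]
print(prediction)  # expected: "contradiction" for the example premise/hypothesis pair
```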