modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HooshvareLab/bert-fa-base-uncased-ner-peyma | 8b7b63371aa8f1fdad62c0f82d462a22b91b37ab | 2021-05-18T20:55:10.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | HooshvareLab | null | HooshvareLab/bert-fa-base-uncased-ner-peyma | 141 | 1 | transformers | 4,100 | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to make ParsBERT usable in a broader range of applications!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian NER [ARMAN, PEYMA]
This task aims to extract named entities from text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the remaining words of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore) and the entity category. The NER task is therefore a multi-class token classification problem that labels the tokens of a raw input text. There are two primary datasets used for Persian NER: `ARMAN` and `PEYMA`.
### PEYMA
The PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens, of which 41,148 tokens are tagged with one of seven classes:
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|-------------|-------------|-------|------------|--------------|----------|----------------|------------|
| PEYMA | 93.40* | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
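For a quick start without the notebook, here is a minimal inference sketch using the `transformers` token-classification pipeline (the pipeline options and the example sentence below are illustrative and not taken from the notebook):

```python
from transformers import pipeline

# Persian NER with the PEYMA fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="HooshvareLab/bert-fa-base-uncased-ner-peyma",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Example Persian sentence: "The United Nations was founded in 1945 in San Francisco."
print(ner("سازمان ملل متحد در سال ۱۹۴۵ در سان فرانسیسکو تاسیس شد."))
```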
### BibTeX entry and citation info
Please cite as follows in publications:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
TurkuNLP/sbert-cased-finnish-paraphrase | f1a793ca55932e3beeee506cebf92bda504fde52 | 2021-11-29T08:43:26.000Z | [
"pytorch",
"bert",
"feature-extraction",
"fi",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | TurkuNLP | null | TurkuNLP/sbert-cased-finnish-paraphrase | 141 | null | sentence-transformers | 4,101 | ---
language:
- fi
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- text: "Minusta täällä on ihana asua!"
---
# Cased Finnish Sentence BERT model
Finnish Sentence BERT trained from FinBERT. A demo on retrieving the most similar sentences from a dataset of 400 million sentences can be found [here](http://epsilon-it.utu.fi/sbert400m).
## Training
- Library: [sentence-transformers](https://www.sbert.net/)
- FinBERT model: TurkuNLP/bert-base-finnish-cased-v1
- Data: The data provided [here](https://turkunlp.org/paraphrase.html), including the Finnish Paraphrase Corpus and the automatically collected paraphrase candidates (500K positive and 5M negative)
- Pooling: mean pooling
- Task: Binary prediction, whether two sentences are paraphrases or not. Note: the labels 3 and 4 are considered paraphrases, and labels 1 and 2 non-paraphrases. [Details on labels](https://aclanthology.org/2021.nodalida-main.29/)
## Usage
Usage is the same as in the HuggingFace documentation of [the English Sentence Transformer](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens), either through `SentenceTransformer` or `HuggingFace Transformers`.
### SentenceTransformer
```python
from sentence_transformers import SentenceTransformer
sentences = ["Tämä on esimerkkilause.", "Tämä on toinen lause."]
model = SentenceTransformer('TurkuNLP/sbert-cased-finnish-paraphrase')
embeddings = model.encode(sentences)
print(embeddings)
```
### HuggingFace Transformers
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Tämä on esimerkkilause.", "Tämä on toinen lause."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TurkuNLP/sbert-cased-finnish-paraphrase')
model = AutoModel.from_pretrained('TurkuNLP/sbert-cased-finnish-paraphrase')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
A publication detailing the evaluation results is currently being drafted.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
While the publication is being drafted, please cite [this page](https://turkunlp.org/paraphrase.html).
## References
- J. Kanerva, F. Ginter, LH. Chang, I. Rastas, V. Skantsi, J. Kilpeläinen, HM. Kupari, J. Saarni, M. Sevón, and O. Tarkka. Finnish Paraphrase Corpus. In *NoDaLiDa 2021*, 2021.
- N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In *EMNLP-IJCNLP*, pages 3982–3992, 2019.
- A. Virtanen, J. Kanerva, R. Ilo, J. Luoma, J. Luotolahti, T. Salakoski, F. Ginter, and S. Pyysalo. Multilingual is not enough: BERT for Finnish. *arXiv preprint arXiv:1912.07076*, 2019. |
lordtt13/t5-inshorts | e6fb750feda2680df5555582efc87f513bdc9793 | 2020-12-25T23:05:41.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lordtt13 | null | lordtt13/t5-inshorts | 141 | null | transformers | 4,102 | ---
language: en
inference: false
---
## T5-inshorts: T5 model trained on inshorts data
### Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* and here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
### Details of the downstream task (Summarization) - Dataset 📚
- The summarization data has been taken from [Inshorts News Data](https://www.kaggle.com/shashichander009/inshorts-news-data) on Kaggle. Inshorts is a news service that provides short summaries of news from around the web. The dataset contains headlines and summaries of news items along with their sources.
### Model training
The training script is present [here](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-t5-for-summarization.ipynb).
### Pipelining the Model
```python
import transformers
model = transformers.T5ForConditionalGeneration.from_pretrained('lordtt13/t5-inshorts')
tokenizer = transformers.T5Tokenizer.from_pretrained("lordtt13/t5-inshorts")
nlp_fill = transformers.pipeline('summarization', model = model, tokenizer = tokenizer)
nlp_fill('The CBI on Saturday booked four former officials of Syndicate Bank and six others for cheating, forgery, criminal conspiracy and causing ₹209 crore loss to the state-run bank. The accused had availed home loans and credit from Syndicate Bank on the basis of forged and fabricated documents. These funds were fraudulently transferred to the companies owned by the accused persons.', min_length=5, max_length=40)
# Output:
# [{'summary_text': ': CBI books 4 ex-bank officials for cheating, forgery'}]
```
> Created by [Tanmay Thakur](https://github.com/lordtt13) | [LinkedIn](https://www.linkedin.com/in/tanmay-thakur-6bb5a9154/)
> PS: Still looking for more resources to expand my expansion! |
neuralmagic/oBERT-12-upstream-pretrained-dense | 7fa4ae052f9f01619fbb2f7362899ef9944676a6 | 2022-06-20T11:36:50.000Z | [
"pytorch",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-12-upstream-pretrained-dense | 141 | null | null | 4,103 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the pretrained dense model used as a teacher for upstream pruning runs, as described in the paper. The model can be finetuned on any downstream task, just like the standard `bert-base-uncased` model which is used as initialization for training of this model.
Sparse versions of this model:
- 90% sparse: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90`
- 97% sparse: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97`
```
Training objective: masked language modeling (MLM)
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 12
```
Code: _coming soon_
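In the meantime, the dense checkpoint can be loaded with the standard `transformers` API; the snippet below is a loading sketch only, not the official workflow:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# The dense upstream-pretrained checkpoint; fine-tune it on a downstream
# task just like the standard `bert-base-uncased` model.
model_name = "neuralmagic/oBERT-12-upstream-pretrained-dense"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch, sequence length, vocabulary size)
```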
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
RUCAIBox/mvp-question-generation | b0770050f8517c1eb440f10af550a376efaa43c0 | 2022-06-27T02:28:10.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mvp-question-generation | 141 | null | transformers | 4,104 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing ."
example_title: "Example1"
- text: "Generate the question based on the answer: Arthur 's Magazine [X_SEP] Arthur 's Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman 's magazine published by Bauer Media Group in the USA ."
example_title: "Example2"
---
# MVP-question-generation
The MVP-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
Detailed information and instructions can be found at [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-question-generation is a prompt-based model, in which MVP is further equipped with prompts pre-trained using labeled question generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-generation")
>>> inputs = tokenizer(
... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A bolo punch and a hook are both punches used in what sport?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
amanbawa96/legal-bert-based-uncase | 8b6aa344d2b00d55d933e07f16c85dca5445434c | 2022-06-30T23:27:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | amanbawa96 | null | amanbawa96/legal-bert-based-uncase | 141 | null | transformers | 4,105 | Entry not found |
DeepPavlov/xlm-roberta-large-en-ru-mnli | 4c4353240f7a90bae788ae6f86861c25a9c31ea1 | 2021-11-15T08:49:43.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:glue",
"dataset:mnli",
"transformers",
"xlm-roberta-large",
"xlm-roberta-large-en-ru",
"xlm-roberta-large-en-ru-mnli"
] | text-classification | false | DeepPavlov | null | DeepPavlov/xlm-roberta-large-en-ru-mnli | 140 | null | transformers | 4,106 | ---
language:
- en
- ru
datasets:
- glue
- mnli
model_index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
tags:
- xlm-roberta
- xlm-roberta-large
- xlm-roberta-large-en-ru
- xlm-roberta-large-en-ru-mnli
widget:
- text: "Люблю тебя. Ненавижу тебя"
- text: "I love you. I hate you"
---
# XLM-RoBERTa-Large-En-Ru-MNLI
xlm-roberta-large-en-ru fine-tuned on MNLI.
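Since the checkpoint is trained on MNLI, a premise/hypothesis pair can be scored directly. A minimal sketch (the label names are read from the model config rather than assumed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "DeepPavlov/xlm-roberta-large-en-ru-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "Люблю тебя."       # "I love you."
hypothesis = "Ненавижу тебя."  # "I hate you."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Map each probability to its label as defined in the model config
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```
|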
Helsinki-NLP/opus-mt-es-nl | a5b57016fa3d47b914bc2eac885f6c73a448cca2 | 2021-09-09T21:43:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-nl | 140 | null | transformers | 4,107 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-nl
* source languages: es
* target languages: nl
* OPUS readme: [es-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.eval.txt)
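## Usage example
A minimal translation sketch with the `transformers` pipeline (usage code is not part of the original OPUS-MT card; this is the common pattern for Marian checkpoints):
```python
from transformers import pipeline

# Spanish -> Dutch translation with the Marian checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-nl")

# Example sentence: "The house is blue."
print(translator("La casa es azul.")[0]["translation_text"])
```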
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.nl | 50.6 | 0.681 |
|
JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k | e3a9f9b5fa7ab2092f14b37859914fb024e12eff | 2021-09-23T15:49:08.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k | 140 | null | asteroid | 4,108 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_noisy
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On the Libri3Mix min test set:
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
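Usage example:
A minimal separation sketch with the `asteroid` package; the generic `BaseModel.from_pretrained` / `separate` calls follow Asteroid's pretrained-model API, and the file name below is a placeholder:
```python
from asteroid.models import BaseModel

# Download the pretrained ConvTasNet checkpoint from the Hugging Face Hub
model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k")

# Separate a local 16 kHz mixture; the estimated sources are written next to
# the input file (one wav per separated speaker).
model.separate("mixture.wav")
```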
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
geckos/pegasus-fined-tuned-on-paraphrase | 286f7e3e917279d29dc4be6e2f022e844c4ba6c3 | 2021-11-11T13:01:43.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | geckos | null | geckos/pegasus-fined-tuned-on-paraphrase | 140 | 2 | transformers | 4,109 | Entry not found |
google/t5-small-ssm | 22210988a4ab1ce2b2b8eb8e9e82b4a6c4095bec | 2021-06-23T01:52:56.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-small-ssm | 140 | null | transformers | 4,110 | ---
language: en
datasets:
- c4
- wikipedia
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is usable for closed book question answering.
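For example, the checkpoint can be loaded for such fine-tuning with the standard T5 classes; the snippet below is a loading sketch only, not a training recipe:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-small-ssm")
model = T5ForConditionalGeneration.from_pretrained("google/t5-small-ssm")

# Closed book QA is framed as text-to-text: question in, answer out.
# Outputs are only meaningful after fine-tuning, as noted above.
inputs = tokenizer("question: Who developed the theory of relativity?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```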
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
lvwerra/pegasus-samsum | 8791cbe506f275dd716874ededef6ac337c3ad03 | 2021-10-25T14:57:33.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | lvwerra | null | lvwerra/pegasus-samsum | 140 | null | transformers | 4,111 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4177
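## Usage example
A minimal dialogue-summarization sketch, assuming the standard `transformers` summarization pipeline (the dialogue below is an arbitrary illustration):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lvwerra/pegasus-samsum")

dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes! 12:30 at the usual place?
Anna: Perfect, see you there."""

print(summarizer(dialogue)[0]["summary_text"])
```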
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 0.4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6092 | 0.03 | 500 | 1.6488 |
| 1.9715 | 0.07 | 1000 | 1.5444 |
| 1.8325 | 0.1 | 1500 | 1.5093 |
| 1.876 | 0.14 | 2000 | 1.4890 |
| 1.3081 | 0.17 | 2500 | 1.4737 |
| 1.7769 | 0.2 | 3000 | 1.4496 |
| 1.6276 | 0.24 | 3500 | 1.4430 |
| 1.6624 | 0.27 | 4000 | 1.4288 |
| 1.9202 | 0.31 | 4500 | 1.4235 |
| 1.4404 | 0.34 | 5000 | 1.4189 |
| 1.8016 | 0.37 | 5500 | 1.4177 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
sivasankalpp/dpr-multidoc2dial-structure-ctx-encoder | 142cafa42d34a8dd5e62d29995b5ba6fd3a35da2 | 2021-11-10T21:18:24.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | sivasankalpp | null | sivasankalpp/dpr-multidoc2dial-structure-ctx-encoder | 140 | null | transformers | 4,112 | Entry not found |
speechbrain/asr-transformer-aishell | 7bacef7ce8baf8e84755641524e7cf9fe7c314a3 | 2022-06-21T23:49:14.000Z | [
"en",
"dataset:aishell",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"Transformers",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-transformer-aishell | 140 | 1 | speechbrain | 4,113 | ---
language: "en"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- Transformers
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- aishell
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Transformer for AISHELL (Mandarin Chinese)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on AISHELL (Mandarin Chinese)
within SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Dev CER | Test CER | GPUs | Full Results |
|:-------------:|:--------------:|:--------------:|:--------:|:--------:|
| 05-03-21 | 5.60 | 6.04 | 2xV100 32GB | [Google Drive](https://drive.google.com/drive/folders/1zlTBib0XEwWeyhaXDXnkqtPsIBI18Uzs?usp=sharing)|
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on the training transcriptions of AISHELL-1.
- Acoustic model made of a transformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
To Train this system from scratch, [see our SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/develop/recipes/AISHELL-1).
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Mandarin)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-aishell", savedir="pretrained_models/asr-transformer-aishell")
asr_model.transcribe_file("speechbrain/asr-transformer-aishell/example_mandarin.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: '986a2175').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/AISHELL-1/ASR/transformer/
python train.py hparams/train_ASR_transformer.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1QU18YoauzLOXueogspT0CgR5bqJ6zFfu?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
unc-nlp/lxmert-gqa-uncased | 4055268169a6a2e9a59faf42f478104438cc0fda | 2020-09-08T19:05:59.000Z | [
"pytorch",
"lxmert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | unc-nlp | null | unc-nlp/lxmert-gqa-uncased | 140 | null | transformers | 4,114 | Entry not found |
yunusemreemik/logo-qna-model | 3c5761c856ee954dea04795bcfa05fa3e8fe099e | 2021-08-03T12:41:38.000Z | [
"pytorch",
"bert",
"question-answering",
"tr",
"transformers",
"autotrain_compatible"
] | question-answering | false | yunusemreemik | null | yunusemreemik/logo-qna-model | 140 | null | transformers | 4,115 | ---
language: tr
---
# Logo Turkish Question Answering Model : Question Answering
Inspired by savasy/bert-base-turkish-squad,
* Inspired model: https://huggingface.co/savasy/bert-base-turkish-squad
* BERT-base: https://huggingface.co/dbmdz/bert-base-turkish-uncased
* Dataset: Logo Private QnA Chatbot Database
# Training Code
```
# QuestionAnsweringModel / QuestionAnsweringArgs come from the simpletransformers library
from simpletransformers.question_answering import QuestionAnsweringModel, QuestionAnsweringArgs

model_args = QuestionAnsweringArgs()
model_args.train_batch_size = 16
model_args.evaluate_during_training = True
model_args.n_best_size=3
model_args.num_train_epochs=5
train_args = {
"reprocess_input_data": True,
"overwrite_output_dir": True,
"use_cached_eval_features": True,
"output_dir": f"outputs/bert",
"best_model_dir": f"outputs/bert/best_model1",
"evaluate_during_training": True,
"max_seq_length": 128,
"num_train_epochs": 10,
"evaluate_during_training_steps": 1000,
"wandb_project": "Question Answer Application",
"wandb_kwargs": {"name": "dbmdz/bert-base-turkish-uncased\"},
"save_model_every_epoch": False,
"save_eval_checkpoints": False,
"n_best_size":3,
# "use_early_stopping": True,
# "early_stopping_metric": "mcc",
"n_gpu": 4,
# "manual_seed": 4,
"use_multiprocessing": True,
"train_batch_size": 126,
"eval_batch_size": 64,
# "config": {
# "output_hidden_states": True
# }
}
model = QuestionAnsweringModel(
"bert","dbmdz/bert-base-turkish-uncased\", args=train_args
)
model.train_model(train, eval_data=test)
```
# Dataset Sample
```
{
"context": "Varlıklara ait yeniden değerleme toplamlarının özet olarak alındığı rapor seçeneğidir. Varlık Yönetimi program bölümünde Raporlar menüsü altında yer alır. Rapor yıllık olarak alınır. Toplamların alınacağı yıl, Yıl filtre satırında belirtilir. Rapor filtre seçenekleri aşağıdaki tabloda yer almaktadır.",
"qas": [
{
"id": "01017",
"is_impossible": false,
"question": "Yeniden Değerleme Özeti ne işe yarar",
"answers": [
{
"text": "Varlıklara ait yeniden değerleme toplamlarının özet olarak alındığı rapor seçeneğidir.",
"answer_start": 0
}
]
},
{
"id": "01018",
"is_impossible": false,
"question": " Yeniden Değerleme Özetine nereden ulaşırım",
"answers": [
{
"text": "Varlık Yönetimi program bölümünde Raporlar menüsü altında yer alır.",
"answer_start": 87
}
]
}
  ]
}
```
# Example Usage
> Load Model
```
#Required Libraries
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
import torch
#Model Path
hface_path = "yunusemreemik/logo-qna-model"
# Tokenizer for the context and question
tokenizer = AutoTokenizer.from_pretrained(hface_path)
# Question-answering model
model = AutoModelForQuestionAnswering.from_pretrained(hface_path)
# Question-answering pipeline
nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
```
> Apply the model.
> Please dont forget the delete backslashes "\" before run
```
e_arsiv ="e-Arşiv Tipleri, e-Arşiv fatura türünün belirlendiği alandır. İlgili cari hesap kartında \\nLogoConnect sayfasında belirlenen e-arşiv tipi alana öndeğer olarak \\naktarılır. Standart faturalar için herhangi bir seçim yapılmaz. \\nÖzel matrah uygulanan tütün, altın, gümüş, gazete, dergi, belediye \\nşehir yolcu taşımacılığı ve telefon kartı satışları için kesilen faturalar. \\nİstisna uygulanan faturalar. (İhracat teslimleri ve bu teslimlere ilişkin hizmetler, \\nmal ihracatı, hizmet ihracatı, serbest bölgelerdeki müşteriler için yapılan fason hizmetler vs..) \\nAraç Tescil Faturası, Araç tescil için kesilen faturalardır."
answer_text = nlp(question="İlgili cari hesap kartları nerede belirlenir?", context=e_arsiv)
print(answer_text )
```
```
print(nlp(question="", context=e_arsiv))
```
# Evaluation
```
(160,
{'global_step': [16, 32, 48, 64, 80, 96, 112, 128, 144, 160],
'correct': [11, 15, 18, 18, 19, 16, 16, 16, 14, 14],
'similar': [23, 26, 22, 21, 21, 24, 24, 23, 25, 25],
'incorrect': [8, 1, 2, 3, 2, 2, 2, 3, 3, 3],
'train_loss': [0.8277238607406616,
0.7876648306846619,
0.44657397270202637,
0.32337626814842224,
0.2009371519088745,
0.15247923135757446,
0.11289173364639282,
0.06762214750051498,
0.06813357770442963,
0.04011240229010582],
'eval_loss': [-9.3046875,
-8.8984375,
-9.1171875,
-9.03125,
-9.046875,
-8.984375,
-9.1171875,
-9.296875,
-9.296875,
-9.296875]})
``` |
pile-of-law/legalbert-large-1.7M-1 | eacf57e9bcc43d0a0d2d74da5196dbb912b38b2b | 2022-07-04T07:27:42.000Z | [
"pytorch",
"bert",
"en",
"dataset:pile-of-law/pile-of-law",
"arxiv:1907.11692",
"arxiv:1810.04805",
"arxiv:2110.00976",
"arxiv:2207.00220",
"transformers",
"fill-mask"
] | fill-mask | false | pile-of-law | null | pile-of-law/legalbert-large-1.7M-1 | 140 | 3 | transformers | 4,116 | ---
language:
- en
datasets:
- pile-of-law/pile-of-law
pipeline_tag: fill-mask
---
# Pile of Law BERT large model (uncased)
Pretrained model on English language legal and administrative text using the [RoBERTa](https://arxiv.org/abs/1907.11692) pretraining objective.
## Model description
Pile of Law BERT large is a transformers model with the [BERT large model (uncased)](https://huggingface.co/bert-large-uncased) architecture pretrained on the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law), a dataset consisting of ~256GB of English language legal and administrative text for language model pretraining.
## Intended uses & limitations
You can use the raw model for masked language modeling or fine-tune it for a downstream task. Since this model was pretrained on a English language legal and administrative text corpus, legal downstream tasks will likely be more in-domain for this model.
## How to use
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='pile-of-law/legalbert-large-1.7M-1')
>>> pipe("An [MASK] is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.")
[{'sequence': 'an appeal is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.6343119740486145,
'token': 1151,
'token_str': 'appeal'},
{'sequence': 'an objection is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.10488124936819077,
'token': 3542,
'token_str': 'objection'},
{'sequence': 'an application is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.0708756372332573,
'token': 1999,
'token_str': 'application'},
{'sequence': 'an example is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.02558572217822075,
'token': 3677,
'token_str': 'example'},
{'sequence': 'an action is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.013266939669847488,
'token': 1347,
'token_str': 'action'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('pile-of-law/legalbert-large-1.7M-1')
model = BertModel.from_pretrained('pile-of-law/legalbert-large-1.7M-1')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('pile-of-law/legalbert-large-1.7M-1')
model = TFBertModel.from_pretrained('pile-of-law/legalbert-large-1.7M-1')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
Please see Appendix G of the Pile of Law paper for copyright limitations related to dataset and model use.
This model can have biased predictions. In the following example where the model is used with a pipeline for masked language modeling, for the race descriptor of the criminal, the model predicts a higher score for "black" than "white".
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='pile-of-law/legalbert-large-1.7M-1')
>>> pipe("The clerk described the robber as a “thin [MASK] male, about six foot tall, wearing a gray hoodie, blue jeans", targets=["black", "white"])
[{'sequence': 'the clerk described the robber as a thin black male, about six foot tall, wearing a gray hoodie, blue jeans',
'score': 0.0013972163433209062,
'token': 4311,
'token_str': 'black'},
{'sequence': 'the clerk described the robber as a thin white male, about six foot tall, wearing a gray hoodie, blue jeans',
'score': 0.0009401230490766466,
'token': 4249,
'token_str': 'white'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Pile of Law BERT large model was pretrained on the Pile of Law, a dataset consisting of ~256GB of English language legal and administrative text for language model pretraining. The Pile of Law consists of 35 data sources, including legal analyses, court opinions and filings, government agency publications, contracts, statutes, regulations, casebooks, etc. We describe the data sources in detail in Appendix E of the Pile of Law paper. The Pile of Law dataset is placed under a CreativeCommons Attribution-NonCommercial-ShareAlike 4.0 International license.
## Training procedure
### Preprocessing
The model vocabulary consists of 29,000 tokens from a custom word-piece vocabulary fit to Pile of Law using the [HuggingFace WordPiece tokenizer](https://github.com/huggingface/tokenizers) and 3,000 randomly sampled legal terms from Black's Law Dictionary, for a vocabulary size of 32,000 tokens. The 80-10-10 masking, corruption, leave split, as in [BERT](https://arxiv.org/abs/1810.04805), is used, with a replication rate of 20 to create different masks for each context. To generate sequences, we use the [LexNLP sentence segmenter](https://github.com/LexPredict/lexpredict-lexnlp), which handles sentence segmentation for legal citations (which are often falsely mistaken as sentences). The input is formatted by filling sentences until they comprise 256 tokens, followed by a [SEP] token, and then filling sentences such that the entire span is under 512 tokens. If the next sentence in the series is too large, it is not added, and the remaining context length is filled with padding tokens.
### Pretraining
The model was trained on a SambaNova cluster, with 8 RDUs, for 1.7 million steps. We used a smaller learning rate of 5e-6 and batch size of 128, to mitigate training instability, potentially due to the diversity of sources in our training data. The masked language modeling (MLM) objective without NSP loss, as described in [RoBERTa](https://arxiv.org/abs/1907.11692), was used for pretraining. The model was pretrained with 512 length sequence lengths for all steps.
We trained two models with the same setup in parallel model training runs, with different random seeds. We selected the lowest log likelihood model, [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1), which we refer to as PoL-BERT-Large, for experiments, but also release the second model, [pile-of-law/legalbert-large-1.7M-2](https://huggingface.co/pile-of-law/legalbert-large-1.7M-2).
## Evaluation results
When finetuned on the CaseHOLD variant provided by the [LexGLUE paper](https://arxiv.org/abs/2110.00976), this model, PoL-BERT-Large, achieves the following results. In the table below, we also report results for [BERT-Large-Uncased](https://huggingface.co/bert-large-uncased) and [CaseLaw-BERT](https://huggingface.co/zlucia/custom-legalbert). We report results on the models with hyperparameter tuning on the downstream task and the result reported for the CaseLaw-BERT model from the [LexGLUE paper](https://arxiv.org/abs/2110.00976), which uses a fixed experimental setup.
CaseHOLD test results:
| Model | F1 |
| ---------------------|-----|
| CaseLaw-BERT (tuned)| 78.5 |
| CaseLaw-BERT (LexGLUE)| 75.4 |
| PoL-BERT-Large| 75.0 |
| BERT-Large-Uncased| 71.3|
### BibTeX entry and citation info
```bibtex
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson*, Peter and Krass*, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
``` |
doc2query/msmarco-dutch-mt5-base-v1 | b6ea6a440c642e57deb50b512d31eb29fa06dc5f | 2022-04-29T11:50:14.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"nl",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-dutch-mt5-base-v1 | 140 | 1 | transformers | 4,117 | ---
language: nl
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python is een programmeertaal die begin jaren 90 ontworpen en ontwikkeld werd door Guido van Rossum, destijds verbonden aan het Centrum voor Wiskunde en Informatica (daarvoor Mathematisch Centrum) in Amsterdam. De taal is mede gebaseerd op inzichten van professor Lambert Meertens, die een taal genaamd ABC had ontworpen, bedoeld als alternatief voor BASIC, maar dan met geavanceerde datastructuren. Inmiddels wordt de taal doorontwikkeld door een enthousiaste groep, tot juli 2018 geleid door Van Rossum. Deze groep wordt ondersteund door vrijwilligers op het internet. De ontwikkeling van Python wordt geleid door de Python Software Foundation. Python is vrije software."
license: apache-2.0
---
# doc2query/msmarco-dutch-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-dutch-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
lyndonnixon/destination-image-classifier | b865dc762c5596c5072141f0ff4ed6a5e04c50a5 | 2022-06-15T15:00:07.000Z | [
"pytorch",
"beit",
"image-classification",
"en",
"dataset:destinationphotography",
"transformers",
"tourism",
"destinations",
"destinationimage",
"license:cc-by-nc-sa-4.0"
] | image-classification | false | lyndonnixon | null | lyndonnixon/destination-image-classifier | 140 | null | transformers | 4,118 | |
SebastianS/bert-finetuned-squad | ec039edef36c600580d90a7764171bef1826eb1e | 2022-05-15T16:19:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | SebastianS | null | SebastianS/bert-finetuned-squad | 140 | null | transformers | 4,119 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
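## Usage example
A minimal extractive question-answering sketch, assuming the standard `transformers` pipeline (the context and question are arbitrary illustrations):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SebastianS/bert-finetuned-squad")

context = "The Amazon rainforest covers most of the Amazon basin of South America."
result = qa(question="What does the Amazon rainforest cover?", context=context)
print(result["answer"], result["score"])
```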
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
manu/mplt_untrained | bc42c334c22339c31bfcede2d4a2d038f2b7aae6 | 2022-07-08T21:51:43.000Z | [
"pytorch",
"mplt",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | manu | null | manu/mplt_untrained | 140 | null | transformers | 4,120 | Entry not found |
Narrativa/distilroberta-finetuned-stereotype-detection | 86927ae860472d07c2645ce2f2e6e92a7e19ff78 | 2021-09-13T14:52:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"stereotype",
"gender",
"gender_bias",
"license:apache-2.0",
"model-index"
] | text-classification | false | Narrativa | null | Narrativa/distilroberta-finetuned-stereotype-detection | 139 | 1 | transformers | 4,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
- stereotype
- gender
- gender_bias
widget:
- text: "Cauterize is not just for fans of the guitarist or his other projects, but those that love music that is both aggressive and infectious and gave the album 4 out of 5 stars ."
metrics:
- accuracy
model-index:
- name: distilRoberta-stereotype
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.989151002901476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilRoberta-stereotype
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0651
- Accuracy: 0.9892
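## Usage example
A minimal classification sketch, assuming the standard `transformers` text-classification pipeline; the example sentence is the one from the widget above:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Narrativa/distilroberta-finetuned-stereotype-detection",
)

text = ("Cauterize is not just for fans of the guitarist or his other projects, "
        "but those that love music that is both aggressive and infectious and "
        "gave the album 4 out of 5 stars .")
print(classifier(text))
```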
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0783 | 1.0 | 5615 | 0.0703 | 0.9847 |
| 0.0468 | 2.0 | 11230 | 0.0573 | 0.9863 |
| 0.0316 | 3.0 | 16845 | 0.0580 | 0.9882 |
| 0.0172 | 4.0 | 22460 | 0.0591 | 0.9885 |
| 0.0098 | 5.0 | 28075 | 0.0651 | 0.9892 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
NbAiLab/nb-wav2vec2-1b-bokmaal | 45ac18420d8d8b1d7b6f049bb7ca2212f4c39de8 | 2022-06-13T10:21:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"nb-NO",
"dataset:NbAiLab/NPSC",
"transformers",
"NbAiLab/NPSC",
"no",
"nb",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | NbAiLab | null | NbAiLab/nb-wav2vec2-1b-bokmaal | 139 | 2 | transformers | 4,122 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- no
- nb
- nb-NO
datasets:
- NbAiLab/NPSC
language:
- nb-NO
model-index:
- name: nb-wav2vec2-1b-bokmaal
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: NPSC
type: NbAiLab/NPSC
args: 16K_mp3_bokmaal
metrics:
- name: Test (Bokmål) WER
type: wer
value: 0.0633
- name: Test (Bokmål) CER
type: cer
value: 0.0248
---
# Norwegian Wav2Vec2 Model - 1B Bokmål
This model is finetuned on top of feature extractor [XLS-R](https://huggingface.co/facebook/wav2vec2-xls-r-1b) from Facebook/Meta. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
- **WER: 0.0633** (0.0738)
- **CER: 0.0248** (0.0263)
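## Usage example
A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline (the audio path is a placeholder; 16 kHz mono input is assumed, matching the NPSC training data):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/nb-wav2vec2-1b-bokmaal",
)

print(asr("speech_sample.wav")["text"])
```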
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER |
|:--------------|:------------|
| NbAiLab/nb-wav2vec2-1b-bokmaal (this model) | 6.33 |
| [NbAiLab/nb-wav2vec2-300m-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-bokmaal) | 7.03 |
| [NbAiLab/nb-wav2vec2-300m-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-nynorsk) | 12.22 |
## Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build a better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
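As a rough sketch only (not the exact commands we used), wiring a KenLM model into CTC decoding with `pyctcdecode` looks roughly like this; the `5gram.arpa` path is a placeholder for a language model you have built yourself or copied from the repo linked above:

```python
from pyctcdecode import build_ctcdecoder
from transformers import AutoProcessor, Wav2Vec2ProcessorWithLM

processor = AutoProcessor.from_pretrained("NbAiLab/nb-wav2vec2-1b-bokmaal")

# Sort the tokenizer vocabulary by token id so the decoder labels line up with the CTC logits.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

decoder = build_ctcdecoder(
    labels=labels,
    kenlm_model_path="5gram.arpa",  # placeholder: your 5-gram KenLM model
)

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-1b-bokmaal-with-lm")
```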
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="facebook/wav2vec2-xls-r-1b"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="40"
--per_device_train_batch_size="12"
--per_device_eval_batch_size="12"
--gradient_accumulation_steps="2"
--learning_rate="2e-5"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--ctc_zero_infinity=True
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="16"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter | Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
| gradient_accumulation_steps | Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate | Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs | Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs |
|
facebook/convnext-xlarge-224-22k-1k | cc348566f24077249a0bc049a373a56b669ff300 | 2022-06-27T08:55:36.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-xlarge-224-22k-1k | 139 | 1 | transformers | 4,123 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (xlarge-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-xlarge-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-xlarge-224-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
mrm8488/t5-base-finetuned-quartz | 3322d94c76ac868fb82558396eb6d1ae1114645e | 2020-12-11T21:55:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:quartz",
"arxiv:1910.10683",
"transformers",
"question-answering",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/t5-base-finetuned-quartz | 139 | 1 | transformers | 4,124 | ---
language: en
datasets:
- quartz
pipeline_tag: question-answering
---
# T5-base fine-tuned on QuaRTz
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QuaRTz](https://allenai.org/data/quartz) for **QA** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the dataset 📚
**QuaRTz** is a crowdsourced dataset of 3,864 multiple-choice questions about open-domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs).
The dataset is split into:
|Set | Samples|
|-----|--------|
|Train | 2696 |
|Valid | 384 |
|Test | 784 |
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The *question*, *context* (`para` field) and *options* (`choices` field) are concatenated and passed to the **encoder**. The **decoder** receives the right *answer* (by querying `answerKey` field). More details about the dataset fields/format [here](https://huggingface.co/nlp/viewer/?dataset=quartz)
## Results 📋
|Set | Metric | Score |
|-----|--------|-------|
|Validation | Accuracy (EM) | **83.59**|
|Test | Accuracy (EM) | **81.50**|
## Model in Action 🚀
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-quartz")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-quartz")
def get_response(question, fact, opts, max_length=16):
input_text = 'question: %s context: %s options: %s' % (question, fact, opts)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
return tokenizer.decode(output[0])
fact = 'The sooner cancer is detected the easier it is to treat.'
question = 'John was a doctor in a cancer ward and knew that early detection was key. The cancer being detected quickly makes the cancer treatment'
opts = 'Easier, Harder'
get_response(question, fact, opts)
# output: 'Easier'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
hf-internal-testing/tiny-random-data2vec-xvector | e5e46e69598efd3ecbffb844355537d0bca9c1ee | 2022-03-03T12:26:14.000Z | [
"pytorch",
"data2vec-audio",
"audio-xvector",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-data2vec-xvector | 139 | null | transformers | 4,125 | Entry not found |
bhadresh-savani/electra-base-squad2 | e06d14d92455725024d07db7d552814aa94ddfe1 | 2022-04-13T14:30:20.000Z | [
"pytorch",
"tf",
"jax",
"electra",
"question-answering",
"dataset:squad_v2",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | bhadresh-savani | null | bhadresh-savani/electra-base-squad2 | 139 | null | transformers | 4,126 | ---
datasets:
- squad_v2
license: cc-by-4.0
---
# electra-base for QA
## Overview
**Language model:** electra-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
seed=42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2",tokenizer="deepset/electra-base-squad2")
```
## Authors
Vaishali Pal `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Note:
This model was borrowed from the deepset (Haystack) model repo in order to add a TensorFlow version of the model. |
Elijah629/DialoGPT-shrek | ab9e636cc5971aa75300533101324589b6ab84a7 | 2022-06-18T04:26:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Elijah629 | null | Elijah629/DialoGPT-shrek | 139 | null | transformers | 4,127 | ---
tags:
- conversational
--- |
ytling/gpt-neo-125m-finetuned | 7b2d89a27054555ac05344d738272368fc632e45 | 2022-07-27T07:05:16.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | ytling | null | ytling/gpt-neo-125m-finetuned | 139 | null | transformers | 4,128 | ## GPT Neo 125m fine-tuned
#### Pushing model to repo
1. Log in to Hugging Face:
```
from huggingface_hub import notebook_login
notebook_login()
```
2. Then push the model and tokenizer to the repo:
```
model.push_to_hub("gpt-neo-125m-finetuned", use_temp_dir=True)
tokenizer.push_to_hub("gpt-neo-125m-finetuned", use_temp_dir=True)
```
---
#### Using the Model
Load the model along with the tokenizer:
```
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("ytling/gpt-neo-125m-finetuned", bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>')
gpt_model = GPTNeoForCausalLM.from_pretrained("ytling/gpt-neo-125m-finetuned").cuda()
gpt_model.resize_token_embeddings(len(tokenizer))
```
To use the model, pass the input text, the loaded model, and the tokenizer into the `gpt_model()` function:
```
import re

def gpt_model(block_text, model, tokenizer):
block_dict = {
# add labels here
"Use Case":None
}
for label in block_dict:
prompt = f"<|startoftext|>Text: {block_text}\n{label}: "
token_prompt = tokenizer(f"{prompt}", return_tensors='pt', padding=True).input_ids.cuda()
output = model.generate(token_prompt, do_sample=False, top_k=50, max_length=512, top_p=0.80,
temperature=1.08, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
decode_output = tokenizer.decode(output[0], skip_special_tokens=True)
try:
block_dict[label] = re.findall(f"\n{label}: (.*)", decode_output)[-1]
except:
pass
return block_dict
```
This returns a dict containing the predicted entities within the text.
```
# Eg.
{'Use Case': "'unify contact centre, unified communications, and real-time communications API capabilities within a single software solution.'"}
```
|
BSC-TeMU/roberta-large-bne | 2c4265d25a2832cb7f60ed3da1a904f2e5e75192 | 2021-10-21T10:32:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:bne",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | BSC-TeMU | null | BSC-TeMU/roberta-large-bne | 138 | 8 | transformers | 4,129 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
metrics:
- "ppl"
widget:
- text: "Este año las campanadas de La Sexta las <mask> Pedroche y Chicote."
- text: "El artista Antonio Orozco es un colaborador de La <mask>."
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
- text: "Hay base legal dentro del marco <mask> actual."
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne
# RoBERTa large trained with data from National Library of Spain (BNE)
## Model Description
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
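As a quick sanity check, a minimal fill-mask sketch is shown below (the example sentence is one of the widget examples above; given the notice at the top of this card, the newer `PlanTL-GOB-ES/roberta-large-bne` model ID may be preferred):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="BSC-TeMU/roberta-large-bne")

print(fill_mask("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
```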
## Training corpora and preprocessing
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.
To obtain a high-quality training corpus, the corpus was preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive content. Document boundaries were kept during the process. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was applied, resulting in 570GB of text.
Some of the statistics of the corpus:
| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |
## Tokenization and pre-training
The training corpus was tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The RoBERTa-large-bne pre-training consists of masked language model training following the approach used for RoBERTa large. Training lasted a total of 96 hours on 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM.
## Evaluation and results
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Helsinki-NLP/opus-mt-ar-tr | 759d47d6d139851222b55f7996a0467c037d7026 | 2021-01-18T07:47:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-tr | 138 | null | transformers | 4,130 | ---
language:
- ar
- tr
tags:
- translation
license: apache-2.0
---
### ara-tur
* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.tur | 33.1 | 0.619 |
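A minimal translation sketch with the 🤗 pipeline (the Arabic input sentence is illustrative only):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-tr")

# Illustrative input: "Where is the nearest train station?" in Arabic.
print(translator("أين أقرب محطة قطار؟"))
```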
### System Info:
- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.9570000000000001
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
SEBIS/code_trans_t5_small_api_generation_multitask | 3d8e13858823ad53033d13d5df363823b001d531 | 2021-06-23T09:54:09.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_api_generation_multitask | 138 | null | transformers | 4,131 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for API recommendation generation
Pretrained model for API recommendation generation using the T5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model can be used to generate API usage recommendations for Java programming tasks.
### How to use
Here is how to use this model to generate API recommendations using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_api_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_api_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
filco306/gpt2-shakespeare-paraphraser | 7a0e25bea9e0626396aacad0d7cf9c32d5813c71 | 2021-08-28T19:54:12.000Z | [
"pytorch",
"text-generation",
"arxiv:2010.05700",
"transformers"
] | text-generation | false | filco306 | null | filco306/gpt2-shakespeare-paraphraser | 138 | 1 | transformers | 4,132 | # GPT2 Shakespeare style transfer paraphraser
This is the trained Shakespeare-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
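The exact paraphrase-generation pipeline (input formatting, sampling settings) is described in the authors' code; as a rough sketch only, the checkpoint can be loaded like any other GPT-2-style causal LM, assuming the repo ships a standard GPT-2 tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("filco306/gpt2-shakespeare-paraphraser")
model = AutoModelForCausalLM.from_pretrained("filco306/gpt2-shakespeare-paraphraser")

# Illustrative prompt only; see the paper's repository for the exact input format used for paraphrasing.
inputs = tokenizer("Shall I compare thee to a summer's day?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```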
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
flax-community/t5-large-wikisplit | 86940cdf19268efda140b9836287b32093cc684f | 2021-07-16T12:40:17.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wiki_split",
"arxiv:1907.12461",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/t5-large-wikisplit | 138 | null | transformers | 4,133 | ---
datasets:
- wiki_split
widget:
- text: "Mary likes to play football in her freetime whenever she meets with her friends that are very nice people."
---
# T5 model for sentence splitting in English
Sentence Split is the task of dividing a long sentence into multiple sentences.
E.g.:
```
Mary likes to play football in her freetime whenever she meets with her friends that are very nice people.
```
could be split into
```
Mary likes to play football in her freetime whenever she meets with her friends.
```
```
Her friends are very nice people.
```
## How to use it in your code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("flax-community/t5-large-wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("flax-community/t5-large-wikisplit")
complex_sentence = "This comedy drama is produced by Tidy , the company she co-founded in 2008 with her husband David Peet , who is managing director ."
sample_tokenized = tokenizer(complex_sentence, return_tensors="pt")
answer = model.generate(sample_tokenized['input_ids'], attention_mask = sample_tokenized['attention_mask'], max_length=256, num_beams=5)
gene_sentence = tokenizer.decode(answer[0], skip_special_tokens=True)
gene_sentence
"""
Output:
This comedy drama is produced by Tidy. She co-founded Tidy in 2008 with her husband David Peet, who is managing director.
"""
```
## Datasets:
[Wiki_Split](https://research.google/tools/datasets/wiki-split/)
## Current Baseline from [paper](https://arxiv.org/abs/1907.12461)

## Our Results:
| Model | Exact | SARI | BLEU |
| --- | --- | --- | --- |
| [t5-base-wikisplit](https://huggingface.co/flax-community/t5-base-wikisplit) | 17.93 | 67.5438 | 76.9 |
| [t5-v1_1-base-wikisplit](https://huggingface.co/flax-community/t5-v1_1-base-wikisplit) | 18.1207 | 67.4873 | 76.9478 |
| [byt5-base-wikisplit](https://huggingface.co/flax-community/byt5-base-wikisplit) | 11.3582 | 67.2685 | 73.1682 |
| [t5-large-wikisplit](https://huggingface.co/flax-community/t5-large-wikisplit) | 18.6632 | 68.0501 | 77.1881 | |
gagan3012/bert-tiny-finetuned-ner | 54db92457baa1b88a45e52c048df8461498dc9d3 | 2021-09-01T23:50:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | gagan3012 | null | gagan3012/bert-tiny-finetuned-ner | 138 | 2 | transformers | 4,134 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-tiny-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8083060109289617
- name: Recall
type: recall
value: 0.8273856136033113
- name: F1
type: f1
value: 0.8177345348001547
- name: Accuracy
type: accuracy
value: 0.9597597979252387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-ner
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Precision: 0.8083
- Recall: 0.8274
- F1: 0.8177
- Accuracy: 0.9598
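Although this auto-generated card does not include a usage example, a minimal token-classification sketch would look like this (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gagan3012/bert-tiny-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```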
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0355 | 1.0 | 878 | 0.1692 | 0.8072 | 0.8248 | 0.8159 | 0.9594 |
| 0.0411 | 2.0 | 1756 | 0.1678 | 0.8101 | 0.8277 | 0.8188 | 0.9600 |
| 0.0386 | 3.0 | 2634 | 0.1697 | 0.8103 | 0.8269 | 0.8186 | 0.9599 |
| 0.0373 | 4.0 | 3512 | 0.1694 | 0.8106 | 0.8263 | 0.8183 | 0.9600 |
| 0.0383 | 5.0 | 4390 | 0.1689 | 0.8083 | 0.8274 | 0.8177 | 0.9598 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
indonesian-nlp/gpt2-medium-indonesian | 5e5fa4fe532b734c2c7fdb14401cbba96ac7de7b | 2022-05-28T10:33:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"id",
"transformers"
] | text-generation | false | indonesian-nlp | null | indonesian-nlp/gpt2-medium-indonesian | 138 | null | transformers | 4,135 | ---
language: id
widget:
- text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira."
---
# GPT2-medium-indonesian
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found [here](https://huggingface.co/spaces/indonesian-nlp/gpt2-app).
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='indonesian-nlp/gpt2-medium-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)
[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\
“Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\
Tuhan akan memberi lebih dari apa yang kita'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
model = GPT2Model.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
model = TFGPT2Model.from_pretrained('indonesian-nlp/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/),
[mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/indonesian-nlp/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/indonesian-nlp/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.

The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).

### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors. We created prompts with the following scheme:
* Person - we assessed 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, Neutral (no ethnicity)
* Topic - we used 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: *let [person] ...*
* define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.

### Religion bias
With the same methodology as above, we generated 1,400 texts to assess bias across religion and gender vectors. We assessed 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism), with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.

## Training data
The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4)
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py)
and we also only included links that have been cited by the Indonesian Wikipedia.
## Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `6d 3h 7m 26s`.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| ID OSCAR+mc4+Wikipedia (29GB) | 2.79 | 2.696 | 14.826 |
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-medium-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).
## Team members
- Akmal ([@Wikidepia](https://huggingface.co/Wikidepia))
- alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner))
- Cahya Wirawan ([@cahya](https://huggingface.co/cahya))
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
## Future work
We would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains
if we can get the necessary hardware resources. |
minhpqn/bio_roberta-base_pubmed | 296f147fb483b7d620e4e35365b818bfa771b120 | 2021-05-20T17:53:22.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | minhpqn | null | minhpqn/bio_roberta-base_pubmed | 138 | null | transformers | 4,136 | Entry not found |
osanseviero/BigGAN-deep-128 | 86e3d82ec07f2513c0942d138a7b38133cfc2036 | 2022-02-21T13:55:46.000Z | [
"pytorch",
"generic",
"text-to-image"
] | text-to-image | false | osanseviero | null | osanseviero/BigGAN-deep-128 | 138 | 10 | generic | 4,137 | ---
tags:
- text-to-image
library_name: generic
---
# Image generation using pretrained BigGAN
## Warning: This only works for ImageNet inputs.
List of possible inputs: https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a
GitHub repository: https://github.com/huggingface/pytorch-pretrained-BigGAN
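A minimal generation sketch following the README of the GitHub repository above (the class name must come from the ImageNet list linked above):

```python
import torch
from pytorch_pretrained_biggan import (
    BigGAN, one_hot_from_names, truncated_noise_sample, save_as_images
)

model = BigGAN.from_pretrained("biggan-deep-128")

truncation = 0.4
class_vector = torch.from_numpy(one_hot_from_names(["golden retriever"], batch_size=1))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))

with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)

save_as_images(output)  # writes the generated image(s) to disk, e.g. output_0.png
```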
|
rbhushan/distilgpt2-finetuned-wikitext2 | 5064459af45b03bcfb044698da21857e00725d69 | 2022-01-11T16:55:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | rbhushan | null | rbhushan/distilgpt2-finetuned-wikitext2 | 138 | null | transformers | 4,138 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the WikiText-2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2872
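For a quick qualitative check of the fine-tuned checkpoint, a minimal text-generation sketch (the prompt is arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="rbhushan/distilgpt2-finetuned-wikitext2")

print(generator("The history of natural language processing", max_length=50, num_return_sequences=1))
```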
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 73 | 5.4169 |
| No log | 2.0 | 146 | 5.3145 |
| No log | 3.0 | 219 | 5.2872 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sagorsarker/mbert-bengali-ner | 75f727e825c43afb14d24c8c0a7dd602bf283e3c | 2022-06-17T11:29:39.000Z | [
"pytorch",
"bert",
"token-classification",
"bn",
"dataset:wikiann",
"dataset:xtreme",
"transformers",
"bengali-ner",
"bengali",
"bangla",
"NER",
"license:mit",
"autotrain_compatible"
] | token-classification | false | sagorsarker | null | sagorsarker/mbert-bengali-ner | 138 | 2 | transformers | 4,139 | ---
language: bn
tags:
- bengali-ner
- bengali
- bangla
- NER
license: mit
datasets:
- wikiann
- xtreme
---
# Multi-lingual BERT Bengali Name Entity Recognition
`mBERT-Bengali-NER` is a transformer-based Bengali NER model build with [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) model and [Wikiann](https://huggingface.co/datasets/wikiann) Datasets.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/mbert-bengali-ner")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/mbert-bengali-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "আমি জাহিদ এবং আমি ঢাকায় বাস করি।"
ner_results = nlp(example)
print(ner_results)
```
## Label and ID Mapping
| Label ID | Label |
| -------- | ----- |
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
## Training Details
- mBERT-Bengali-NER was trained with the [Wikiann](https://huggingface.co/datasets/wikiann) dataset
- mBERT-Bengali-NER was trained with the [transformers-token-classification](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb) script
- mBERT-Bengali-NER was trained for a total of 5 epochs
- Training was done on a Kaggle GPU
## Evaluation Results
|Model | F1 | Precision | Recall | Accuracy | Loss |
| ---- | --- | --------- | ----- | -------- | --- |
| mBERT-Bengali-NER | 0.97105 | 0.96769 | 0.97443 | 0.97682 | 0.12511 |
|
speechbrain/sepformer-whamr-enhancement | ace1f9824a17e3f14be043b409b5defc452d325e | 2021-12-09T02:38:12.000Z | [
"en",
"dataset:WHAMR!",
"arxiv:2010.13154",
"arxiv:2106.04624",
"speechbrain",
"audio-to-audio",
"Speech Enhancement",
"WHAMR!",
"SepFormer",
"Transformer",
"pytorch",
"license:apache-2.0"
] | audio-to-audio | false | speechbrain | null | speechbrain/sepformer-whamr-enhancement | 138 | null | speechbrain | 4,140 | ---
language: "en"
thumbnail:
tags:
- audio-to-audio
- Speech Enhancement
- WHAMR!
- SepFormer
- Transformer
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- WHAMR!
metrics:
- SI-SNR
- PESQ
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on WHAMR! for speech enhancement (8k sampling frequency)
This repository provides all the necessary tools to perform speech enhancement (denoising + dereverberation) with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on [WHAMR!](http://wham.whisper.ai/) dataset with 8k sampling frequency, which is basically a version of WSJ0-Mix dataset with environmental noise and reverberation in 8k. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model performance is 10.59 dB SI-SNR on the test set of WHAMR! dataset.
| Release | Test-Set SI-SNR | Test-Set PESQ |
|:-------------:|:--------------:|:--------------:|
| 01-12-21 | 10.59 | 2.84 |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Perform speech enhancement on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-whamr-enhancement", savedir='pretrained_models/sepformer-whamr-enhancement')
# for custom file, change path
est_sources = model.separate_file(path='speechbrain/sepformer-whamr-enhancement/example_whamr.wav')
torchaudio.save("enhanced_whamr.wav", est_sources[:, :, 0].detach().cpu(), 8000)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
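For example, the loading call above becomes:

```python
model = separator.from_hparams(
    source="speechbrain/sepformer-whamr-enhancement",
    savedir="pretrained_models/sepformer-whamr-enhancement",
    run_opts={"device": "cuda"},
)
```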
### Training
The training script is currently being worked on in an ongoing pull request.
We will update the model card as soon as the PR is merged.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1V0KwkEfWwomZ0Vjox0BTnQ694_uxgu8G).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ |
stas/mt5-tiny-random | 25f1f52107153ed74c3ea9c89cd1a33818f0d67d | 2021-06-23T16:37:54.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stas | null | stas/mt5-tiny-random | 138 | 2 | transformers | 4,141 | This is a tiny random mt5 model used for testing
See `mt5-make-tiny-model.py` for how it was created. |
MachineBabs/DocBrown | d89fbcf4698e6572c4bc66c5227d8dcb9f054aef | 2022-04-24T11:39:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MachineBabs | null | MachineBabs/DocBrown | 138 | null | transformers | 4,142 | ---
tags:
- conversational
---
|
nanopass/distilbert-base-uncased-emotion-2 | 19cd3b5c0c9a3b5308bba13ff708abd16cd6c2d9 | 2022-05-02T09:43:02.000Z | [
"pytorch",
"tf",
"jax",
"distilbert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:1910.01108",
"transformers",
"emotion",
"license:apache-2.0"
] | text-classification | false | nanopass | null | nanopass/distilbert-base-uncased-emotion-2 | 138 | null | transformers | 4,143 | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
---
# Distilbert-base-uncased-emotion
## Model description:
[Distilbert](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding. It is smaller and faster than BERT and other BERT-based models.
[Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparision on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Sample per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'test_accuracy': 0.938,
'test_f1': 0.937932884041714,
'test_loss': 0.1472451239824295,
'test_mem_cpu_alloc_delta': 0,
'test_mem_cpu_peaked_delta': 0,
'test_mem_gpu_alloc_delta': 0,
'test_mem_gpu_peaked_delta': 163454464,
'test_runtime': 5.0164,
'test_samples_per_second': 398.69
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
apple/ane-distilbert-base-uncased-finetuned-sst-2-english | b610778797944d055a73e3da10630122237a7a38 | 2022-06-13T13:29:48.000Z | [
"pytorch",
"coreml",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"transformers",
"license:apache-2.0"
] | text-classification | false | apple | null | apple/ane-distilbert-base-uncased-finetuned-sst-2-english | 138 | 3 | transformers | 4,144 | ---
language: en
license: apache-2.0
datasets:
- sst2
---
# DistilBERT optimized for Apple Neural Engine
This is the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model, optimized for the Apple Neural Engine (ANE) as described in the article [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers).
The source code is taken from Apple's [ml-ane-transformers](https://github.com/apple/ml-ane-transformers) GitHub repo, modified slightly to make it usable from the 🤗 Transformers library.
For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
## How to use
Usage example:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_checkpoint = "apple/ane-distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
model_checkpoint, trust_remote_code=True, return_dict=False,
)
inputs = tokenizer(
["The Neural Engine is really fast"],
return_tensors="pt",
max_length=128,
padding="max_length",
)
with torch.no_grad():
outputs = model(**inputs)
```
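To turn the outputs into a sentiment label, one possible continuation is sketched below. The model was loaded with `return_dict=False`, so the forward pass returns a tuple; this sketch assumes its first element holds the usual `(batch_size, num_labels)` classification logits:

```python
# Assumption: outputs[0] holds (batch_size, num_labels) logits, as in the standard
# DistilBERT sequence-classification head.
logits = outputs[0]
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])  # e.g. "POSITIVE"
```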
## Using the model with Core ML
PyTorch does not utilize the ANE, and running this version of the model with PyTorch on the CPU or GPU may actually be slower than the original. To take advantage of the hardware acceleration of the ANE, use the Core ML version of the model, **DistilBERT_fp16.mlpackage**.
Core ML usage example from Python:
```python
import numpy as np
import coremltools as ct
mlmodel = ct.models.MLModel("DistilBERT_fp16.mlpackage")
inputs = tokenizer(
["The Neural Engine is really fast"],
return_tensors="np",
max_length=128,
padding="max_length",
)
outputs_coreml = mlmodel.predict({
"input_ids": inputs["input_ids"].astype(np.int32),
"attention_mask": inputs["attention_mask"].astype(np.int32),
})
```
To use the model from Swift, you will need to tokenize the input yourself according to the BERT rules. You can find a Swift implementation of the [BERT tokenizer here](https://github.com/huggingface/swift-coreml-transformers).
|
elozano/bert-base-cased-fake-news | 9e8cd2895bd36f0c25c78c4dcf937b6700c7bc46 | 2022-02-26T18:50:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | elozano | null | elozano/bert-base-cased-fake-news | 137 | null | transformers | 4,145 | Entry not found |
veronica320/QA-for-Event-Extraction | c679c64085048f2369918359836026d47061bb87 | 2021-07-29T22:57:42.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | veronica320 | null | veronica320/QA-for-Event-Extraction | 137 | null | transformers | 4,146 | # QA-for-Event-Extraction
## Model description
This is a QA model as part of the event extraction system in the ACL2021 paper: [Zero-shot Event Extraction via Transfer Learning: Challenges and Insights](https://aclanthology.org/2021.acl-short.42/). The pretrained architecture is [roberta-large](https://huggingface.co/roberta-large) and the fine-tuning data is [QAMR](https://github.com/uwnlp/qamr).
## Demo
To see how the model works, type a question and a context into the respective text boxes on the right-hand side under "Hosted inference API".
Example:
- Question: `Who was killed?`
- Context: `A car bomb exploded Thursday in a crowded outdoor market in the heart of Jerusalem, killing at least two people, police said.`
- Answer: `people`
## Usage
- To use the QA model independently, follow the [huggingface documentation on AutoModelForQuestionAnswering](https://huggingface.co/transformers/task_summary.html?highlight=automodelforquestionanswering#extractive-question-answering); a minimal pipeline sketch is also given below.
- To use it as part of the event extraction system, please check out [our Github repo](https://github.com/veronica320/Zeroshot-Event-Extraction).
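As a concrete illustration of standalone usage, here is a minimal sketch with the 🤗 `question-answering` pipeline, reusing the demo example above (this is assumed standard pipeline usage, not an official snippet from the paper's repository):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="veronica320/QA-for-Event-Extraction")

result = qa(
    question="Who was killed?",
    context=(
        "A car bomb exploded Thursday in a crowded outdoor market in the heart "
        "of Jerusalem, killing at least two people, police said."
    ),
)
print(result)  # the answer span is expected to be "people"
```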
### BibTeX entry and citation info
```
@inproceedings{lyu-etal-2021-zero,
title = "Zero-shot Event Extraction via Transfer Learning: {C}hallenges and Insights",
author = "Lyu, Qing and
Zhang, Hongming and
Sulem, Elior and
Roth, Dan",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-short.42",
doi = "10.18653/v1/2021.acl-short.42",
pages = "322--332",
abstract = "Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies. In this work, we explore the possibility of zero-shot event extraction by formulating it as a set of Textual Entailment (TE) and/or Question Answering (QA) queries (e.g. {``}A city was attacked{''} entails {``}There is an attack{''}), exploiting pretrained TE/QA models for direct transfer. On ACE-2005 and ERE, our system achieves acceptable results, yet there is still a large gap from supervised approaches, showing that current QA and TE technologies fail in transferring to a different domain. To investigate the reasons behind the gap, we analyze the remaining key challenges, their respective impact, and possible improvement directions.",
}
``` |
VMware/vbert-2021-large | 876b71dac6a6bb6f415cf53ad2e7bc170d0c8738 | 2022-06-16T22:30:39.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"eng",
"transformers",
"PyTorch",
"tensorflow",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | VMware | null | VMware/vbert-2021-large | 137 | 1 | transformers | 4,147 | ---
language:
- "eng"
thumbnail: "URL to a thumbnail used in social sharing"
tags:
- "PyTorch"
- "tensorflow"
license: "apache-2.0"
---
# vBERT-2021-LARGE
### Model Info:
<ul>
<li> Authors: R&D AI Lab, VMware Inc.
<li> Model date: April, 2022
<li> Model version: 2021-large
<li> Model type: Pretrained language model
<li> License: Apache 2.0
</ul>
#### Motivation
Traditional BERT models struggle with VMware-specific words (Tanzu, vSphere, etc.), technical terms, and compound words. (<a href =https://medium.com/@rickbattle/weaknesses-of-wordpiece-tokenization-eb20e37fec99>Weaknesses of WordPiece Tokenization</a>)
We have created our vBERT model to address the aforementioned issues. We have replaced the first 1k unused tokens of BERT's vocabulary with VMware-specific terms to create a modified vocabulary. We then pretrained the 'bert-large-uncased' model for an additional 66K steps (60k with MSL_128 and 6k with MSL_512) on VMware domain data.
#### Intended Use
The model functions as a VMware-specific Language Model.
#### How to Use
Here is how to use this model to get the features of a given text in PyTorch:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-large')
model = BertModel.from_pretrained("VMware/vbert-2021-large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-large')
model = TFBertModel.from_pretrained('VMware/vbert-2021-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
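Since the checkpoint is a masked-language model, it can also be queried through the `fill-mask` pipeline. A minimal sketch (the example sentence below is ours, not from the original authors):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="VMware/vbert-2021-large")

# [MASK] is the mask token of BERT-style tokenizers
for prediction in fill_mask("vSphere is a [MASK] platform."):
    print(prediction["token_str"], prediction["score"])
```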
### Training
#### - Datasets
Publicly available VMware text data such as VMware Docs, Blogs, etc. was used for creating the pretraining corpus. Sourced in May 2021 (~320,000 documents).
#### - Preprocessing
<ul>
<li>Decoding HTML
<li>Decoding Unicode
<li>Stripping repeated characters
<li>Splitting compound words
<li>Spelling correction
</ul>
#### - Model performance measures
We benchmarked vBERT on various VMware-specific NLP downstream tasks (IR, classification, etc).
The model scored higher than the 'bert-base-uncased' model on all benchmarks.
### Limitations and bias
Since the model is further pretrained on the BERT model, it may have the same biases embedded within the original BERT model.
The data needs to be preprocessed using our internal vNLP Preprocessor (not available to the public) to maximize its performance.
|
edumunozsala/roberta_bne_sentiment_analysis_es | 6a506e8b4e8a5d24eea04961812e732188514cf1 | 2022-07-29T09:19:03.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"dataset:IMDbreviews_es",
"arxiv:2107.07253",
"transformers",
"sagemaker",
"roberta-bne",
"TextClassification",
"SentimentAnalysis",
"license:apache-2.0",
"model-index"
] | text-classification | false | edumunozsala | null | edumunozsala/roberta_bne_sentiment_analysis_es | 137 | null | transformers | 4,148 | ---
language: es
tags:
- sagemaker
- roberta-bne
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: roberta_bne_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Reviews in Spanish"
type: IMDbreviews_es
metrics:
- name: Accuracy
type: accuracy
value: 0.9106666666666666
- name: F1 Score
type: f1
value: 0.9090909090909091
- name: Precision
type: precision
value: 0.9063852813852814
- name: Recall
type: recall
value: 0.9118127381600436
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
# Model roberta_bne_sentiment_analysis_es
## **A finetuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **RoBERTa-base-bne** which is a RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB.
It was trained by The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html)
**RoBERTa BNE Citation**
Check out the paper for all the details: https://arxiv.org/abs/2107.07253
```
@article{gutierrezfandino2022,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
## Dataset
The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in English and in Spanish, along with the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Intended uses & limitations
This model is intended for sentiment analysis of Spanish text. It was fine-tuned specifically on movie reviews, but it can be applied to other kinds of reviews.
## Hyperparameters
```json
{
  "epochs": "4",
  "train_batch_size": "32",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "3e-05",
  "model_name": "\"PlanTL-GOB-ES/roberta-base-bne\"",
  "sagemaker_container_log_level": "20",
  "sagemaker_program": "\"train.py\""
}
```
## Evaluation results
- Accuracy = 0.9106666666666666
- F1 Score = 0.9090909090909091
- Precision = 0.9063852813852814
- Recall = 0.9118127381600436
## Test results
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
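To turn the predicted class index into a human-readable label, it can be looked up in the model config. A minimal sketch (this assumes the label names stored in the uploaded config reflect the positive/negative classes used during fine-tuning):

```python
# map the predicted class index back to the label name stored in the config
predicted_label = model.config.id2label[output.item()]
print(predicted_label)
```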
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
Mithil/86RecallRoberta | 3bb53d625342a7ea9ec08af1d7dd247b1bbbacb5 | 2022-07-04T16:03:06.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | Mithil | null | Mithil/86RecallRoberta | 137 | null | transformers | 4,149 | ---
license: afl-3.0
---
|
Rajaram1996/FacialEmoRecog | 059c5f2f0afc6fd7e2b62f558e2f2ab20798d72b | 2021-11-05T21:08:27.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | Rajaram1996 | null | Rajaram1996/FacialEmoRecog | 136 | 6 | transformers | 4,150 | ---
tags:
- image-classification
- pytorch
inference: true
pipeline_tag: image-classification
metrics:
- accuracy
model-index:
- name: FacialEmoRecog
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9189583659172058
---
# FacialEmoRecog
Create your own image classifier for **anything** by running this repo
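A minimal usage sketch with the 🤗 `image-classification` pipeline (the image path below is a placeholder; replace it with your own file or URL):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Rajaram1996/FacialEmoRecog")

# replace with the path or URL of a face image you want to classify
predictions = classifier("my_face_image.jpg")
print(predictions)
```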
## Example Images |
Recognai/distilbert-base-es-multilingual-cased | 79dca2e293dd2a1208169689adc0c2f433b5cf4a | 2021-03-10T20:36:54.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Recognai | null | Recognai/distilbert-base-es-multilingual-cased | 136 | 2 | transformers | 4,151 | ---
language: es
license: apache-2.0
datasets:
- wikipedia
widget:
- text: "Mi nombre es Juan y vivo en [MASK]."
---
# DistilBERT base multilingual model Spanish subset (cased)
This model is the Spanish extract of `distilbert-base-multilingual-cased` (https://huggingface.co/distilbert-base-multilingual-cased), a distilled version of the [BERT base multilingual model](bert-base-multilingual-cased). This model is cased: it does make a difference between english and English.
It uses the extraction method proposed by Geotrend described in https://github.com/Geotrend-research/smaller-transformers.
The resulting model has the same architecture as DistilmBERT: 6 layers, 768 dimension and 12 heads, with a total of **63M parameters** (compared to 134M parameters for DistilmBERT).
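As with any DistilBERT masked-language model, it can be queried through the standard `fill-mask` pipeline. A minimal sketch reusing the widget example above (standard pipeline usage is assumed here, not an official snippet):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Recognai/distilbert-base-es-multilingual-cased")

for prediction in fill_mask("Mi nombre es Juan y vivo en [MASK]."):
    print(prediction["token_str"], prediction["score"])
```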
The goal of this model is to reduce even further the size of the `distilbert-base-multilingual` multilingual model by selecting only the most frequent tokens for Spanish, reducing the size of the embedding layer. For more details visit the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT. |
blanchefort/rubert-base-cased-sentiment-med | f2077a6f4c9e63673d85af63ca1c2ac73d77d947 | 2021-05-19T12:58:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment"
] | text-classification | false | blanchefort | null | blanchefort/rubert-base-cased-sentiment-med | 136 | 1 | transformers | 4,152 | ---
language:
- ru
tags:
- sentiment
- text-classification
---
# RuBERT for Sentiment Analysis of Medical Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on a corpus of medical reviews.
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-med')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-med', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
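For example, the integer class ids returned by `predict` can be mapped back to the labels listed above (the mapping below simply restates the Labels section; the example review is ours, written for illustration):

```python
labels = {0: "NEUTRAL", 1: "POSITIVE", 2: "NEGATIVE"}

texts = ["Клиника понравилась, врач внимательный"]  # example review for illustration only
for text, class_id in zip(texts, predict(texts)):
    print(text, "->", labels[int(class_id)])
```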
## Dataset used for model training
**[Reviews of medical institutions (Отзывы о медучреждениях)](https://github.com/blanchefort/datasets/tree/master/medical_comments)**
> The dataset contains user reviews of medical institutions. It was collected in May 2019 from the website prodoctorov.ru
|
dpalominop/spanish-bert-apoyo | 6d8450759d44a0f00625d89936541dc831d760d3 | 2021-05-19T16:08:52.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | dpalominop | null | dpalominop/spanish-bert-apoyo | 136 | null | transformers | 4,153 | ```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("dpalominop/spanish-bert-apoyo")
model = AutoModelForSequenceClassification.from_pretrained("dpalominop/spanish-bert-apoyo")
``` |
marma/bert-base-swedish-cased-sentiment | 40c98c5ae300960f2a527def3e910063927c9f7d | 2021-05-19T23:02:02.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | marma | null | marma/bert-base-swedish-cased-sentiment | 136 | null | transformers | 4,154 | Experimental sentiment analysis based on ~20k of App Store reviews in Swedish.
### Usage
```python
from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='marma/bert-base-swedish-cased-sentiment')
>>> sa('Det här är ju fantastiskt!')
[{'label': 'POSITIVE', 'score': 0.9974609613418579}]
>>> sa('Den här appen suger!')
[{'label': 'NEGATIVE', 'score': 0.998340368270874}]
>>> sa('Det är fruktansvärt.')
[{'label': 'NEGATIVE', 'score': 0.998340368270874}]
>>> sa('Det är fruktansvärt bra.')
[{'label': 'POSITIVE', 'score': 0.998340368270874}]
``` |
monsoon-nlp/dialect-ar-gpt-2021 | e2fc0a4bb449359b4fb79271a2348f8871c3779a | 2021-05-23T09:59:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"ar",
"arxiv:2012.15520",
"transformers"
] | text-generation | false | monsoon-nlp | null | monsoon-nlp/dialect-ar-gpt-2021 | 136 | null | transformers | 4,155 | ---
language: ar
---
# Dialect-AR-GPT-2021
## Finetuned AraGPT-2 demo
This model started with [AraGPT2-Medium](https://huggingface.co/aubmindlab/aragpt2-medium),
from AUB MIND Lab.
This model was then finetuned on dialect datasets from Qatar University, University of British Columbia / NLP,
and Johns Hopkins University / LREC for 10 epochs.
You can use special tokens to prompt five dialects: `[EGYPTIAN]`, `[GULF]`, `[LEVANTINE]`, `[MAGHREBI]`, or `[MSA]`, followed by a space.
```
from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "monsoon-nlp/dialect-ar-gpt-2021")
model.generate('[GULF] ' + "مدينتي هي", { 'max_length': 100 })
```
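If you prefer plain 🤗 Transformers over `simpletransformers`, a minimal generation sketch looks like this (the sampling settings are illustrative and not the ones used by the author):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/dialect-ar-gpt-2021")
model = AutoModelForCausalLM.from_pretrained("monsoon-nlp/dialect-ar-gpt-2021")

# prompt with one of the documented dialect tokens, followed by a space
inputs = tokenizer("[GULF] مدينتي هي", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```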
There is NO content filtering in the current version; do not use for public-facing
text generation!
## Training and Finetuning details
Original model: https://huggingface.co/aubmindlab/aragpt2-medium
I inserted new tokens into the tokenizer, finetuned the model on the dialect samples, and exported the new model.
Notebook: https://colab.research.google.com/drive/19C0zbkSCt5ncVCa4kY-ik9hSEiJcjI-F
## Citations
AraGPT2 model:
```
@misc{antoun2020aragpt2,
title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15520},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Dialect data sources:
- https://qspace.qu.edu.qa/handle/10576/15265
- https://github.com/UBC-NLP/aoc_id
- https://github.com/ryancotterell/arabic_dialect_annotation
|
sentence-transformers/quora-distilbert-base | 2708fe60f344ffbedf990cf4f8be7866f605bf60 | 2022-06-15T23:45:12.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/quora-distilbert-base | 136 | null | sentence-transformers | 4,156 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/quora-distilbert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/quora-distilbert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
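For instance, the embeddings can be compared with cosine similarity to score duplicate-question candidates. A minimal sketch using `util.cos_sim` (available in recent sentence-transformers releases; the example questions are ours):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/quora-distilbert-base')

questions = [
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
]
embeddings = model.encode(questions, convert_to_tensor=True)

# cosine similarity between the two question embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```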
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/quora-distilbert-base')
model = AutoModel.from_pretrained('sentence-transformers/quora-distilbert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/quora-distilbert-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
sentence-transformers/xlm-r-large-en-ko-nli-ststb | 7359df4bd7393a242f5e3c16e933079c626772b2 | 2022-06-15T23:50:13.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/xlm-r-large-en-ko-nli-ststb | 136 | null | sentence-transformers | 4,157 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/xlm-r-large-en-ko-nli-ststb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/xlm-r-large-en-ko-nli-ststb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-large-en-ko-nli-ststb')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-large-en-ko-nli-ststb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-large-en-ko-nli-ststb)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
MarkS/bart-base-qa2d | 24e38002cd12bc8c1381b2b69200d4d916930452 | 2022-04-21T08:46:22.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:2112.03849",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | MarkS | null | MarkS/bart-base-qa2d | 136 | null | transformers | 4,158 | ---
license: afl-3.0
---
# Generating Declarative Statements from QA Pairs
There are already some rule-based models that can accomplish this task, but I haven't seen any transformer-based models that can do so. Therefore, I trained this model based on `Bart-base` to transform QA pairs into declarative statements.
I compared my model with other rule-based models, including
> [paper1](https://aclanthology.org/D19-5401.pdf) (2019), which proposes the **2 Encoder Pointer-Gen model**
and
> [paper2](https://arxiv.org/pdf/2112.03849.pdf) (2021), which proposes the **RBV2 model**
**Here are results compared to 2 Encoder Pointer-Gen model (on testset released by paper1)**
Test on testset
| Model | 2 Encoder Pointer-Gen(2019) | BART-base |
| ------- | --------------------------- | ---------- |
| BLEU | 74.05 | **78.878** |
| ROUGE-1 | 91.24 | **91.937** |
| ROUGE-2 | 81.91 | **82.177** |
| ROUGE-L | 86.25 | **87.172** |
Test on NewsQA testset
| Model | 2 Encoder Pointer-Gen | BART |
| ------- | --------------------- | ---------- |
| BLEU | 73.29 | **74.966** |
| ROUGE-1 | **95.38** | 89.328 |
| ROUGE-2 | **87.18** | 78.538 |
| ROUGE-L | **93.65** | 87.583 |
Test on free_base testset
| Model | 2 Encoder Pointer-Gen | BART |
| ------- | --------------------- | ---------- |
| BLEU | 75.41 | **76.082** |
| ROUGE-1 | **93.46** | 92.693 |
| ROUGE-2 | **82.29** | 81.216 |
| ROUGE-L | **87.5** | 86.834 |
**As paper2 doesn't release its own dataset, it's hard to make a fair comparison. But according to the results in paper2, the BLEU and ROUGE scores of their model are lower than those of MPG, which is exactly the 2 Encoder Pointer-Gen model.**
| Model | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L |
| ------------ | ---- | ------- | ------- | ------- |
| RBV2 | 74.8 | 95.3 | 83.1 | 90.3 |
| RBV2+BERT | 71.5 | 93.9 | 82.4 | 89.5 |
| RBV2+RoBERTa | 72.1 | 94 | 83.1 | 89.8 |
| RBV2+XLNET | 71.2 | 93.6 | 82.3 | 89.4 |
| MPG | 75.8 | 94.4 | 87.4 | 91.6 |
There are reasons to believe that my model performs better than RBV2.
To sum up, my model performs nearly as well as the SOTA rule-based models when evaluated with BLEU and ROUGE scores. However, the generated sentence patterns lack diversity.
(It's worth mentioning that even though I tried my best to conduct objective tests, the testsets I could find were more or less different from those introduced in the papers.)
## How to use
```python
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained("MarkS/bart-base-qa2d")
model = BartForConditionalGeneration.from_pretrained("MarkS/bart-base-qa2d")
input_text = "question: what day is it today? answer: Tuesday"
input = tokenizer(input_text, return_tensors='pt')
output = model.generate(input.input_ids)
result = tokenizer.batch_decode(output, skip_special_tokens=True)
```
|
fujuta/DialoGPT-medium-HarryPotter | 765808bab141a22272e1d8ac306aafb451bf1079 | 2022-05-24T23:24:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | fujuta | null | fujuta/DialoGPT-medium-HarryPotter | 136 | null | transformers | 4,159 | ---
tags:
- conversational
--- |
SynamicTechnologies/CYBERT | f0274dfc3e1bc5ce041da4e7d3bbf9cd0a67e618 | 2022-06-02T09:51:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | SynamicTechnologies | null | SynamicTechnologies/CYBERT | 136 | 1 | transformers | 4,160 | ## CYBERT
A BERT model dedicated to the domain of cyber security. The model has been trained on a corpus of high-quality cyber security and computer science text and is unlikely to work outside this domain.
## Model architecture
The model uses the original RoBERTa architecture, and the tokenizer trained on the corpus is a byte-level tokenizer.
## Hardware
The model was trained on an NVIDIA GPU (NVIDIA-SMI driver version 510.54).
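## Usage
A minimal usage sketch, assuming the checkpoint loads as a sequence-classification model as its pipeline tag suggests (the label names exposed by the config are not documented here, and the example sentence is ours):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="SynamicTechnologies/CYBERT")

result = classifier("A remote attacker can execute arbitrary code via a crafted HTTP request.")
print(result)
```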
|
Aviv/Moran_Aviv_Bart | e18f2535f05817530a796620d94a1c4988b7b46c | 2022-07-15T16:41:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aviv | null | Aviv/Moran_Aviv_Bart | 136 | 1 | transformers | 4,161 | Moran and Aviv project for solving Summarization task.
We choose 2 architectures: TextRank and BART (facebook).
In Streamlit' application, you can enter your article as an input, and the output is a summary.
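Outside the Streamlit app, the BART checkpoint can also be tried directly with the `summarization` pipeline; a minimal sketch (our assumption about usage, not part of the project code):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Aviv/Moran_Aviv_Bart")

article = "Replace this with the article you want to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```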
Inspired by HIT studies. |
ryo0634/luke-base-comp-wiki-20181220-umls | 40b67edc829fb4ace28bfc53e6ee5e472a470ad2 | 2022-07-20T15:03:47.000Z | [
"pytorch",
"luke",
"feature-extraction",
"transformers"
] | feature-extraction | false | ryo0634 | null | ryo0634/luke-base-comp-wiki-20181220-umls | 136 | null | transformers | 4,162 | Entry not found |
activebus/BERT_Review | fd6d67dfb363222edb0277271b6a07f4e9c52f2a | 2021-05-18T23:05:54.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | activebus | null | activebus/BERT_Review | 135 | null | transformers | 4,163 | # ReviewBERT
BERT (post-)trained from review corpus to understand sentiment, options and various e-commence aspects.
`BERT_Review` is a cross-domain (beyond just `laptop` and `restaurant`) language model, with each training example drawn from randomly mixed domains, post-trained (fine-tuned) on a combination of 5-core Amazon reviews and all Yelp data, expected to be 22 GB in total. It is trained for 4 epochs on `bert-base-uncased`.
The preprocessing code is available [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT_Review")
model = AutoModel.from_pretrained("activebus/BERT_Review")
```
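Because the checkpoint is a masked-language model, it can also be probed directly with the `fill-mask` pipeline; a minimal sketch (the review sentence is ours, chosen to match the review domain):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="activebus/BERT_Review")

for prediction in fill_mask("The battery life of this laptop is really [MASK]."):
    print(prediction["token_str"], prediction["score"])
```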
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
`BERT_Review` is expected to have similar performance on domain-specific tasks (such as aspect extraction) as `BERT-DK`, but much better on general tasks such as aspect sentiment classification (different domains mostly share similar sentiment words).
## Citation
If you find this work useful, please cite as following.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
lupinlevorace/tiny-bert-sst2-distilled | db7e67bdab78e2bd37d76f149ce89f76fe37bde1 | 2022-02-20T14:37:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | lupinlevorace | null | lupinlevorace/tiny-bert-sst2-distilled | 135 | null | transformers | 4,164 | Entry not found |
microsoft/beit-large-patch16-384 | 6ad3dc484125f460f2ce85ea2296732db291bdf1 | 2022-01-28T10:19:50.000Z | [
"pytorch",
"jax",
"beit",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/beit-large-patch16-384 | 135 | null | transformers | 4,165 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# BEiT (large-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-384')
model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
mrm8488/spanbert-finetuned-squadv1 | 95a9260e7b0447dd0cb79149847982375baf347d | 2021-05-20T00:55:17.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"arxiv:1907.10529",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/spanbert-finetuned-squadv1 | 135 | null | transformers | 4,166 | ---
language: en
thumbnail:
---
# SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v1.1
[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
## Details of SpanBERT
A pre-training method that is designed to better represent and predict spans of text.
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Details of the downstream task (Q&A) - Dataset
[SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) contains 100,000+ question-answer pairs on 500+ articles.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 87.7k |
| SQuAD1.1 | eval | 10.6k |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **85.49** |
| **F1** | **91.98** |
### Raw metrics:
```json
{
"exact": 85.49668874172185,
"f1": 91.9845699540379,
"total": 10570,
"HasAns_exact": 85.49668874172185,
"HasAns_f1": 91.9845699540379,
"HasAns_total": 10570,
"best_exact": 85.49668874172185,
"best_exact_thresh": 0.0,
"best_f1": 91.9845699540379,
"best_f1_thresh": 0.0
}
```
## Comparison:
| Model | EM | F1 score |
| ----------------------------------------------------------------------------------------- | --------- | --------- |
| [SpanBert official repo](https://github.com/facebookresearch/SpanBERT#pre-trained-models) | - | 92.4\* |
| [spanbert-finetuned-squadv1](https://huggingface.co/mrm8488/spanbert-finetuned-squadv1) | **85.49** | **91.98** |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/spanbert-finetuned-squadv1",
tokenizer="mrm8488/spanbert-finetuned-squadv1"
)
qa_pipeline({
'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
'question': "Who has been working hard for hugginface/transformers lately?"
})
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
thilina/mt5-sinhalese-english | 2c69967cd0914d5dd136a79d75b3705e9af6a349 | 2021-01-03T21:14:26.000Z | [
"pytorch",
"tf",
"mt5",
"text2text-generation",
"si",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | thilina | null | thilina/mt5-sinhalese-english | 135 | null | transformers | 4,167 | ---
language:
- si
- en
tags:
- translation
license: apache-2.0
metrics:
- sacrebleu
---
# mt5-sinhalese-english
## Model description
An mT5-base model fine-tuned on the Sinhalese-English dataset in the Tatoeba Challenge. It can be used to translate from Sinhalese to English and vice versa.
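A minimal translation sketch with plain 🤗 Transformers. The exact input format used during fine-tuning is not documented here, so the snippet simply feeds the source sentence as-is; a task prefix or language tag may be required depending on how the model was trained:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("thilina/mt5-sinhalese-english")
model = AutoModelForSeq2SeqLM.from_pretrained("thilina/mt5-sinhalese-english")

inputs = tokenizer("Replace me with a Sinhalese or English sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```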
## Training details
- English - Sinhala dataset from the Tatoeba Challenge [Datasets](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/Data.md)
- [mT5-base pre-trained weights](https://huggingface.co/google/mt5-base)
## Eval results
SacreBLEU score:
- English to Sinhalese: 10.3
- Sinhalese to English: 24.4 |
ml4pubmed/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section | 0cbe99bf91e4bad964612639347e5aa8040e7370 | 2022-06-22T10:58:49.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:pubmed",
"transformers",
"document sections",
"sentence classification",
"document classification",
"medical",
"health",
"biomedical"
] | text-classification | false | ml4pubmed | null | ml4pubmed/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section | 135 | 1 | transformers | 4,168 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section
- original model file name: textclassifer_BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pubmed_20k
- This is a fine-tuned checkpoint of `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```
from transformers import pipeline
model_tag = "ml4pubmed/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
## metadata
### training_metrics
- val_accuracy: 0.8678670525550842
- val_matthewscorrcoef: 0.8222037553787231
- val_f1score: 0.866841197013855
- val_cross_entropy: 0.3674609065055847
- epoch: 8.0
- train_accuracy_step: 0.83984375
- train_matthewscorrcoef_step: 0.7790813446044922
- train_f1score_step: 0.837363600730896
- train_cross_entropy_step: 0.39843088388442993
- train_accuracy_epoch: 0.8538406491279602
- train_matthewscorrcoef_epoch: 0.8031334280967712
- train_f1score_epoch: 0.8521654605865479
- train_cross_entropy_epoch: 0.4116102457046509
- test_accuracy: 0.8578397035598755
- test_matthewscorrcoef: 0.8091378808021545
- test_f1score: 0.8566917181015015
- test_cross_entropy: 0.3963385224342346
- date_run: Apr-22-2022_t-19
- huggingface_tag: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
|
paust/pko-t5-large | 554210dfbb2542b59777d8df2653c83ea5511bbe | 2022-05-21T06:38:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ko",
"arxiv:2105.09680",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | paust | null | paust/pko-t5-large | 135 | 1 | transformers | 4,169 | ---
language: ko
license: cc-by-4.0
---
# pko-t5-large
[Source Code](https://github.com/paust-team/pko-t5)
pko-t5 is a [T5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data.
To tokenize Korean, it uses a BBPE tokenizer with no OOV tokens instead of SentencePiece, and it was trained on Korean data (Namuwiki, Wikipedia, Modu Corpus, etc.) with unsupervised learning only, using T5's span corruption task.
When using pko-t5, please fine-tune it on your target task.
## Usage
The model can be accessed through the transformers API. When using the tokenizer, please use `T5TokenizerFast` rather than `T5Tokenizer`. The model itself can be used with `T5ForConditionalGeneration` as-is.
### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-large')
input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids
labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
print(f"loss={outputs.loss} logits={outputs.logits}")
```
## KLUE evaluation (dev)
| | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) |
| --- | --- |-----------------| --- | --- | --- | --- | --- | --- |
| | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | 75.26/- |
| FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 62.95 | 93.15 | 43.81/46.58 |
| FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 72.94 | 97.28 | 61.53/64.74 |
| FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | 72.26 | 97.60 | 68.01/71.44 |
| MT | pko-t5-small | 85.85 | 79.12/77.81 | 66.8 | 81.53 | 67.93 | 91.38 | 44.97/48.07 |
| MT | pko-t5-base | 86.86 | 87.61/81.42 | 75.46 | 86.85 | 71.85 | 96.32 | 61.95/65.06 |
| MT | pko-t5-large | 87.25 | 91.05/84.58 | 82.16 | 87.63 | **74.78** | **97.33** | **69.18/71.92** |
- FT: single-task fine-tuning / MT: multi-task fine-tuning
- [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set reported in the KLUE paper
## License
pko-t5, created by PAUST, is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE). |
derwahnsinn/gpt2-mediumBITB | fee83b219cfc2447ff29405e52ebdac438a15cc6 | 2022-07-27T19:13:02.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | derwahnsinn | null | derwahnsinn/gpt2-mediumBITB | 135 | null | transformers | 4,170 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-mediumBITB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-mediumBITB
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 6.5861
- eval_runtime: 41.1497
- eval_samples_per_second: 56.039
- eval_steps_per_second: 7.023
- epoch: 15.0
- step: 3165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 29
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
camembert/camembert-base-oscar-4gb | efb6c58d51afb976f8ccd25c534543ac6ff115c5 | 2020-12-11T21:35:18.000Z | [
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"transformers"
] | null | false | camembert | null | camembert/camembert-base-oscar-4gb | 134 | null | transformers | 4,171 | ---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-oscar-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-oscar-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-oscar-4gb", tokenizer="camembert/camembert-base-oscar-4gb")
>>> results = camembert_fill_mask("Le camembert est <mask> !")
# results
#[{'sequence': '<s> Le camembert est parfait!</s>', 'score': 0.04089554399251938, 'token': 1654},
#{'sequence': '<s> Le camembert est délicieux!</s>', 'score': 0.037193264812231064, 'token': 7200},
#{'sequence': '<s> Le camembert est prêt!</s>', 'score': 0.025467922911047935, 'token': 1415},
#{'sequence': '<s> Le camembert est meilleur!</s>', 'score': 0.022812040522694588, 'token': 528},
#{'sequence': '<s> Le camembert est différent!</s>', 'score': 0.017135459929704666, 'token': 2935}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 121, 11, 660, 16, 730, 25543, 110, 83, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.1120, -0.1464, 0.0181, ..., -0.1723, -0.0278, 0.1606],
# [ 0.1234, 0.1202, -0.0773, ..., -0.0405, -0.0668, -0.0788],
# [-0.0440, 0.0480, -0.1926, ..., 0.1066, -0.0961, 0.0637],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-oscar-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-oscar-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.1584, -0.1207, -0.0179, ..., 0.5457, 0.1491, -0.1191],
# [-0.1122, 0.3634, 0.0676, ..., 0.4395, -0.0470, -0.3781],
# [-0.2232, 0.0019, 0.0140, ..., 0.4461, -0.0233, 0.0735],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
cointegrated/rubert-tiny-bilingual-nli | d914a099332ddea9d45267241695245ad64e2b76 | 2021-10-10T08:17:19.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"transformers",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification"
] | zero-shot-classification | false | cointegrated | null | cointegrated/rubert-tiny-bilingual-nli | 134 | null | transformers | 4,172 | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Сервис отстойный, кормили невкусно"
candidate_labels: "Мне понравилось, Мне не понравилось"
hypothesis_template: "{}."
---
# RuBERT-tiny for NLI (natural language inference)
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a related model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
|
jcblaise/bert-tagalog-base-cased | f49e54a8098d2f7e8759463c92cb32e8d3aa28d4 | 2021-11-12T03:21:35.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | jcblaise | null | jcblaise/bert-tagalog-base-cased | 134 | 1 | transformers | 4,173 | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# BERT Tagalog Base Cased
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
vumichien/wav2vec2-large-xlsr-japanese | 937c1d4c2912148d87e6c77756aa59854942cc6c | 2021-11-04T16:15:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vumichien | null | vumichien/wav2vec2-large-xlsr-japanese | 134 | 3 | transformers | 4,174 | ---
language: ja
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Japanese by Chien Vu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Japanese
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 30.84
- name: Test CER
type: cer
value: 17.85
widget:
- example_title: Japanese speech corpus sample 1
src: https://u.pcloud.link/publink/show?code=XZwhAlXZFOtXiqKHMzmYS9wXrCP8Yb7EtRd7
- example_title: Japanese speech corpus sample 2
src: https://u.pcloud.link/publink/show?code=XZ6hAlXZ5ccULt0YtrhJFl7LygKg0SJzKX0k
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) and Japanese speech corpus of Saruwatari-lab, University of Tokyo [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
import torch
import torchaudio
import librosa
from datasets import load_dataset
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\.\「\」\…\?\・]'
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = wakati.parse(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
import torch
import librosa
import torchaudio
from datasets import load_dataset, load_metric
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
#config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\.\「\」\…\?\・]'
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model.to("cuda")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = wakati.parse(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# evaluate function
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
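The CER reported below can be computed in the same way as the WER above; a minimal sketch, assuming the `cer` metric (which requires `jiwer`) is available through `datasets` and reusing `result` from the evaluation snippet:
```python
from datasets import load_metric

cer = load_metric("cer")
print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```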
## Test Result
**WER:** 30.84%,
**CER:** 17.85%
## Training
The Common Voice `train` and `validation` splits, together with the `basic5000` subset of the Japanese speech corpus, were used for training.
|
BM-K/KoSimCSE-roberta | 37a6d8cc47bcf2a83b6bae5987632680cbc58e0f | 2022-06-03T01:47:46.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"ko",
"transformers",
"korean"
] | feature-extraction | false | BM-K | null | BM-K/KoSimCSE-roberta | 134 | 1 | transformers | 4,175 | ---
language: ko
tags:
- korean
---
https://github.com/BM-K/Sentence-Embedding-is-all-you-need
# Korean-Sentence-Embedding
🍭 Korean sentence embedding repository. You can download the pre-trained models and run inference right away; it also provides an environment where individuals can train their own models.
## Quick tour
```python
import torch
from transformers import AutoModel, AutoTokenizer
def cal_score(a, b):
if len(a.shape) == 1: a = a.unsqueeze(0)
if len(b.shape) == 1: b = b.unsqueeze(0)
a_norm = a / a.norm(dim=1)[:, None]
b_norm = b / b.norm(dim=1)[:, None]
return torch.mm(a_norm, b_norm.transpose(0, 1)) * 100
model = AutoModel.from_pretrained('BM-K/KoSimCSE-roberta')
tokenizer = AutoTokenizer.from_pretrained('BM-K/KoSimCSE-roberta')
sentences = ['치타가 들판을 가로 질러 먹이를 쫓는다.',
'치타 한 마리가 먹이 뒤에서 달리고 있다.',
'원숭이 한 마리가 드럼을 연주한다.']
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
embeddings, _ = model(**inputs, return_dict=False)
score01 = cal_score(embeddings[0][0], embeddings[1][0])
score02 = cal_score(embeddings[0][0], embeddings[2][0])
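# Illustrative addition (not from the original card): cal_score returns cosine
# similarity scaled to 0-100, so the two cheetah sentences (score01) should
# score higher than the cheetah/monkey pair (score02).
print(score01, score02)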
```
## Performance
- Semantic Textual Similarity test set results <br>
| Model | AVG | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| KoSBERT<sup>†</sup><sub>SKT</sub> | 77.40 | 78.81 | 78.47 | 77.68 | 77.78 | 77.71 | 77.83 | 75.75 | 75.22 |
| KoSBERT | 80.39 | 82.13 | 82.25 | 80.67 | 80.75 | 80.69 | 80.78 | 77.96 | 77.90 |
| KoSRoBERTa | 81.64 | 81.20 | 82.20 | 81.79 | 82.34 | 81.59 | 82.20 | 80.62 | 81.25 |
| | | | | | | | | |
| KoSentenceBART | 77.14 | 79.71 | 78.74 | 78.42 | 78.02 | 78.40 | 78.00 | 74.24 | 72.15 |
| KoSentenceT5 | 77.83 | 80.87 | 79.74 | 80.24 | 79.36 | 80.19 | 79.27 | 72.81 | 70.17 |
| | | | | | | | | |
| KoSimCSE-BERT<sup>†</sup><sub>SKT</sub> | 81.32 | 82.12 | 82.56 | 81.84 | 81.63 | 81.99 | 81.74 | 79.55 | 79.19 |
| KoSimCSE-BERT | 83.37 | 83.22 | 83.58 | 83.24 | 83.60 | 83.15 | 83.54 | 83.13 | 83.49 |
| KoSimCSE-RoBERTa | 83.65 | 83.60 | 83.77 | 83.54 | 83.76 | 83.55 | 83.77 | 83.55 | 83.64 |
| | | | | | | | | | |
| KoSimCSE-BERT-multitask | 85.71 | 85.29 | 86.02 | 85.63 | 86.01 | 85.57 | 85.97 | 85.26 | 85.93 |
| KoSimCSE-RoBERTa-multitask | 85.77 | 85.08 | 86.12 | 85.84 | 86.12 | 85.83 | 86.12 | 85.03 | 85.99 | |
xlm-clm-enfr-1024 | dd9cb215d87baafeaf71f9b10e9678e90f5bf9f1 | 2022-07-22T08:06:22.000Z | [
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"arxiv:1901.07291",
"arxiv:1910.09700",
"transformers",
"autotrain_compatible"
] | fill-mask | false | null | null | xlm-clm-enfr-1024 | 133 | null | transformers | 4,176 | ---
language:
- multilingual
- en
- fr
---
# xlm-clm-enfr-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-French
- **License:** Unknown
- **Related Models:** [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for causal language modeling (next token prediction).
## Downstream Use
To learn more about this task and potential downstream uses, see the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for details on the training data and training procedure.
# Evaluation
## Testing Data, Factors & Metrics
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for details on the testing data, factors and metrics.
## Results
For xlm-clm-enfr-1024 results, see Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1
language_id = tokenizer.lang2id["en"] # 0
langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])
# We reshape it to be of size (batch_size, sequence_length)
langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)
outputs = model(input_ids, langs=langs)
```
</details> |
abhijithneilabraham/longformer_covid_qa | 56f4dbe055f971300439d12633d1652b9b56d8e5 | 2021-05-13T19:09:22.000Z | [
"pytorch",
"longformer",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"autotrain_compatible"
] | question-answering | false | abhijithneilabraham | null | abhijithneilabraham/longformer_covid_qa | 133 | null | transformers | 4,177 | # Dataset
---
datasets:
- covid_qa_deepset
---
COVID-19 question answering data obtained from [covid_qa_deepset](https://huggingface.co/datasets/covid_qa_deepset).
# Original Repository
The repository with the fine-tuning, inference and evaluation scripts can be found [here](https://github.com/abhijithneilabraham/Covid-QA).
# Model in action
```
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("abhijithneilabraham/longformer_covid_qa")
model = AutoModelForQuestionAnswering.from_pretrained("abhijithneilabraham/longformer_covid_qa")
question = "In this way, what do the mRNA-destabilising RBPs constitute ?"
text = """
In this way, mRNA-destabilising RBPs constitute a 'brake' on the immune system, which may ultimately be toggled therapeutically. I anticipate continued efforts in this area will lead to new methods of regaining control over inflammation in autoimmunity, selectively enhancing immunity in immunotherapy, and modulating RNA synthesis and virus replication during infection.
Another mRNA under post-transcriptional regulation by Regnase-1 and Roquin is Furin, which encodes a conserved proprotein convertase crucial in human health and disease. Furin, along with other PCSK family members, is widely implicated in immune regulation, cancer and the entry, maturation or release of a broad array of evolutionarily diverse viruses including human papillomavirus (HPV), influenza (IAV), Ebola (EboV), dengue (DenV) and human immunodeficiency virus (HIV). Here, Braun and Sauter review the roles of furin in these processes, as well as the history and future of furin-targeting therapeutics. 7 They also discuss their recent work revealing how two IFN-cinducible factors exhibit broad-spectrum inhibition of IAV, measles (MV), zika (ZikV) and HIV by suppressing furin activity. 8 Over the coming decade, I expect to see an ever-finer spatiotemporal resolution of host-oriented therapies to achieve safe, effective and broad-spectrum yet costeffective therapies for clinical use.
The increasing abundance of affordable, sensitive, high-throughput genome sequencing technologies has led to a recent boom in metagenomics and the cataloguing of the microbiome of our world. The MinION nanopore sequencer is one of the latest innovations in this space, enabling direct sequencing in a miniature form factor with only minimal sample preparation and a consumer-grade laptop computer. Nakagawa and colleagues here report on their latest experiments using this system, further improving its performance for use in resource-poor contexts for meningitis diagnoses. 9 While direct sequencing of viral genomic RNA is challenging, this system was recently used to directly sequence an RNA virus genome (IAV) for the first time. 10 I anticipate further improvements in the performance of such devices over the coming decade will transform virus surveillance efforts, the importance of which was underscored by the recent EboV and novel coronavirus (nCoV / COVID-19) outbreaks, enabling rapid deployment of antiviral treatments that take resistance-conferring mutations into account.
Decades of basic immunology research have provided a near-complete picture of the main armaments in the human antiviral arsenal. Nevertheless, this focus on mammalian defences and pathologies has sidelined examination of the types and roles of viruses and antiviral defences that exist throughout our biosphere. One case in point is the CRISPR/Cas antiviral immune system of prokaryotes, which is now repurposed as a revolutionary gene-editing biotechnology in plants and animals. 11 Another is the ancient lineage of nucleocytosolic large DNA viruses (NCLDVs), which are emerging human pathogens that possess enormous genomes of up to several megabases in size encoding hundreds of proteins with unique and unknown functions. 12 Moreover, hundreds of human-and avian-infective viruses such as IAV strain H5N1 are known, but recent efforts indicate the true number may be in the millions and many harbour zoonotic potential. 13 It is increasingly clear that host-virus interactions have generated truly vast yet poorly understood and untapped biodiversity. Closing this Special Feature, Watanabe and Kawaoka elaborate on neo-virology, an emerging field engaged in cataloguing and characterising this biodiversity through a global consortium. 14 I predict these efforts will unlock a vast wealth of currently unexplored biodiversity, leading to biotechnologies and treatments that leverage the host-virus interactions developed throughout evolution.
When biomedical innovations fall into the 'Valley of Death', patients who are therefore not reached all too often fall with them. Being entrusted with the resources and expectation to conceive, deliver and communicate dividends to society is both cherished and eagerly pursued at every stage of our careers. Nevertheless, the road to research translation is winding and is built on a foundation of basic research. Supporting industry-academia collaboration and nurturing talent and skills in the Indo-Pacific region are two of the four pillars of the National Innovation and Science Agenda. 2 These frame Australia's Medical Research and Innovation Priorities, which include antimicrobial resistance, global health and health security, drug repurposing and translational research infrastructure, 15 capturing many of the key elements of this CTI Special Feature. Establishing durable international relationships that integrate diverse expertise is essential to delivering these outcomes. To this end, NHMRC has recently taken steps under the International Engagement Strategy 16 to increase cooperation with its counterparts overseas. These include the Japan Agency for Medical Research and Development (AMED), tasked with translating the biomedical research output of that country. Given the reciprocal efforts at accelerating bilateral engagement currently underway, 17 the prospects for new areas of international cooperation and mobility have never been more exciting nor urgent. With the above in mind, all contributions to this CTI Special Feature I have selected from research presented by fellow invitees to the 2018 Awaji International Forum on Infection and Immunity (AIFII) and 2017 Consortium of Biological Sciences (ConBio) conferences in Japan. Both Australia and Japan have strong traditions in immunology and related disciplines, and I predict that the quantity, quality and importance of our bilateral cooperation will accelerate rapidly over the short to medium term. By expanding and cooperatively leveraging our respective research strengths, our efforts may yet solve the many pressing disease, cost and other sustainability issues of our time.
"""
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => a 'brake' on the immune system
``` |
alenusch/rugpt3-paraphraser | c0194b0c0db67521636675ac5b8a0d73050048d7 | 2021-05-21T12:54:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | alenusch | null | alenusch/rugpt3-paraphraser | 133 | null | transformers | 4,178 | Entry not found |
avichr/hebEMO_joy | f623e5735d250347d7244111a693dd7763eedf17 | 2022-01-11T16:28:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_joy | 133 | null | transformers | 4,179 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={arXiv preprint arXiv:2102.01909},
year={2021}
}
```
|
cross-encoder/nli-deberta-v3-small | 9b04ba8f6b3dd4fdecba34bf349399f969b85ee5 | 2021-12-27T22:27:07.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"transformers",
"microsoft/deberta-v3-small",
"license:apache-2.0",
"zero-shot-classification"
] | zero-shot-classification | false | cross-encoder | null | cross-encoder/nli-deberta-v3-small | 133 | 0 | transformers | 4,180 | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-small
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small).
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 91.65
- Accuracy on MNLI mismatched set: 87.55
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-small')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-small')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-small')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-small')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
flax-community/spanish-t5-small | c97f4667f06fc184dcd7f680c4a8da1f8d887fd2 | 2022-03-30T21:04:00.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"es",
"dataset:large_spanish_corpus",
"transformers",
"T5",
"Seq2Seq",
"EconderDecoder",
"Spanish",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/spanish-t5-small | 133 | 5 | transformers | 4,181 | ---
language: es
tags:
- T5
- Seq2Seq
- EconderDecoder
- Spanish
datasets:
- large_spanish_corpus
widget:
- text: "Érase un vez un"
license: mit
---
# Spanish T5 (small) trained on [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus).
This is a Spanish **T5** (small architecture) trained from scratch on the [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus), a.k.a. BETO's corpus, with [Flax](https://github.com/google/flax).
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
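A minimal usage sketch with `transformers` (the sentinel-style input and generation settings are illustrative, following T5's span-corruption pretraining format):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("flax-community/spanish-t5-small")
model = T5ForConditionalGeneration.from_pretrained("flax-community/spanish-t5-small")

text = "Érase una vez un <extra_id_0> que vivía en <extra_id_1>."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```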
## Dataset
The dataset is about 20 GB. 95% of the data was used for training and the remaining 5% for validation.
## [Metrics](https://huggingface.co/flax-community/spanish-t5-small/tensorboard) (on evaluation dataset)
- Accuracy: 0.675
## Team members
- Manuel Romero ([mrm8488](https://huggingface.co/mrm8488))
- María Grandury ([mariagrandury](https://huggingface.co/mariagrandury))
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2021spanish-t5-small,
title={Spanish T5 (small) by Manuel Romero},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/flax-community/spanish-t5-small}},
year={2021}
}
``` |
pucpr/clinicalnerpt-diagnostic | 1772db88f7092ffb662608c83fe738ab3acf8e15 | 2021-10-13T09:33:19.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-diagnostic | 133 | 3 | transformers | 4,182 | ---
language: "pt"
widget:
- text: "Uretrocistografia miccional, residuo pos miccional significativo."
- text: "No exame, apresentou apenas leve hiperemia no local do choque."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Diagnostic
The Diagnostic NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
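A minimal usage sketch with the `transformers` token-classification pipeline, reusing one of the widget examples above (the aggregation setting is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-diagnostic",
    aggregation_strategy="simple",  # merge wordpieces into entity spans
)
print(ner("Uretrocistografia miccional, residuo pos miccional significativo."))
```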
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
tupleblog/salim-classifier | 9a1d1a1a3ade3921f582717345e8ad832f5da6e8 | 2021-07-16T20:11:16.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | tupleblog | null | tupleblog/salim-classifier | 133 | null | transformers | 4,183 | ---
widget:
- text: "รัฐรับผิดชอบทุกชีวิตไม่ได้หรอกคนให้บริการต้องจัดการเองถ้าจะเปิดผับบาร์"
---

# Salim-Classifier
**Purpose:** These days it is terribly hard to find friends who love the nation, religion, monarchy and government; there are only the three-finger crowd and 'red buffaloes' waiting to do us harm.
Our team therefore built a model to help find salim friends from comments, since they grow rarer in Thai society by the day, as a guideline for building a strong salim community going forward.
## How to use
You can install `transformers` from Hugging Face and use the model as follows:
``` py
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
pipeline
)
# download model from hub
tokenizer = AutoTokenizer.from_pretrained("tupleblog/salim-classifier")
model = AutoModelForSequenceClassification.from_pretrained("tupleblog/salim-classifier")
# using pipeline to classify an input text
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
text = "จิตไม่ปกติ วันๆคอยแต่ให้คนเสี้ยมทะเลาะกันด่ากัน คอยจ้องแต่จะเล่นงานรัฐบาล ความคดด้านลบ"
classifier(text)
# >> [{'label': 'HIGHLY LIKELY SALIM', 'score': 0.9989368915557861}] congratulations, this is probably a salim!
```
## Data collection
We created example data and annotated it ourselves, then trained the model on it with WangchanBERTa.
The data may be biased, since the team collected it themselves.
## Try it out via HuggingFace
You can try the model via HuggingFace by pasting a Facebook comment into the text box on the website:
[huggingface.co/tupleblog/salim-classifier](https://huggingface.co/tupleblog/salim-classifier)
**Example sentences**
- รัฐรับผิดชอบทุกชีวิตไม่ได้หรอกคนให้บริการต้องจัดการเองถ้าจะเปิดผับบาร์
- แค่เคารพกฎหมาย คนพวกนี้ยังทำไม่ได้เลย แล้วจะถามหาความก้าวหน้าของประเทศ ?
- หมามันยังยืนเคารพธงชาติ แต่พวกนี้กลับทำอะไรไม่อายเดรัจฉาน
- ถ้าไม่ชอบประชาธิปไตย จะไปใช้วิธีการปกครองแบบไหนหรอครับ แล้วแบบไหนถึงดีหรอ ผมไม่เข้าใจครับอดีตผ่านไปแล้ว ทำไมไม่มองที่อนาคตกันหละครับ
- อีพวกสามกีบ`<pad>`
For texts shorter than 50 characters, appending `<pad>` after the text is recommended for higher accuracy.
## Performance
We report performance on 20% evaluation set (accuracy, precision, recall, F1-score macro) as follows:
| Accuracy | Precision | Recall | F1 |
| -------- | --------- | ------ | ------ |
| 86.15% | 86.12% | 86.13% | 86.13% |
|
BeIR/sparta-msmarco-distilbert-base-v1 | 34bbf9fb00f396055b989346faae51a1677b93a4 | 2021-10-01T19:04:27.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:2009.13013",
"arxiv:2104.08663",
"transformers"
] | feature-extraction | false | BeIR | null | BeIR/sparta-msmarco-distilbert-base-v1 | 132 | null | transformers | 4,184 | # SPARTA
Re-Implementation of [SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval](https://arxiv.org/abs/2009.13013). It is the re-implementation we used for [BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models](https://arxiv.org/abs/2104.08663).
Also have a look at our BEIR repository: https://github.com/UKPLab/beir
Have a look at https://github.com/nreimers/beir-sparta for the training and inference code of this SPARTA model
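Since the model is exposed here as a plain feature-extraction checkpoint, the snippet below is only a rough sketch of the SPARTA scoring idea (max dot product between each query term's non-contextual input embedding and the contextual passage token embeddings, passed through log(1 + ReLU)); the example texts are made up, and the exact implementation used in BEIR lives in the repositories linked above.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("BeIR/sparta-msmarco-distilbert-base-v1")
model = AutoModel.from_pretrained("BeIR/sparta-msmarco-distilbert-base-v1")

passage = "Python is a popular programming language created by Guido van Rossum."
query = "who created python"

with torch.no_grad():
    # Contextual token embeddings of the passage
    passage_emb = model(**tokenizer(passage, return_tensors="pt")).last_hidden_state[0]
    # Non-contextual input embeddings of the query terms
    query_ids = tokenizer(query, add_special_tokens=False, return_tensors="pt")["input_ids"][0]
    query_emb = model.get_input_embeddings()(query_ids)
    # Per-term score: log(1 + relu(max_j e_t . h_j)), summed over query terms
    term_scores = torch.log1p(torch.relu(query_emb @ passage_emb.T).max(dim=1).values)
    print(float(term_scores.sum()))
```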
|
Davlan/bert-base-multilingual-cased-finetuned-hausa | e08eaa625a687776657e84c6c0a4ce5a8fabc6fd | 2022-06-27T10:56:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ha",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-hausa | 132 | null | transformers | 4,185 |
---
language: ha
datasets:
---
# bert-base-multilingual-cased-finetuned-hausa
## Model description
**bert-base-multilingual-cased-finetuned-hausa** is a **Hausa BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Hausa language texts. It provides **better performance** than multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-hausa')
>>> unmasker("Shugaban [MASK] Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence':
'[CLS] Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]',
'score': 0.9762618541717529,
'token': 22045,
'token_str': 'Nigeria'},
{'sequence': '[CLS] Shugaban Ka Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.007239189930260181,
'token': 25444,
'token_str': 'Ka'},
{'sequence': '[CLS] Shugaban, Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001990817254409194,
'token': 117,
'token_str': ','},
{'sequence': '[CLS] Shugaban Ghana Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001566368737258017,
'token': 28682,
'token_str': 'Ghana'},
{'sequence': '[CLS] Shugabanmu Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.0009375187801197171,
'token': 11717,
'token_str': '##mu'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ha_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.65 | 91.31
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | 84.76 | 90.98
### BibTeX entry and citation info
By David Adelani
```
```
|
Hate-speech-CNERG/dehatebert-mono-german | 53a24df030e8e20e7880a161494fb5922ce34617 | 2021-09-25T13:55:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"de",
"arxiv:2004.06465",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-german | 132 | null | transformers | 4,186 | ---
language: de
license: apache-2.0
---
This model is used for detecting **hate speech** in **German language** text. The "mono" in the name refers to the monolingual setting, where the model is trained using only German-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score, 0.649794, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
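A minimal usage sketch with the `transformers` text-classification pipeline (the German example sentence is made up; the label names come from the model's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-german",
)
print(classifier("Das ist ein ganz normaler Satz."))
```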
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
LegolasTheElf/Wav2vec2_XLSR_Bengali | 9c1fdc849f7a95cb0703ead87ceffc82dbee889d | 2022-01-25T11:38:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2vec2_XLSR_Bengali | 132 | null | transformers | 4,187 | Entry not found |
ozcangundes/T5-base-for-BioQA | 20a289e9e962dcdbe2bb454ef78fed92b261eafe | 2021-09-22T09:31:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"english",
"dataset:bioASQ",
"arxiv:1910.10683",
"transformers",
"license:mit",
"question-answering",
"autotrain_compatible"
] | question-answering | false | ozcangundes | null | ozcangundes/T5-base-for-BioQA | 132 | null | transformers | 4,188 | ---
language: english
datasets:
- bioASQ
pipeline_tag: question-answering
license: mit
---
# T5-base model fine-tuned on BioASQ for Biological Question Answering 👩⚕️👨⚕️
[Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task.
## Details of T5
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Dependencies
transformers == 4.3.3
sentencepiece >= 0.1.94
## Usage 🚀
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA")
model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA")
def get_answer(question,context):
source_encoding=tokenizer(
question,
context,
max_length=512,
padding="max_length",
truncation="only_second",
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"])
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
question={
"context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.",
"question":"Which protein does empagliflozin inhibit?"
}
get_answer(question["question"],question["context"])
```
> SGLT2
### Example 2
```python
question2={
"context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.",
"question":"What is the typical rash associated with gluten?"
}
get_answer(question2["question"],question2["context"])
```
> dermatitis herpetiformis
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|
sismetanin/rubert-ru-sentiment-rusentiment | f3d755e39a6af467a4e90b9a1c486ea1d2aa3852 | 2021-05-20T06:11:34.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/rubert-ru-sentiment-rusentiment | 132 | null | transformers | 4,189 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## RuBERT-Base-ru-sentiment-RuSentiment
RuBERT-ru-sentiment-RuSentiment is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
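A minimal usage sketch with the `transformers` text-classification pipeline (the Russian example sentence is made up; the label names depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="sismetanin/rubert-ru-sentiment-rusentiment",
)
print(classifier("Очень понравилось, всем рекомендую!"))
```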
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>wighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
north/t5_base_NCC | 633892e183740133c83f81483932498a5da67055 | 2022-06-01T19:41:01.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | north | null | north/t5_base_NCC | 132 | 2 | transformers | 4,190 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5-models are a set of Norwegian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|✔|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/norwegian_NCC_plus_English_t5x_base/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to run their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq2seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) for the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay) and no early stopping, nor was the recommended rank classification used. We used a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret; however, the results for the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC), with additional data from MC4 and English Wikipedia.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for certain tasks: for translation and NLI, for instance, it is well documented that a step of unsupervised LM training before finetuning gives a clear benefit.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
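As a hedged illustration only (not the author's exact recipe), a finetuning run with the fixed 1e-3 learning rate could be set up roughly like this in Transformers; the dataset column names and the remaining hyper-parameters are assumptions:
```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "north/t5_base_NCC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def preprocess(batch):
    # Hypothetical column names "source" and "target".
    model_inputs = tokenizer(batch["source"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

args = Seq2SeqTrainingArguments(
    output_dir="north-t5-base-finetuned",
    learning_rate=1e-3,            # fixed learning rate, as recommended above
    lr_scheduler_type="constant",  # no decay
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=your_dataset.map(preprocess, batched=True),
#                          tokenizer=tokenizer)
# trainer.train()
```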
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and better suited as a basis for translation tasks.
While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format.
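For instance, a hedged inference sketch in PyTorch (the masked example sentence is taken from the widget above; the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("north/t5_base_NCC")
model = AutoModelForSeq2SeqLM.from_pretrained("north/t5_base_NCC")  # PyTorch weights

text = ("<extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd "
        "på <extra_id_1>.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))

# Flax and TensorFlow variants can presumably be loaded through
# FlaxAutoModelForSeq2SeqLM / TFAutoModelForSeq2SeqLM in the same way.
```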
## Future
I will continue to train and release additional models in this set. Which models are added will depend on feedback from the users.
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
Nehc/AGIRussia | f00acdd0258fd1e513c7b235f29151b9ef5b0eea | 2022-06-05T20:07:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers"
] | text-generation | false | Nehc | null | Nehc/AGIRussia | 132 | null | transformers | 4,191 | ---
language:
- ru
widget:
- text: "<IN>Как нам все-таки сделать AGI?\n<OUT>"
metrics:
- loss: 3.3
- perplexity: 25.7528
---
Started from sberbank-ai/rugpt3medium_based_on_gpt2 and finetuned on AGIRussia chats (Russian).
At the moment, only 3 epochs have been trained (perplexity still falling).
Work in progress...
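The card does not include a usage snippet; a hedged sketch following the widget's `<IN>`/`<OUT>` prompt format might look like this (the generation settings are assumptions):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Nehc/AGIRussia")
prompt = "<IN>Как нам все-таки сделать AGI?\n<OUT>"
result = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```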
|
yanekyuk/bert-uncased-keyword-extractor | 47b62643118087b5366600b97f73b1d3ba105303 | 2022-06-06T09:27:10.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | yanekyuk | null | yanekyuk/bert-uncased-keyword-extractor | 132 | null | transformers | 4,192 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- en
widget:
- text: "Broadcom agreed to acquire cloud computing company VMware in a $61 billion (€57bn) cash-and stock deal, massively diversifying the chipmaker’s business and almost tripling its software-related revenue to about 45% of its total sales. By the numbers: VMware shareholders will receive either $142.50 in cash or 0.2520 of a Broadcom share for each VMware stock. Broadcom will also assume $8 billion of VMware's net debt."
- text: "Canadian Natural Resources Minister Jonathan Wilkinson told Bloomberg that the country could start supplying Europe with liquefied natural gas (LNG) in as soon as three years by converting an existing LNG import facility on Canada’s Atlantic coast into an export terminal. Bottom line: Wilkinson said what Canada cares about is that the new LNG facility uses a low-emission process for the gas and is capable of transitioning to exporting hydrogen later on."
- text: "Google is being investigated by the UK’s antitrust watchdog for its dominance in the \"ad tech stack,\" the set of services that facilitate the sale of online advertising space between advertisers and sellers. Google has strong positions at various levels of the ad tech stack and charges fees to both publishers and advertisers. A step back: UK Competition and Markets Authority has also been investigating whether Google and Meta colluded over ads, probing into the advertising agreement between the two companies, codenamed Jedi Blue."
- text: "Shares in Twitter closed 6.35% up after an SEC 13D filing revealed that Elon Musk pledged to put up an additional $6.25 billion of his own wealth to fund the $44 billion takeover deal, lifting the total to $33.5 billion from an initial $27.25 billion. In other news: Former Twitter CEO Jack Dorsey announced he's stepping down, but would stay on Twitter’s board \\“until his term expires at the 2022 meeting of stockholders.\""
model-index:
- name: bert-uncased-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-keyword-extractor
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1247
- Precision: 0.8547
- Recall: 0.8825
- Accuracy: 0.9741
- F1: 0.8684
## Model description
More information needed
## Intended uses & limitations
More information needed
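As a hedged illustration of intended use (not part of the original card), keywords can presumably be extracted with the token-classification pipeline; the aggregation strategy and score threshold below are assumptions:
```python
from transformers import pipeline

extractor = pipeline("token-classification",
                     model="yanekyuk/bert-uncased-keyword-extractor",
                     aggregation_strategy="simple")  # merge sub-word pieces into spans

text = ("Broadcom agreed to acquire cloud computing company VMware "
        "in a $61 billion cash-and-stock deal.")
keywords = [span["word"] for span in extractor(text) if span["score"] > 0.5]
print(keywords)
```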
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
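For reference, these settings roughly correspond to the following `TrainingArguments` sketch (a hedged reconstruction, not the original training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-uncased-keyword-extractor",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,  # mixed-precision training (native AMP)
)
```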
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.165 | 1.0 | 1875 | 0.1202 | 0.7109 | 0.7766 | 0.9505 | 0.7423 |
| 0.1211 | 2.0 | 3750 | 0.1011 | 0.7801 | 0.8186 | 0.9621 | 0.7989 |
| 0.0847 | 3.0 | 5625 | 0.0945 | 0.8292 | 0.8044 | 0.9667 | 0.8166 |
| 0.0614 | 4.0 | 7500 | 0.0927 | 0.8409 | 0.8524 | 0.9711 | 0.8466 |
| 0.0442 | 5.0 | 9375 | 0.1057 | 0.8330 | 0.8738 | 0.9712 | 0.8529 |
| 0.0325 | 6.0 | 11250 | 0.1103 | 0.8585 | 0.8743 | 0.9738 | 0.8663 |
| 0.0253 | 7.0 | 13125 | 0.1204 | 0.8453 | 0.8825 | 0.9735 | 0.8635 |
| 0.0203 | 8.0 | 15000 | 0.1247 | 0.8547 | 0.8825 | 0.9741 | 0.8684 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese | e1aa040bf0f016c608a5fbefed4ccfdda7215e18 | 2022-07-25T06:25:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"clip",
"zh",
"image-text",
"feature-extraction",
"license:apache-2.0"
] | feature-extraction | false | IDEA-CCNL | null | IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese | 132 | 3 | transformers | 4,193 | ---
license: apache-2.0
# inference: false
# pipeline_tag: zero-shot-image-classification
pipeline_tag: feature-extraction
# inference:
# parameters:
tags:
- clip
- zh
- image-text
- feature-extraction
---
# Model Details
This model is a Chinese CLIP model trained on the [Noah-Wukong Dataset](https://wukong-dataset.github.io/wukong-dataset/), which contains about 100M Chinese image-text pairs. We use ViT-B-32 from [OpenAI](https://github.com/openai/CLIP) as the image encoder and the Chinese pre-trained language model [chinese-roberta-wwm](https://huggingface.co/hfl/chinese-roberta-wwm-ext) as the text encoder. We freeze the image encoder and only finetune the text encoder. The model was trained for 20 epochs, which took about 10 days on 8 A100 GPUs.
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies. We will release more image-text models trained on Chinese data to benefit the Chinese community.
# Usage
```python3
from PIL import Image
import requests
import clip
import torch
from transformers import BertForSequenceClassification, BertConfig, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np
query_texts = ["一只猫", "一只狗", '两只猫', '两只老虎', '一只老虎']  # Input query texts; replace them with any Chinese captions
# Load the Taiyi Chinese text encoder
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # Replace with any image URL
# Load the CLIP image encoder
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")
with torch.no_grad():
image_features = clip_model.get_image_features(**image)
text_features = text_encoder(text).logits
# Normalize the features
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
# Compute cosine similarity; logit_scale is a scaling factor
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
# Evaluation
### Zero-Shot Classification
| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-102M-Chinese | ImageNet1k-CN | 41.00% | 69.19% |
### Zero-Shot Text-to-Image Retrieval
| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-102M-Chinese | Flickr30k-CNA-test | 44.06% | 71.42% | 80.84% |
| Taiyi-CLIP-Roberta-102M-Chinese | COCO-CN-test | 46.30% | 78.00% | 89.00% |
| Taiyi-CLIP-Roberta-102M-Chinese | wukong50k | 48.67% | 81.77% | 90.09% |
# Citation
If you find this resource useful, please cite the following in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Geotrend/distilbert-base-en-es-pt-cased | c27a3458f33d900b142be324022e3d28581ca830 | 2021-07-29T12:33:27.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-es-pt-cased | 131 | null | transformers | 4,194 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-es-pt-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-es-pt-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-es-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
cardiffnlp/twitter-roberta-base-stance-abortion | e73434e7f22370615f75e6b86f5df6ca130c6d18 | 2021-05-20T15:07:21.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-stance-abortion | 131 | null | transformers | 4,195 | |
elozano/tweet_emotion_eval | f848c42f40dcf6afc0b834271377bb227fc3ef8c | 2022-02-07T18:04:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"transformers",
"license:mit"
] | text-classification | false | elozano | null | elozano/tweet_emotion_eval | 131 | 3 | transformers | 4,196 | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "Stop sharing which songs did you listen to during this year on Spotify, NOBODY CARES"
example_title: "Anger"
- text: "I love that joke HAHAHAHAHA"
example_title: "Joy"
- text: "Despite I've not studied a lot for this exam, I think I will pass 😜"
example_title: "Optimism"
- text: "My dog died this morning..."
example_title: "Sadness"
---
|
huggingtweets/dadsaysjokes | 7ef5efee90c740984da14d9fe24cf60ea7cf812e | 2021-05-22T00:10:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dadsaysjokes | 131 | null | transformers | 4,197 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/923451113239703552/62jMMnTQ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dad Jokes 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@dadsaysjokes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dadsaysjokes's tweets](https://twitter.com/dadsaysjokes).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3205</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>47</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>8</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3150</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3tibg7vt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dadsaysjokes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1pxb4a3v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1pxb4a3v/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dadsaysjokes'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
kleinay/qanom-seq2seq-model-joint | a053034de46291e710bb27301d4c1127c9280b71 | 2022-04-04T11:06:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:kleinay/qanom",
"transformers",
"semantic-role-labeling",
"question-answer generation",
"autotrain_compatible"
] | text2text-generation | false | kleinay | null | kleinay/qanom-seq2seq-model-joint | 131 | 2 | transformers | 4,198 | ---
language:
- en
tags:
- semantic-role-labeling
- question-answer generation
- pytorch
datasets:
- kleinay/qanom
---
# A Seq2Seq model for QANom parsing
This is a `t5-small` pretrained model, fine-tuned jointly on the tasks of generating QASRL and QANom QAs.
"QANom" stands for "QASRL for Nominalizations", which is an adaptation of [QASRL (Question-Answer driven Semantic Role Labeling)](https://qasrl.org) for the nominal predicates domain. See the [QANom paper](https://aclanthology.org/2020.coling-main.274/) for details about the task. The QANom Dataset official site is a [Google drive](https://drive.google.com/drive/folders/15PHKVdPm65ysgdkV47z6J_73kETk7_of), but we also wrapped it into a [Huggingface Dataset](https://huggingface.co/datasets/biu-nlp/qanom), which is easier to plug-and-play with (check out our [HF profile](https://huggingface.co/biu-nlp) for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).
## Demo
Visit [our demo](https://huggingface.co/spaces/kleinay/qanom-seq2seq-demo) for interactively exploring our model!
## Usage
The model and tokenizer can be downloaded as simply as running:
```python
import transformers
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
tokenizer = transformers.AutoTokenizer.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
```
However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
In order to use the model for QANom parsing easily, we suggest downloading the [`pipeline.py`](https://huggingface.co/kleinay/qanom-seq2seq-model-joint/blob/main/pipeline.py) file from this repository, and then use the `QASRL_Pipeline` class:
```python
from pipeline import QASRL_Pipeline
pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-joint")
pipe("The student was interested in Luke 's <predicate> research about sea animals .", verb_form="research", predicate_type="nominal")
```
Which will output:
```json
[{'generated_text': 'who _ _ researched something _ _ ?<extra_id_7> Luke',
'QAs': [{'question': 'who researched something ?', 'answers': ['Luke']}]}]
```
You can learn more about using `transformers.pipelines` in the [official docs](https://huggingface.co/docs/transformers/main_classes/pipelines).
Notice that you need to specify which word in the sentence is the predicate that the questions will ask about. By default, you should precede the predicate with the `<predicate>` symbol, but you can also specify your own predicate marker:
```python
pipe("The student was interested in Luke 's <PRED> research about sea animals .", verb_form="research", predicate_type="nominal", predicate_marker="<PRED>")
```
In addition, you can specify additional kwargs for controlling the model's decoding algorithm:
```python
pipe("The student was interested in Luke 's <predicate> research about sea animals .", verb_form="research", predicate_type="nominal", num_beams=3)
```
|
kz/mt5base-finetuned-patentsum-japanese-small | e26d766d858ed991fac7906e06856d5c27ae0784 | 2022-05-19T06:50:32.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"transformers",
"Summarization",
"japanese",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | kz | null | kz/mt5base-finetuned-patentsum-japanese-small | 131 | 2 | transformers | 4,199 | ---
language: "ja"
widget:
- text: "請求項 <extra_id_0>"
license: "mit"
tags:
- Summarization
- japanese
---
Google's mT5-base fine-tuned on Japanese text to summarize patent claims in a limited pharmaceutical domain.
# Japanese Patent Claim Summarization (Pharmaceutical Domain Only)
- """【請求項1】
ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、
a)以下を含む重鎖可変領域:
i)配列番号3を含む第1のCDR;
ii)配列番号4を含む第2のCDR;
iii)配列番号5を含む第3のCDR;及び
b)以下を含む軽鎖可変領域:
i)配列番号6を含む第1のCDR;
ii)配列番号7を含む第2のCDR;
iii)配列番号8を含む第3のCDR;
を含む、抗体。(請求項2~19省略)【請求項20】
前記自己免疫疾患が、関節リウマチ、全身性エリテマトーデス、炎症性腸疾患、潰瘍性大腸炎及び移植片対宿主病からなる群から選択される、請求項19記載の方法。
"""
- →"本発明は、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体に関する。本発明はまた、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体を、それを必要とする患者に投与することを含む、自己免疫疾患の治療方法に関する。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: *
- prefix: "patent claim summarization: " (note: the model was trained on this single task; see the usage sketch below.)
- This is meant to give a feel for the quality you can expect when building a summarization model from 20,000 texts in a specific domain.
- Note: the Hosted Inference API only outputs part of the summary. When using the model, we recommend running the "Use in Transformers" code in your own environment.
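A hedged usage sketch (the task prefix comes from the card; the generation settings and the truncated claim text are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kz/mt5base-finetuned-patentsum-japanese-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

claim_text = "【請求項1】 ..."  # a Japanese patent claim (truncated here)
inputs = tokenizer("patent claim summarization: " + claim_text,
                   return_tensors="pt", max_length=1024, truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```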
# References
- https://huggingface.co/blog/how-to-generate
- The preprocessing was not optimal; it will be fixed.
- Add a prefix so that claims can be rewritten into broader or narrower concepts on demand.
- Add a prefix so that summaries can follow an arbitrary theme.
- Even without adding a prefix, summaries can be steered toward an arbitrary theme to some extent, e.g. by exploiting the structure of the claims, or by correcting generation with a model that judges whether the output follows the theme.
**check in progress**
## License
- The MIT license |