modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
staka/takomt | faddfbf2eb4d804a4fe1f87db8d8ebb4b53fa24b | 2022-05-21T22:56:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"en",
"es",
"fr",
"it",
"ja",
"ru",
"uk",
"transformers",
"translation",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | translation | false | staka | null | staka/takomt | 131 | null | transformers | 4,200 | ---
license: cc-by-sa-4.0
language:
- de
- en
- es
- fr
- it
- ja
- ru
- uk
tags:
- translation
---
# TakoMT
This is a translation model using Marian-NMT.
For more details, please see [my repository](https://github.com/s-taka/fugumt).
In addition to the data listed in the repository, I also used [ParaCrawl](https://paracrawl.eu/).
* source languages: de, en, es, fr, it, ru, uk
* target language: ja
### How to use
This model uses the transformers and sentencepiece packages:
```bash
pip install transformers sentencepiece
```
You can use this model directly with a pipeline:
```python
from transformers import pipeline
tako_translator = pipeline('translation', model='staka/takomt')
tako_translator('This is a cat.')
```
### Eval results
The results of the evaluation using [Tatoeba](https://tatoeba.org/ja) (500 randomly selected sentences) are as follows:
|source |target |BLEU(*1)|
|-------|-------|--------|
|de |ja |27.8 |
|en |ja |28.4 |
|es |ja |32.0 |
|fr |ja |27.9 |
|it |ja |24.3 |
|ru |ja |27.3 |
|uk |ja |29.8 |
(*1) sacrebleu --tokenize ja-mecab
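For reference, a score with the same tokenization can be computed from hypothesis/reference lists with the sacrebleu Python API (a minimal sketch; `hyps` and `refs` are hypothetical placeholder lists):
```python
import sacrebleu  # pip install "sacrebleu[ja]" to enable the ja-mecab tokenizer

# hyps: system translations, refs: reference translations (placeholders)
hyps = ["これは猫です。"]
refs = ["これは猫です。"]
bleu = sacrebleu.corpus_bleu(hyps, [refs], tokenize="ja-mecab")
print(bleu.score)
```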
|
cardiffnlp/tweet-topic-21-single | b4e1fd1462122301b213e150a42a141235082db8 | 2022-06-09T10:34:33.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"arxiv:2202.03829",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/tweet-topic-21-single | 131 | null | transformers | 4,201 | # tweet-topic-21-single
This is a roBERTa-base model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and finetuned for single-label topic classification on a corpus of 6,997 tweets.
The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
- 0 -> arts_&_culture;
- 1 -> business_&_entrepreneurs;
- 2 -> pop_culture;
- 3 -> daily_life;
- 4 -> sports_&_gaming;
- 5 -> science_&_technology
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = "cardiffnlp/tweet-topic-21-single"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "Tesla stock is on the rise!"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# TF
#model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = model.config.id2label
#text = "Tesla stock is on the rise!"
#encoded_input = tokenizer(text, return_tensors='tf')
#output = model(**encoded_input)
#scores = output[0][0]
#scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = class_mapping[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) business_&_entrepreneurs 0.8361
2) science_&_technology 0.0904
3) pop_culture 0.0288
4) daily_life 0.0178
5) arts_&_culture 0.0137
6) sports_&_gaming 0.0133
``` |
valurank/headline_generator_baseline | ef72ce346d1c3c62dde541e065cbf0011481bb4c | 2022-07-11T08:59:50.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | valurank | null | valurank/headline_generator_baseline | 131 | null | transformers | 4,202 | ---
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
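The hyperparameters listed above map roughly onto the following `Seq2SeqTrainingArguments` (a sketch under the assumption that the standard `transformers` Trainer API was used; `output_dir` is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

# a sketch of the configuration listed above; output_dir is hypothetical
training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-headline-generator",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
)
```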
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4853 | 0.89 | 500 | 0.3318 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pucpr-br/postagger-bio-portuguese | f4898fecc9561120567e1165dba9bb3dbe2ab0af | 2022-07-25T22:49:37.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:MacMorpho",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr-br | null | pucpr-br/postagger-bio-portuguese | 131 | 1 | transformers | 4,203 | ---
language: "pt"
widget:
- text: "O paciente recebeu no hospital e falou com a médica"
- text: "COMO ESQUEMA DE MEDICAÇÃO PARA ICC PRESCRITO NO ALTA, RECEBE FUROSEMIDA 40 BID, ISOSSORBIDA 40 TID, DIGOXINA 0,25 /D, CAPTOPRIL 50 TID E ESPIRONOLACTONA 25 /D."
- text: "ESTAVA EM USO DE FUROSEMIDA 40 BID, DIGOXINA 0,25 /D, SINVASTATINA 40 /NOITE, CAPTOPRIL 50 TID, ISOSSORBIDA 20 TID, AAS 100 /D E ESPIRONOLACTONA 25 /D."
datasets:
- MacMorpho
---
# POS-Tagger Bio Portuguese
We fine-tuned the BioBERTpt(all) model on the MacMorpho corpus for the POS-tagging task for 10 epochs, achieving an overall F1-score of 0.9818.
Metrics:
```
Precision Recall F1 Support
accuracy 0.98 38320
macro avg 0.95 0.94 0.94 38320
weighted avg 0.98 0.98 0.98 38320
F1: 0.9818 Accuracy: 0.9818
```
Parameters:
```
nclasses = 27
nepochs_total = 30
nepochs_stop = 12 (stop in 12th because early stop)
batch_size = 32
batch_status = 32
learning_rate = 1e-5
early_stop = 3
max_length = 200
```
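## How to use
A minimal usage sketch with the `transformers` token-classification pipeline (the example sentence is taken from the widget above):
```python
from transformers import pipeline

# load the POS tagger as a token-classification pipeline
tagger = pipeline("token-classification", model="pucpr-br/postagger-bio-portuguese")

# each returned entry contains a token and its predicted POS tag
print(tagger("O paciente recebeu no hospital e falou com a médica"))
```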
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
coming soon
```
## Questions?
Please post a GitHub issue in the [NLP Portuguese Chunking](https://github.com/HAILab-PUCPR/nlp-portuguese-chunking) repository.
|
NlpHUST/t5-vi-en-small | 0840843a5b314af4600bda2303a7a55d7e4f152f | 2021-06-23T03:45:23.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NlpHUST | null | NlpHUST/t5-vi-en-small | 130 | null | transformers | 4,204 | ---
language:
- vi
tags:
- t5
- seq2seq
---
# Machine translation for Vietnamese
## Model Description
T5-vi-en-small is a transformer model for Vietnamese-to-English machine translation, designed using the T5 architecture.
## Training data
T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese).
### How to use
```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small")
model.to(device)
src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn"
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=256,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Expected output: Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons
``` |
castorini/tct_colbert-v2-msmarco | a07b49eecf416152c7b43e8de0ec401604db1abd | 2021-08-12T01:06:11.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/tct_colbert-v2-msmarco | 130 | null | transformers | 4,205 | This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper:
> Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-CoupledTeachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_.
You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
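For a quick check of the encoder outside Pyserini, embeddings can be extracted with the generic `transformers` API (a minimal sketch; the exact query/passage preprocessing and pooling used in the Pyserini reproduction may differ, so refer to the guide above for the reference implementation):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/tct_colbert-v2-msmarco")
model = AutoModel.from_pretrained("castorini/tct_colbert-v2-msmarco")

inputs = tokenizer("what is a dense retriever?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# mean-pool the token embeddings as one simple way to obtain a single vector
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```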
|
satyaalmasian/temporal_tagger_roberta2roberta | eb10738132d26bc29d648e9d7bea4a63641c7552 | 2021-09-21T11:11:22.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | satyaalmasian | null | satyaalmasian/temporal_tagger_roberta2roberta | 130 | 5 | transformers | 4,206 | # RoBERTa2RoBERTa temporal tagger
Seq2seq model for temporal tagging of plain text using the RoBERTa language model. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and released in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Model description
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use RoBERTa in an encoder-decoder architecture for text generation, where the input is raw text and the output is the temporally annotated text. The model is pre-trained on a weakly annotated dataset from a rule-based system (HeidelTime) and fine-tuned on the temporal benchmark datasets (Wikiwars, Tweets, Tempeval-3).
# Intended uses & limitations
This model is best used accompanied with code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher, in the repository we provide cleaning functions for the output and insert the temporal tags from the generated text in the input text. If you have temporally annotated data you can fine-tune this model.
# How to use
You can load the model as follows:
```
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_roberta2roberta")
model = EncoderDecoderModel.from_pretrained("satyaalmasian/temporal_tagger_roberta2roberta")
```
For inference, use:
```
model_inputs = tokenizer(input_text, truncation=True, return_tensors="pt")
out = model.generate(**model_inputs)
decoded_preds = tokenizer.batch_decode(out, skip_special_tokens=True)
```
For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
To further fine-tune, use the `Seq2SeqTrainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_seq2seq_bert_roberta.py).
```
trainer = Seq2SeqTrainer(
model=model2model,
tokenizer=tokenizer,
args=training_args,
compute_metrics=metrics.compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
)
train_result=trainer.train()
```
where the `training_args` is an instance of `Seq2SeqTrainingArguments`.
# Training data
We use four data sources:
For pre-training: 1 million weakly annotated samples from HeidelTime. The samples are from news articles published between the 1st of January 2019 and the 30th of July.
Fine-tuning: the [Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars and Tweets datasets. For the correct data versions please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Training procedure
The model is pre-trained on the weakly labeled data for 3 epochs on the train set, starting from the publicly available `roberta-base` checkpoints on Hugging Face, with a batch size of 12. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
Additionally, we use 2000 warmup steps.
We fine-tune on the 3 benchmark datasets for 8 epochs with 5 different random seeds; this version of the model uses seed=4.
The batch size and the learning rate are the same as in the pre-training setup, but the warm-up steps are reduced to 100.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
For inference in seq2seq models, we use greedy decoding, since beam search had sub-optimal results.
|
zanelim/singbert | 79cafd739beff7d1768a3ef32149c15f75c1013f | 2021-05-20T09:38:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"en",
"dataset:reddit singapore, malaysia",
"dataset:hardwarezone",
"transformers",
"singapore",
"sg",
"singlish",
"malaysia",
"ms",
"manglish",
"bert-base-uncased",
"license:mit"
] | null | false | zanelim | null | zanelim/singbert | 130 | null | transformers | 4,207 | ---
language: en
tags:
- singapore
- sg
- singlish
- malaysia
- ms
- manglish
- bert-base-uncased
license: mit
datasets:
- reddit singapore, malaysia
- hardwarezone
widget:
- text: "kopi c siew [MASK]"
- text: "die [MASK] must try"
---
# Model name
SingBert - Bert for Singlish (SG) and Manglish (MY).
## Model description
[BERT base uncased](https://github.com/google-research/bert#pre-trained-models), with pre-training finetuned on
[singlish](https://en.wikipedia.org/wiki/Singlish) and [manglish](https://en.wikipedia.org/wiki/Manglish) data.
## Intended uses & limitations
#### How to use
```python
>>> from transformers import pipeline
>>> nlp = pipeline('fill-mask', model='zanelim/singbert')
>>> nlp("kopi c siew [MASK]")
[{'sequence': '[CLS] kopi c siew dai [SEP]',
'score': 0.5092713236808777,
'token': 18765,
'token_str': 'dai'},
{'sequence': '[CLS] kopi c siew mai [SEP]',
'score': 0.3515934646129608,
'token': 14736,
'token_str': 'mai'},
{'sequence': '[CLS] kopi c siew bao [SEP]',
'score': 0.05576375499367714,
'token': 25945,
'token_str': 'bao'},
{'sequence': '[CLS] kopi c siew. [SEP]',
'score': 0.006019321270287037,
'token': 1012,
'token_str': '.'},
{'sequence': '[CLS] kopi c siew sai [SEP]',
'score': 0.0038361591286957264,
'token': 18952,
'token_str': 'sai'}]
>>> nlp("one teh c siew dai, and one kopi [MASK].")
[{'sequence': '[CLS] one teh c siew dai, and one kopi c [SEP]',
'score': 0.6176503300666809,
'token': 1039,
'token_str': 'c'},
{'sequence': '[CLS] one teh c siew dai, and one kopi o [SEP]',
'score': 0.21094971895217896,
'token': 1051,
'token_str': 'o'},
{'sequence': '[CLS] one teh c siew dai, and one kopi. [SEP]',
'score': 0.13027705252170563,
'token': 1012,
'token_str': '.'},
{'sequence': '[CLS] one teh c siew dai, and one kopi! [SEP]',
'score': 0.004680239595472813,
'token': 999,
'token_str': '!'},
{'sequence': '[CLS] one teh c siew dai, and one kopi w [SEP]',
'score': 0.002034128177911043,
'token': 1059,
'token_str': 'w'}]
>>> nlp("dont play [MASK] leh")
[{'sequence': '[CLS] dont play play leh [SEP]',
'score': 0.9281464219093323,
'token': 2377,
'token_str': 'play'},
{'sequence': '[CLS] dont play politics leh [SEP]',
'score': 0.010990909300744534,
'token': 4331,
'token_str': 'politics'},
{'sequence': '[CLS] dont play punk leh [SEP]',
'score': 0.005583590362221003,
'token': 7196,
'token_str': 'punk'},
{'sequence': '[CLS] dont play dirty leh [SEP]',
'score': 0.0025784350000321865,
'token': 6530,
'token_str': 'dirty'},
{'sequence': '[CLS] dont play cheat leh [SEP]',
'score': 0.0025066907983273268,
'token': 21910,
'token_str': 'cheat'}]
>>> nlp("catch no [MASK]")
[{'sequence': '[CLS] catch no ball [SEP]',
'score': 0.7922210693359375,
'token': 3608,
'token_str': 'ball'},
{'sequence': '[CLS] catch no balls [SEP]',
'score': 0.20503675937652588,
'token': 7395,
'token_str': 'balls'},
{'sequence': '[CLS] catch no tail [SEP]',
'score': 0.0006608376861549914,
'token': 5725,
'token_str': 'tail'},
{'sequence': '[CLS] catch no talent [SEP]',
'score': 0.0002158183924620971,
'token': 5848,
'token_str': 'talent'},
{'sequence': '[CLS] catch no prisoners [SEP]',
'score': 5.3481446229852736e-05,
'token': 5895,
'token_str': 'prisoners'}]
>>> nlp("confirm plus [MASK]")
[{'sequence': '[CLS] confirm plus chop [SEP]',
'score': 0.992355227470398,
'token': 24494,
'token_str': 'chop'},
{'sequence': '[CLS] confirm plus one [SEP]',
'score': 0.0037301010452210903,
'token': 2028,
'token_str': 'one'},
{'sequence': '[CLS] confirm plus minus [SEP]',
'score': 0.0014284878270700574,
'token': 15718,
'token_str': 'minus'},
{'sequence': '[CLS] confirm plus 1 [SEP]',
'score': 0.0011354683665558696,
'token': 1015,
'token_str': '1'},
{'sequence': '[CLS] confirm plus chopped [SEP]',
'score': 0.0003804611915256828,
'token': 24881,
'token_str': 'chopped'}]
>>> nlp("die [MASK] must try")
[{'sequence': '[CLS] die die must try [SEP]',
'score': 0.9552758932113647,
'token': 3280,
'token_str': 'die'},
{'sequence': '[CLS] die also must try [SEP]',
'score': 0.03644804656505585,
'token': 2036,
'token_str': 'also'},
{'sequence': '[CLS] die liao must try [SEP]',
'score': 0.003282855963334441,
'token': 727,
'token_str': 'liao'},
{'sequence': '[CLS] die already must try [SEP]',
'score': 0.0004937972989864647,
'token': 2525,
'token_str': 'already'},
{'sequence': '[CLS] die hard must try [SEP]',
'score': 0.0003659659414552152,
'token': 2524,
'token_str': 'hard'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('zanelim/singbert')
model = BertModel.from_pretrained("zanelim/singbert")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained("zanelim/singbert")
model = TFBertModel.from_pretrained("zanelim/singbert")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
#### Limitations and bias
This model was fine-tuned on a colloquial Singlish and Manglish corpus, hence it is best applied to downstream tasks involving the main
constituent languages: English, Mandarin and Malay. Also, as the training data is mainly from forums, beware of existing inherent bias.
## Training data
Colloquial Singlish and Manglish (both are a mixture of English, Mandarin, Tamil, Malay, and other local dialects like Hokkien, Cantonese or Teochew)
corpus. The corpus is collected from the subreddits `r/singapore` and `r/malaysia`, and forums such as `hardwarezone`.
## Training procedure
Initialized with [bert base uncased](https://github.com/google-research/bert#pre-trained-models) vocab and checkpoints (pre-trained weights).
The top 1000 custom vocab tokens (not overlapping with the original BERT vocab) were further extracted from the training data and filled into unused tokens in the original BERT vocab.
Pre-training was then continued on the training data with the following hyperparameters:
* train_batch_size: 512
* max_seq_length: 128
* num_train_steps: 300000
* num_warmup_steps: 5000
* learning_rate: 2e-5
* hardware: TPU v3-8
|
sgunderscore/hatescore-korean-hate-speech | 0061fe461f9d0227b44c346221779e6d00d8ec4d | 2022-04-07T10:32:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sgunderscore | null | sgunderscore/hatescore-korean-hate-speech | 130 | 1 | transformers | 4,208 | Entry not found |
MLRS/BERTu | 0eaba948f170df8e672a7863042bbcd85fd6ed2e | 2022-05-20T17:30:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"mt",
"dataset:MLRS/korpus_malti",
"arxiv:2205.10517",
"transformers",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | MLRS | null | MLRS/BERTu | 130 | null | transformers | 4,209 | ---
language:
- mt
datasets:
- MLRS/korpus_malti
model-index:
- name: BERTu
results:
- task:
type: dependency-parsing
name: Dependency Parsing
dataset:
type: universal_dependencies
args: mt_mudt
name: Maltese Universal Dependencies Treebank (MUDT)
metrics:
- type: uas
value: 92.31
name: Unlabelled Attachment Score
- type: las
value: 88.14
name: Labelled Attachment Score
- task:
type: part-of-speech-tagging
name: Part-of-Speech Tagging
dataset:
type: mlrs_pos
name: MLRS POS dataset
metrics:
- type: accuracy
value: 98.58
name: UPOS Accuracy
args: upos
- type: accuracy
value: 98.54
name: XPOS Accuracy
args: xpos
- task:
type: named-entity-recognition
name: Named Entity Recognition
dataset:
type: wikiann
name: WikiAnn (Maltese)
args: mt
metrics:
- type: f1
args: span
value: 86.77
name: Span-based F1
- task:
type: sentiment-analysis
name: Sentiment Analysis
dataset:
type: mt-sentiment-analysis
name: Maltese Sentiment Analysis Dataset
metrics:
- type: f1
args: macro
value: 78.96
name: Macro-averaged F1
license: cc-by-nc-sa-4.0
widget:
- text: "Malta hija gżira fil-[MASK]."
---
# BERTu
A Maltese monolingual model pre-trained from scratch on the Korpus Malti v4.0 using the BERT (base) architecture.
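A quick way to try the model is the fill-mask pipeline (a minimal sketch using the example from the widget above):
```python
from transformers import pipeline

# minimal fill-mask sketch using the widget example
unmasker = pipeline("fill-mask", model="MLRS/BERTu")
print(unmasker("Malta hija gżira fil-[MASK]."))
```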
## License
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/).
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
## Citation
This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://arxiv.org/abs/2205.10517).
Cite it as follows:
```bibtex
@inproceedings{BERTu,
title = {Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese},
author = {Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia},
booktitle = {Proceedings of the 3rd Workshop on Deep Learning for Low-Resource NLP (DeepLo 2022)},
day = {14},
month = {07},
year = {2022},
address = {Seattle, Washington},
publisher = {Association for Computational Linguistics},
}
```
|
valurank/t5-paraphraser | d1a85a021d0754baf15e703ffea8bca1897be5af | 2022-06-08T20:19:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"license:other",
"autotrain_compatible"
] | text2text-generation | false | valurank | null | valurank/t5-paraphraser | 130 | 1 | transformers | 4,210 | ---
language: en
license: other
---
## Model in Action 🚀
```python
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
def set_seed(seed):
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
set_seed(42)
model = T5ForConditionalGeneration.from_pretrained('valurank/t5-paraphraser')
tokenizer = T5Tokenizer.from_pretrained('valurank/t5-paraphraser')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print ("device ",device)
model = model.to(device)
sentence = "Which course should I take to get started in data science?"
# sentence = "What are the ingredients required to bake a perfect cake?"
# sentence = "What is the best possible approach to learn aeronautical engineering?"
# sentence = "Do apples taste better than oranges in general?"
text = "paraphrase: " + sentence + " </s>"
max_len = 256
encoding = tokenizer.encode_plus(text, max_length=max_len, padding="max_length", return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
# sample with top_k=120 and top_p=0.98, returning 10 candidate paraphrases
beam_outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
do_sample=True,
max_length=256,
top_k=120,
top_p=0.98,
early_stopping=True,
num_return_sequences=10
)
print ("\nOriginal Question ::")
print (sentence)
print ("\n")
print ("Paraphrased Questions :: ")
final_outputs =[]
for beam_output in beam_outputs:
sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True)
if sent.lower() != sentence.lower() and sent not in final_outputs:
final_outputs.append(sent)
for i, final_output in enumerate(final_outputs):
print("{}: {}".format(i, final_output))
```
## Output
```
Original Question ::
Which course should I take to get started in data science?
Paraphrased Questions ::
0: What should I learn to become a data scientist?
1: How do I get started with data science?
2: How would you start a data science career?
3: How can I start learning data science?
4: How do you get started in data science?
5: What's the best course for data science?
6: Which course should I start with for data science?
7: What courses should I follow to get started in data science?
8: What degree should be taken by a data scientist?
9: Which course should I follow to become a Data Scientist?
```
|
ismail-lucifer011/autotrain-job_all-903929564 | 3031df2e15619a494dde58715e51498a471799de | 2022-05-24T14:52:32.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:ismail-lucifer011/autotrain-data-job_all",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | ismail-lucifer011 | null | ismail-lucifer011/autotrain-job_all-903929564 | 130 | null | transformers | 4,211 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ismail-lucifer011/autotrain-data-job_all
co2_eq_emissions: 192.68222884611995
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 903929564
- CO2 Emissions (in grams): 192.68222884611995
## Validation Metrics
- Loss: 0.0036299973726272583
- Accuracy: 0.9989412009896035
- Precision: 0.9863310000901253
- Recall: 0.9885186672019269
- F1: 0.9874236219367322
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ismail-lucifer011/autotrain-job_all-903929564
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ismail-lucifer011/autotrain-job_all-903929564", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ismail-lucifer011/autotrain-job_all-903929564", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
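# one possible way to map the token logits back to entity labels
# (a sketch; the actual label names depend on the model's id2label config)
predicted_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[int(i)] for i in predicted_ids]
print(list(zip(tokens, labels)))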
``` |
lucataco/DialoGPT-medium-omar | db5451aea8eb53d092c761d44100fe21346ff96f | 2022-07-03T23:37:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lucataco | null | lucataco/DialoGPT-medium-omar | 130 | null | transformers | 4,212 | ---
tags:
- conversational
---
# Omar DialoGPT Medium Model 10
Trained on Discord channels:
half of the Dragalia chat |
Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization | b7987d12b17732987f9fef363fb9db200862cdbd | 2021-08-01T09:45:32.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"es",
"dataset:mlsum",
"transformers",
"summarization",
"news",
"autotrain_compatible"
] | summarization | false | Narrativa | null | Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization | 129 | 2 | transformers | 4,213 | ---
tags:
- summarization
- news
language: es
datasets:
- mlsum
widget:
- text: 'Al filo de las 22.00 horas del jueves, la Asamblea de Madrid vive un momento sorprendente: Vox decide no apoyar una propuesta del PP en favor del blindaje fiscal de la Comunidad. Se ha roto la unidad de los tres partidos de derechas. Es un hecho excepcional. Desde que arrancó la legislatura, PP, Cs y Vox han votado en bloque casi el 75% de las veces en el pleno de la Cámara. Juntos decidieron la composición de la Mesa de la Asamblea. Juntos invistieron presidenta a Isabel Díaz Ayuso. Y juntos han votado la mayoría de proposiciones no de ley, incluida la que ha marcado el esprint final de la campaña para las elecciones generales: acaban de instar al Gobierno de España a "la ilegalización inmediata" de los partidos separatistas "que atenten contra la unidad de la Nación". Los críticos de Cs no comparten el apoyo al texto de Vox contra el secesionisimo Ese balance retrata una necesidad antes que una complicidad, según fuentes del PP con predicamento en la dirección regional y nacional. Tras casi 15 años gobernando con mayoría absoluta, la formación conservadora vivió como una tortura la pasada legislatura, en la que dependió de Cs para sacar adelante sus iniciativas. El problema se agudizó tras las elecciones autonómicas de mayo. El PP ha tenido que formar con Cs el primer gobierno de coalición de la historia de la región, y ni siquiera con eso le basta para ganar las votaciones de la Cámara. Los dos socios gubernamentales necesitan a Vox, la menos predecible de las tres formaciones. "Tenemos que trabajar juntos defendiendo la unidad del país, por eso no quisimos dejar a Vox solo", dijo ayer Díaz Ayuso para justificar el apoyo de PP y Cs a la proposición de la extrema derecha sobre Cataluña. "Después nosotros llevábamos otra proposición para defender el blindaje fiscal de Madrid, y ahí Vox nos dejó atrás. No permitió que esto saliera. Es un grave error por su parte", prosiguió, recalcando el enfado del PP. "Demuestra que está más en cuestiones electoralistas", subrayó. "Los que pensamos, con nuestras inmensas diferencias, que tenemos cosas en común que nos unen como partidos que queremos Comunidades libres, con bajos impuestos, en las que se viva con seguridad y en paz, tenemos que estar unidos", argumentó. "Y por lo menos nosotros de nuestra línea no nos separamos". Al contrario de lo que está ocurriendo el Ayuntamiento de Madrid, donde el PP y Cs ya han defendido posiciones de voto distintas, pese a compartir el Gobierno, en la Asamblea los partidos de Díaz Ayuso e Ignacio Aguado están actuando con la máxima lealtad en las votaciones del pleno. Otra cosa son las comisiones. Y el caso Avalmadrid. Es en ese terreno donde Cs y Vox están buscando el margen de maniobra necesario para separarse del PP en plena campaña electoral, abandonando a su suerte a su socio para distinguirse ante los electores. —"Usted me ha dejado tirada", le espetó la presidenta de la Comunidad de Madrid a Rocío Monasterio tras saber que Vox permitiría que la izquierda tuviera mayoría en la comisión parlamentaria que investigará los avales concedidos por la empresa semipública entre 2007 y 2018, lo que podría incluir el de 400.000 euros aprobado en 2011, y nunca devuelto al completo, para una empresa participada por el padre de Isabel Díaz Ayuso. "Monasterio no es de fiar. 
Dice una cosa y hace la contraria", dice una fuente popular sobre las negociaciones mantenidas para repartirse los puestos de las diferentes comisiones, que Vox no cumplió tras buscar un segundo pacto con otras formaciones (que no llegó a buen puerto). Ilegalización de Vox Los tres partidos de derechas también se han enfrentado por la ubicación de Vox en el pleno. Las largas negociaciones para la investidura de Díaz Ayuso dejaron heridas abiertas. Y los diputados de Cs no desaprovechan la oportunidad de lanzar dardos contra los de Vox, pero luego coinciden con ellos en la mayoría de votaciones. Ocurrió, por ejemplo, el jueves, cuando se debatía la polémica proposición para instar al Gobierno nacional a ilegalizar a los partidos separatistas que atenten contra la unidad de España. —"Mostrar nuestra sorpresa ante la presentación por parte de Vox de esta propuesta", lanzó Araceli Gómez, diputada de la formación de Aguado. "Sorprende que planteen ustedes este asunto cuando está también sobre la mesa el debate de su propia ilegalización por atentar contra el ordenamiento jurídico o contra valores constitucionales como la igualdad o la no discriminación". Luego de esa descalificación, y ante la incredulidad de los diputados de los partidos de izquierdas, Cs unió sus votos a los de Vox y a los del PP. La decisión ha provocado polémica interna, como demuestra que Albert Rivera no la apoyara ayer explícitamente. Tampoco ha sido bien acogida por el sector crítico de la formación. Pero ha demostrado una cosa: en Madrid hay tres partidos que casi siempre votan como uno.'
---
# Spanish RoBERTa2RoBERTa (roberta-base-bne) fine-tuned on MLSUM ES for summarization
## Model
[BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) (RoBERTa Checkpoint)
## Dataset
**MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, **Spanish**, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
[MLSUM es](https://huggingface.co/datasets/viewer/?dataset=mlsum)
## Results
|Set|Metric| Value|
|----|------|------|
| Test |Rouge2 - mid -precision | 11.42|
| Test | Rouge2 - mid - recall | 10.58 |
| Test | Rouge2 - mid - fmeasure | 10.69|
| Test | Rouge1 - fmeasure | 28.83 |
| Test | RougeL - fmeasure | 23.15 |
Raw metrics using HF/metrics `rouge`:
```python
rouge = datasets.load_metric("rouge")
rouge.compute(predictions=results["pred_summary"], references=results["summary"])
{'rouge1': AggregateScore(low=Score(precision=0.30393366820245, recall=0.27905239591639935, fmeasure=0.283148902808752), mid=Score(precision=0.3068521142101569, recall=0.2817252494122592, fmeasure=0.28560373425206464), high=Score(precision=0.30972608774202665, recall=0.28458152325781716, fmeasure=0.2883786700591887)),
'rougeL': AggregateScore(low=Score(precision=0.24184668819794716, recall=0.22401171380621518, fmeasure=0.22624104698839514), mid=Score(precision=0.24470388406868163, recall=0.22665793214539162, fmeasure=0.2289118878817394), high=Score(precision=0.2476594458951327, recall=0.22932683203591905, fmeasure=0.23153001570662513))}
rouge.compute(predictions=results["pred_summary"], references=results["summary"], rouge_types=["rouge2"])["rouge2"].mid
Score(precision=0.11423200347113865, recall=0.10588038944902506, fmeasure=0.1069921217219595)
```
## Usage
```python
import torch
from transformers import RobertaTokenizerFast, EncoderDecoderModel
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization'
tokenizer = RobertaTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)
def generate_summary(text):
inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Your text here..."
generate_summary(text)
```
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
|
TransQuest/monotransquest-hter-en_zh-wiki | 799e174cc584ee244400fdc1b62505b69a2b91f9 | 2021-06-03T18:58:10.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-zh",
"transformers",
"Quality Estimation",
"monotransquest",
"hter",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-hter-en_zh-wiki | 129 | null | transformers | 4,214 | ---
language: en-zh
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_zh-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
# the input is a list of [source, translation] pairs (for this en-zh model: English source, Chinese translation)
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
## Table of Contents
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
adamlin/bert-distil-chinese | 97eeca4e63fb1eb0ffcf04ad3562c745520d7511 | 2022-05-31T10:03:03.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adamlin | null | adamlin/bert-distil-chinese | 129 | null | transformers | 4,215 | Entry not found |
google/t5-efficient-large-nl36 | bcc80281b68e8499034cf16b86843d6707d37cdb | 2022-02-15T10:49:22.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-large-nl36 | 129 | 4 | transformers | 4,216 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-LARGE-NL36 (Deep-Narrow version)
T5-Efficient-LARGE-NL36 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-large-nl36** - is of model type **Large** with the following variations:
- **nl** is **36**
It has **1090.14** million parameters and thus requires *ca.* **4360.54 MB** of memory in full precision (*fp32*)
or **2180.27 MB** of memory in half precision (*fp16* or *bf16*).
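As a rough sanity check, these memory figures follow directly from the parameter count (assuming 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16):
```python
params = 1090.14e6  # parameters of t5-efficient-large-nl36

print(params * 4 / 1e6)  # ≈ 4360 MB in full precision (4 bytes per parameter)
print(params * 2 / 1e6)  # ≈ 2180 MB in half precision (2 bytes per parameter)
```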
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
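Whichever example you follow, fine-tuning starts by loading this checkpoint (a minimal sketch; it assumes the repository ships the standard T5 tokenizer files):
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# pretrained-only checkpoint: it must be fine-tuned before practical use
tokenizer = T5TokenizerFast.from_pretrained("google/t5-efficient-large-nl36")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-large-nl36")
```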
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
junnyu/wobert_chinese_base | da75ef6e8fa702e8220bd1ccd10efc632f8b02f4 | 2021-07-06T05:04:11.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"zh",
"transformers",
"wobert",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/wobert_chinese_base | 129 | 1 | transformers | 4,217 | ---
language: zh
tags:
- wobert
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/WoBERT
### PyTorch version
https://github.com/JunnYu/WoBERT_pytorch
## Installation (mainly needed for WoBertTokenizer)
Note: transformers version >= 4.7.0 is required.
The implementation of WoBertTokenizer is identical to RoFormerTokenizer, so RoFormerTokenizer can be used in its place.
## Usage
```python
import torch
from transformers import BertForMaskedLM as WoBertForMaskedLM
from transformers import RoFormerTokenizer as WoBertTokenizer
pretrained_model_or_path_list = [
"junnyu/wobert_chinese_plus_base", "junnyu/wobert_chinese_base"
]
for path in pretrained_model_or_path_list:
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = WoBertTokenizer.from_pretrained(path)
model = WoBertForMaskedLM.from_pretrained(path)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits[0]
outputs_sentence = ""
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(outputs[i].topk(k=5)[1])
outputs_sentence += "[" + "||".join(tokens) + "]"
else:
outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id],
skip_special_tokens=True))
print(outputs_sentence)
# RoFormer 今天[天气||天||心情||阳光||空气]很好,我[想||要||打算||准备||喜欢]去公园玩。
# PLUS WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||打算||准备||就]去公园玩。
# WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||就||准备||也]去公园玩。
```
## Citation
Bibtex:
```tex
@techreport{zhuiyiwobert,
title={WoBERT: Word-based Chinese BERT model - ZhuiyiAI},
author={Jianlin Su},
year={2020},
url="https://github.com/ZhuiyiTechnology/WoBERT",
}
``` |
kuzgunlar/electra-turkish-sentiment-analysis | 8efa8b28569f413b5f3e85cf7cd7faf14c1f4e22 | 2020-08-16T13:05:57.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | kuzgunlar | null | kuzgunlar/electra-turkish-sentiment-analysis | 129 | 1 | transformers | 4,218 | Entry not found |
shahrukhx01/schema-aware-denoising-bart-large-cnn-text2sql | 71c954ece79b4a55ec5800d17c3b3855cdb1ad69 | 2021-08-21T08:43:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"transformers",
"schema-aware-text2sql",
"text2sql",
"wikisql",
"autotrain_compatible"
] | text2text-generation | false | shahrukhx01 | null | shahrukhx01/schema-aware-denoising-bart-large-cnn-text2sql | 129 | 1 | transformers | 4,219 | ---
language: "en"
tags:
- schema-aware-text2sql
- text2sql
- wikisql
widget:
- text: "What is terrence ross' nationality? </s> <col0> Player : text <col1> No. : text <col2> Nationality : text <col3> Position : text <col4> Years in Toronto : text <col5> School/Club Team : text"
---
```python
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
model = BartForConditionalGeneration.from_pretrained('shahrukhx01/schema-aware-denoising-bart-large-cnn-text2sql')
tokenizer = BartTokenizer.from_pretrained('shahrukhx01/schema-aware-denoising-bart-large-cnn-text2sql')
## add NL query with table schema
question = "What is terrence ross' nationality? </s> <col0> Player : text <col1> No. : text <col2> Nationality : text <col3> Position : text <col4> Years in Toronto : text <col5> School/Club Team : text"
inputs = tokenizer([question], max_length=1024, return_tensors='pt')
# Generate SQL
text_query_ids = model.generate(inputs['input_ids'], num_beams=4, min_length=0, max_length=125, early_stopping=True)
prediction = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in text_query_ids][0]
print(prediction)
``` |
superb/hubert-base-superb-ic | 34e16bfd56b5c4d3965d9d83b6d83f9116ddf9d8 | 2021-09-06T12:11:28.000Z | [
"pytorch",
"hubert",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/hubert-base-superb-ic | 129 | null | transformers | 4,220 | ---
language: en
datasets:
- superb
tags:
- speech
- audio-classification
- hubert
license: apache-2.0
---
# Hubert-Base for Intent Classification
## Model description
This is a ported version of [S3PRL's Hubert for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands).
The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz
sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
speakers. SUPERB uses the
[Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/)
dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands).
## Usage examples
You can use the model directly like so:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ic", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ic")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-ic")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
action_ids = torch.argmax(logits[:, :6], dim=-1).tolist()
action_labels = [model.config.id2label[_id] for _id in action_ids]
object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist()
object_labels = [model.config.id2label[_id + 6] for _id in object_ids]
location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist()
location_labels = [model.config.id2label[_id + 20] for _id in location_ids]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9834` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
billfrench/cyberlandr-door | 7e6577d4899e94c2eb50677e99f07a0fe46abe50 | 2022-03-06T23:07:03.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | billfrench | null | billfrench/cyberlandr-door | 129 | null | transformers | 4,221 | Entry not found |
nielsr/swin-tiny-patch4-window7-224-finetuned-eurosat | 2164338db59d40004286bc65800bfa50561ecd3d | 2022-04-12T09:09:48.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nielsr | null | nielsr/swin-tiny-patch4-window7-224-finetuned-eurosat | 129 | null | transformers | 4,222 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9744444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0664
- Accuracy: 0.9744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
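For reference, a sketch of how these values map onto `TrainingArguments` (an assumption based on the standard Trainer API; model and dataset setup are omitted):
```python
from transformers import TrainingArguments

# mirrors the hyperparameters listed above; total train batch size 128 = 32 * 4 accumulation steps
args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```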
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2621 | 1.0 | 190 | 0.1083 | 0.9630 |
| 0.1769 | 2.0 | 380 | 0.1425 | 0.95 |
| 0.1343 | 3.0 | 570 | 0.0664 | 0.9744 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Souvikcmsa/SentimentAnalysisDistillBERT | f4f1eed70d9b2446020173ee7ae4b05c68048cbd | 2022-04-20T09:05:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Souvikcmsa/autotrain-data-sentiment_analysis",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Souvikcmsa | null | Souvikcmsa/SentimentAnalysisDistillBERT | 129 | null | transformers | 4,223 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Souvikcmsa/autotrain-data-sentiment_analysis
co2_eq_emissions: 0.015536746909294205
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 762923432
- CO2 Emissions (in grams): 0.015536746909294205
## Validation Metrics
- Loss: 0.49825894832611084
- Accuracy: 0.7962895598399418
- Macro F1: 0.7997458031044901
- Micro F1: 0.7962895598399418
- Weighted F1: 0.796365325858282
- Macro Precision: 0.7995724418486833
- Micro Precision: 0.7962895598399418
- Weighted Precision: 0.7965384250324863
- Macro Recall: 0.8000290112564951
- Micro Recall: 0.7962895598399418
- Weighted Recall: 0.7962895598399418
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentiment_analysis-762923432
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923432", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923432", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
derwahnsinn/gpt2-mediumMetallica | bb838a69b0517b4df846f0c46adf0db8e4ae94b7 | 2022-07-28T18:03:25.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | derwahnsinn | null | derwahnsinn/gpt2-mediumMetallica | 129 | null | transformers | 4,224 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-mediumMetallica
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-mediumMetallica
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a custom dataset of Metallica lyrics (see below).
It achieves the following results on the evaluation set:
- Loss: 1.6964
## Model description
GPT2-medium trained on a custom dataset of Metallica lyrics
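A minimal generation sketch (the prompt and sampling settings below are illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="derwahnsinn/gpt2-mediumMetallica")
print(generator("Darkness imprisoning me", max_length=40, do_sample=True, top_k=50)[0]["generated_text"])
```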
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 11 | 1.6964 |
| No log | 2.0 | 22 | 1.6964 |
| No log | 3.0 | 33 | 1.6964 |
| No log | 4.0 | 44 | 1.6964 |
| No log | 5.0 | 55 | 1.6964 |
| No log | 6.0 | 66 | 1.6964 |
| No log | 7.0 | 77 | 1.6964 |
| No log | 8.0 | 88 | 1.6964 |
| No log | 9.0 | 99 | 1.6964 |
| No log | 10.0 | 110 | 1.6964 |
| No log | 11.0 | 121 | 1.6964 |
| No log | 12.0 | 132 | 1.6964 |
| No log | 13.0 | 143 | 1.6964 |
| No log | 14.0 | 154 | 1.6964 |
| No log | 15.0 | 165 | 1.6964 |
| No log | 16.0 | 176 | 1.6964 |
| No log | 17.0 | 187 | 1.6964 |
| No log | 18.0 | 198 | 1.6964 |
| No log | 19.0 | 209 | 1.6964 |
| No log | 20.0 | 220 | 1.6964 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AriakimTaiyo/DialoGPT-cultured-Kumiko | d70e8c0fc60c7e75ed7bb60386fe020c88773950 | 2022-02-08T14:19:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AriakimTaiyo | null | AriakimTaiyo/DialoGPT-cultured-Kumiko | 128 | null | transformers | 4,225 | ---
tags:
- conversational
---
# Cultured Kumiko DialoGPT Model |
AryanLala/autonlp-Scientific_Title_Generator-34558227 | faca55915244bc0f4360bbfd3dcfd2741aa7c99c | 2021-11-23T16:51:34.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:AryanLala/autonlp-data-Scientific_Title_Generator",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | AryanLala | null | AryanLala/autonlp-Scientific_Title_Generator-34558227 | 128 | 17 | transformers | 4,226 | ---
tags: autonlp
language: en
widget:
- text: "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets."
datasets:
- AryanLala/autonlp-data-Scientific_Title_Generator
co2_eq_emissions: 137.60574081887984
---
# Model Trained Using AutoNLP
- Model: Google's Pegasus (https://huggingface.co/google/pegasus-xsum)
- Problem type: Summarization
- Model ID: 34558227
- CO2 Emissions (in grams): 137.60574081887984
- Spaces: https://huggingface.co/spaces/TitleGenerators/ArxivTitleGenerator
- Dataset: arXiv Dataset (https://www.kaggle.com/Cornell-University/arxiv)
- Data subset used: https://huggingface.co/datasets/AryanLala/autonlp-data-Scientific_Title_Generator
## Validation Metrics
- Loss: 2.578599214553833
- Rouge1: 44.8482
- Rouge2: 24.4052
- RougeL: 40.1716
- RougeLsum: 40.1396
- Gen Len: 11.4675
## Social
- LinkedIn: https://www.linkedin.com/in/aryanlala/
- Twitter: https://twitter.com/AryanLala20
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/AryanLala/autonlp-Scientific_Title_Generator-34558227
``` |
DeepChem/ChemBERTa-10M-MTR | 53d08e205841e15b77483b42aac0c55bdb1d1822 | 2022-01-20T17:51:35.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | DeepChem | null | DeepChem/ChemBERTa-10M-MTR | 128 | null | transformers | 4,227 | Entry not found |
Helsinki-NLP/opus-mt-en-gaa | 2f75e3d8bc190f8e0e412beecf00e564c40e33c4 | 2021-09-09T21:35:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"gaa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gaa | 128 | null | transformers | 4,228 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-gaa
* source languages: en
* target languages: gaa
* OPUS readme: [en-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.eval.txt)
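A minimal usage sketch with the standard translation pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-gaa")
print(translator("How are you today?")[0]["translation_text"])
```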
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.gaa | 39.9 | 0.593 |
|
Ivo/emscad-skill-extraction | e129b03f0e913f12e3f0583216199a990a5b7968 | 2021-06-09T12:15:44.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Ivo | null | Ivo/emscad-skill-extraction | 128 | null | transformers | 4,229 | Entry not found |
KoichiYasuoka/roberta-base-japanese-luw-upos | a3abefe6faa80f17dd82550ec38b488d401e31e3 | 2022-05-24T06:29:59.000Z | [
"pytorch",
"roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-japanese-luw-upos | 128 | null | transformers | 4,230 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# roberta-base-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
anton-l/gpt-j-tiny-random | c104fcf6d66c324e1348281ab29c01a0423f272d | 2021-09-22T19:44:41.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | anton-l | null | anton-l/gpt-j-tiny-random | 128 | 1 | transformers | 4,231 | Entry not found |
chiayewken/aspect-sentiment-pretrain | aad3d378ab00b8647796634649084084630d0bf9 | 2021-05-19T14:03:18.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | chiayewken | null | chiayewken/aspect-sentiment-pretrain | 128 | null | transformers | 4,232 | Entry not found |
meedan/indian-xlm-r | df5f9a82da83c8ff832ba17dc6b7979206d6feed | 2021-02-22T22:37:11.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | meedan | null | meedan/indian-xlm-r | 128 | null | transformers | 4,233 | Entry not found |
uclanlp/plbart-large | d296275f5a15b9b971dec79d06410852f0c8635d | 2021-11-23T18:09:20.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-large | 128 | 1 | transformers | 4,234 | Entry not found |
lightonai/RITA_s | fced662eadd2b7099a3b92a88365dfc3c98eb3da | 2022-05-19T08:22:49.000Z | [
"pytorch",
"rita",
"text-generation",
"protein",
"dataset:uniref-100",
"arxiv:2205.05789",
"transformers"
] | text-generation | false | lightonai | null | lightonai/RITA_s | 128 | 2 | transformers | 4,235 | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-S
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | ---
[**Small**](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_s", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_s")
```
for generation we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```bibtex
@article{hesslow2022rita,
  title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
  author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
  journal={arXiv preprint arXiv:2205.05789},
  year={2022}
}
```
|
conan1024hao/cjkbert-small | 8d693bb10d381bd6682c968f16131fd7428ec648 | 2022-05-14T10:18:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"ja",
"zh",
"ko",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | conan1024hao | null | conan1024hao/cjkbert-small | 128 | 2 | transformers | 4,236 | ---
language:
- ja
- zh
- ko
license: cc-by-sa-4.0
datasets:
- wikipedia
mask_token: "[MASK]"
widget:
- text: "早稲田大学で自然言語処理を[MASK]ぶ。"
- text: "李白是[MASK]朝人。"
- text: "불고기[MASK] 먹겠습니다."
---
### Model description
- This model was trained on the Chinese (**ZH**), Japanese (**JA**), and Korean (**KO**) Wikipedia dumps (5 epochs).
### How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("conan1024hao/cjkbert-small")
model = AutoModelForMaskedLM.from_pretrained("conan1024hao/cjkbert-small")
```
- You don't need any text segmentation before fine-tuning on downstream tasks.
- (Though you may obtain better results if you apply morphological analysis to the data before fine-tuning.)
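For a quick sanity check, a fill-mask sketch using one of the widget examples above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="conan1024hao/cjkbert-small")
for candidate in fill_mask("早稲田大学で自然言語処理を[MASK]ぶ。"):
    print(candidate["token_str"], candidate["score"])
```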
### Morphological analysis tools
- ZH: For Chinese, we use [LTP](https://github.com/HIT-SCIR/ltp).
- JA: For Japanese, we use [Juman++](https://github.com/ku-nlp/jumanpp).
- KO: For Korean, we use [KoNLPy](https://github.com/konlpy/konlpy)(Kkma class).
### Tokenization
- We use character-based tokenization with **whole-word-masking** strategy.
### Model size
- vocab_size: 15015
- num_hidden_layers: 4
- hidden_size: 512
- num_attention_heads: 8
- param_num: 25M |
DLochmelis33/22s-dl-sentiment-1 | 493f9f6c924aca0aae0b33ef08a1ac5ac89b71dd | 2022-06-15T01:07:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:yelp_review_full",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | DLochmelis33 | null | DLochmelis33/22s-dl-sentiment-1 | 128 | null | transformers | 4,237 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: 22s-dl-sentiment-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9542333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 22s-dl-sentiment-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2574
- Accuracy: 0.9542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-japanese-aozora-ud-head | d8fb76c08d076a07dcfe09aa2e6c3b72fd1bce18 | 2022-07-23T14:43:38.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-aozora-ud-head | 128 | null | transformers | 4,238 | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-base-japanese-aozora-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on 青空文庫 for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
question="国語"
context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
|
mcsabai/huBert-fine-tuned-hungarian-squadv2 | 710fe207e20c222e76c5f72f779aaa939bfa6698 | 2022-07-26T18:35:08.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"hu",
"transformers",
"autotrain_compatible"
] | question-answering | false | mcsabai | null | mcsabai/huBert-fine-tuned-hungarian-squadv2 | 128 | null | transformers | 4,239 | ---
language: hu
thumbnail:
tags:
- question-answering
- bert
widget:
- text: "Melyik folyó szeli ketté Budapestet?"
context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban."
- text: "Mivel juthatunk fel az Óvárosba?"
context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban."
---
## MODEL DESCRIPTION
huBERT base model (cased) fine-tuned on SQuADv2 (NEW!)
- huBert model + Tokenizer: https://huggingface.co/SZTAKI-HLT/hubert-base-cc
- Hungarian SQUADv2 dataset: Machine Translated SQuAD dataset (Google Translate API)
<p> <i> "SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.[1]" </i> </p>
## Model in action
- Fast usage with pipelines:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mcsabai/huBert-fine-tuned-hungarian-squadv2",
tokenizer="mcsabai/huBert-fine-tuned-hungarian-squadv2",
topk = 1,
handle_impossible_answer = True
)
predictions = qa_pipeline({
'context': "Máté vagyok és Budapesten élek már több mint 4 éve.",
'question': "Hol lakik Máté?"
})
print(predictions)
# output:
# {'score': 0.9892364144325256, 'start': 16, 'end': 26, 'answer': 'Budapesten'}
```
Two important parameters:
- <p> <b> topk </b> (int, optional, defaults to 1) — The number of answers to return (chosen by order of likelihood). Note that fewer than topk answers are returned if there are not enough options available within the context. </p>
- <p> <b> handle_impossible_answer </b> (bool, optional, defaults to False): Whether or not we accept impossible as an answer (illustrated in the sketch below). </p>
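A small sketch of `handle_impossible_answer` in action, continuing from the pipeline above (the unanswerable question is made up for illustration):
```python
predictions = qa_pipeline({
    'context': "Máté vagyok és Budapesten élek már több mint 4 éve.",
    'question': "Mi Máté kedvenc étele?"  # not answerable from the context
})
print(predictions)
# with handle_impossible_answer=True the pipeline can return an empty answer string with its score
```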
[1] https://rajpurkar.github.io/SQuAD-explorer/ |
balamurugan1603/bert-finetuned-ner | fc2a68631adeddf441d619f3e4ab6824ffc3789a | 2021-11-25T17:00:00.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | balamurugan1603 | null | balamurugan1603/bert-finetuned-ner | 127 | null | transformers | 4,240 | # Named Entity Recognition using Transformers
This is a fine-tuned version of BERT, built with Hugging Face Transformers, that performs Named Entity Recognition on text data. BERT is a state-of-the-art attention-based model trained with masked-language-modeling and next-sentence-prediction objectives. It is used for a variety of tasks, including question answering and text summarization, and it also performs token classification tasks such as NER with strong results.
# Dataset
**CoNLL-2003** :
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.<br><br>
**Link** : https://huggingface.co/datasets/conll2003
# Using this fine-tuned version
From Python, download the whole pipeline and use it directly with the following code:
```
from transformers import pipeline
# Loading the pipeline from hub
# Pipeline handles the preprocessing and post processing steps
model_checkpoint = "balamurugan1603/bert-finetuned-ner"
namedEntityRecogniser = pipeline(
"token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
```
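Once loaded, the pipeline can be called directly on raw text (a small sketch; the sentence is illustrative):
```python
# each aggregated entity is a dict with entity_group, score, word, start and end
entities = namedEntityRecogniser("Barack Obama was born in Hawaii and worked in Washington.")
for ent in entities:
    print(f"{ent['entity_group']}: {ent['word']} ({ent['score']:.3f})")
```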
Reference for using this pipeline to find NER tags can be found in this <a href="https://github.com/balamurugan1603/Named-Entity-Recognition-using-Tranformers/blob/main/named-entity-recognition-using-transfer-learning.ipynb">notebook</a>.
|
castorini/duot5-base-msmarco | 001e9dc78a3129f95184727a336d484b03956006 | 2021-12-07T12:53:29.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:2101.05667",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/duot5-base-msmarco | 127 | null | transformers | 4,241 | This model is a T5-base pairwise reranker fine-tuned on MS MARCO passage dataset for 50k steps (or 5 epochs).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai).
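For illustration, a minimal pairwise-scoring sketch (the input template follows the Expando-Mono-Duo paper and is an assumption here; the maintained implementation lives in pygaggle):
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("castorini/duot5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/duot5-base-msmarco")

query = "what causes tides"
doc0 = "Tides are caused by the gravitational pull of the moon and the sun."
doc1 = "The stock market closed higher on Friday."

text = f"Query: {query} Document0: {doc0} Document1: {doc1} Relevant:"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model.generate(**inputs, max_length=2, return_dict_in_generate=True, output_scores=True)

# probability that doc0 is more relevant than doc1, from the "true"/"false" token logits
true_id, false_id = tokenizer.encode("true")[0], tokenizer.encode("false")[0]
p_doc0 = torch.softmax(out.scores[0][0, [true_id, false_id]], dim=0)[0].item()
print(p_doc0)
```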
Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/pdf/2101.05667.pdf) |
gustavecortal/fr-boris-8bit | 5a2bb943a6200d3b77faf364edb578ec751d4dd8 | 2022-03-04T10:33:04.000Z | [
"pytorch",
"gptj",
"text-generation",
"fr",
"dataset:c4",
"dataset:The Pile",
"transformers",
"causal-lm",
"license:mit"
] | text-generation | false | gustavecortal | null | gustavecortal/fr-boris-8bit | 127 | 6 | transformers | 4,242 | ---
language: fr
license: mit
tags:
- causal-lm
- fr
datasets:
- c4
- The Pile
---
### Quantized Cedille/fr-boris with 8-bit weights
This is a version of Cedille's GPT-J (fr-boris) with 6 billion parameters that is modified so you can generate **and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
Here's how to run it: [Colab notebook](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)
This model can be easily loaded using the `GPTJForCausalLM` functionality:
```python
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("gustavecortal/fr-boris-8bit")
```
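A short generation sketch continuing from the snippet above (the tokenizer is assumed to come from the original Cedille/fr-boris repository, and the prompt and sampling settings are illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")  # assumption: reuse the original fr-boris tokenizer
prompt = "La Seine traverse"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=40, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```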
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset.
## Links
* [Cedille](https://en.cedille.ai/)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal) |
martin-ha/toxic-comment-model | 9842c08b35a4687e7b211187d676986c8c96256d | 2022-05-06T02:24:31.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers"
] | text-classification | false | martin-ha | null | martin-ha/toxic-comment-model | 127 | null | transformers | 4,243 | ---
language: en
---
## Model description
This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) to classify toxic comments.
## How to use
You can use the model with the following code.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
```
## Limitations and Bias
This model is intended to be used for classifying toxic online comments. However, one limitation of the model is that it performs poorly on some comments that mention a specific identity subgroup, such as Muslim. The following table shows evaluation scores for different identity groups. You can learn the specific meaning of these metrics [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation). In short, the metrics show how well the model performs for a specific group; the larger the number, the better.
| **subgroup** | **subgroup_size** | **subgroup_auc** | **bpsn_auc** | **bnsp_auc** |
| ----------------------------- | ----------------- | ---------------- | ------------ | ------------ |
| muslim | 108 | 0.689 | 0.811 | 0.88 |
| jewish | 40 | 0.749 | 0.86 | 0.825 |
| homosexual_gay_or_lesbian | 56 | 0.795 | 0.706 | 0.972 |
| black | 84 | 0.866 | 0.758 | 0.975 |
| white | 112 | 0.876 | 0.784 | 0.97 |
| female | 306 | 0.898 | 0.887 | 0.948 |
| christian | 231 | 0.904 | 0.917 | 0.93 |
| male | 225 | 0.922 | 0.862 | 0.967 |
| psychiatric_or_mental_illness | 26 | 0.924 | 0.907 | 0.95 |
The table above shows that the model performs poorly for the Muslim and Jewish groups. In fact, if you pass the sentence "Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion." into the model, it will be classified as toxic. Be mindful of this type of potential bias.
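For reference, a sketch of how a per-identity subgroup AUC of this kind can be computed (toy, made-up labels and scores; see the linked Kaggle page for the exact definitions of the bpsn/bnsp variants):
```python
from sklearn.metrics import roc_auc_score

# toy example: gold toxicity labels and model scores for comments mentioning one identity subgroup
subgroup_labels = [0, 0, 1, 1, 0, 1]
subgroup_scores = [0.20, 0.70, 0.85, 0.90, 0.10, 0.40]
print("subgroup AUC:", roc_auc_score(subgroup_labels, subgroup_scores))
```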
## Training data
The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 10% of the `train.csv` data to train the model.
## Training procedure
You can see [this documentation and code](https://github.com/MSIA/wenyang_pan_nlp_project_2021) for how we trained the model. Training takes about 3 hours on a P100 GPU.
## Evaluation results
The model achieves 94% accuracy and a 0.59 F1-score on a 10,000-row held-out test set.
microsoft/swin-large-patch4-window12-384 | 9ac08d2bb52910aa24b5899a09fc8f8a939eb47f | 2022-05-16T18:08:30.000Z | [
"pytorch",
"tf",
"swin",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/swin-large-patch4-window12-384 | 127 | null | transformers | 4,244 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window12-384")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window12-384")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/tapex-large-sql-execution | a65006afd78ef4876fbe8e5372a14b9236fb1f8a | 2022-05-17T08:28:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:mit",
"autotrain_compatible"
] | table-question-answering | false | microsoft | null | microsoft/tapex-large-sql-execution | 127 | null | transformers | 4,245 | ---
language: en
tags:
- tapex
- table-question-answering
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
## Intended Uses
You can use the raw model for simulating neural SQL execution, i.e., employ TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently TAPEX can be fine-tuned to tackle table question answering tasks and table fact verification tasks. See the [model hub](https://huggingface.co/models?search=tapex) to look for fine-tuned versions on a task that interests you.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-sql-execution")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large-sql-execution")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "select year where city = beijing"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['2008']
```
### How to Fine-tuning
⚠️ This model checkpoint is **ONLY** used for simulating neural SQL execution (i.e., employ TAPEX to execute a SQL query on a given table), and you **CANNOT** use this model for fine-tuning on downstream tasks. The one that can be used for fine-tuning is at [here](https://huggingface.co/microsoft/tapex-large).
> This separation into two models for the two intended uses is due to a known issue in BART large; we recommend readers see [this comment](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564) for more details.
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
KoichiYasuoka/bert-base-russian-upos | 8716ad92870ae086852766f416d0efe6300a7365 | 2022-03-13T07:32:53.000Z | [
"pytorch",
"bert",
"token-classification",
"ru",
"dataset:universal_dependencies",
"transformers",
"russian",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-russian-upos | 127 | null | transformers | 4,246 | ---
language:
- "ru"
tags:
- "russian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# bert-base-russian-upos
## Model Description
This is a BERT model pre-trained with [UD_Russian](https://universaldependencies.org/ru/) for POS-tagging and dependency-parsing, derived from [rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-russian-upos")
```
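Either way, tagging a sentence then looks like this (a minimal sketch; the aggregation strategy mirrors the author's other UPOS models):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
nlp = TokenClassificationPipeline(model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print([(t["word"], t["entity_group"]) for t in nlp("Мы любим обработку естественного языка.")])
```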
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
nlp-waseda/gpt2-small-japanese | 2d14ed0652612483b331291b7b25731e5cd70d76 | 2022-03-30T04:28:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0"
] | text-generation | false | nlp-waseda | null | nlp-waseda/gpt2-small-japanese | 127 | null | transformers | 4,247 | ---
language:
- ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "早稲田 大学 で 自然 言語 処理 を"
---
# nlp-waseda/gpt2-small-japanese
This model is Japanese GPT-2 pretrained on Japanese Wikipedia and CC-100.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task.
Note that the texts should be segmented into words using Juman++ in advance.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese')
>>> set_seed(42)
>>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5)
[{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 、 早稲田 大学 理工 学部 に 入学 し ます 。 卒業 後 、 早稲田 大学 工学 研究 科 、'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 アメリカ の 大学 で 学士 号 を 取得 、 修士 の 取得 で 博士 号 を 取得 。 2008 年'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 勉強 して い ます 。 学部 は 日本 語 学科 を 専攻 して い ます 。 英語 が 話せる と いう'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 して いた 。 2011 年 に 第 26 回 日本 化学 会 学生 委員 会 奨励 賞 ( 第 2 年次 審査'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 中心 と する 言語 学 研究 を 行って いる 。 東京 都 ・ 豊島 区 の お 見合い 相手 。'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ReformerTokenizer, GPT2Model
tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese')
model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese')
text = "早稲田 大学 で 自然 言語 処理 を"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
The GPT-2 model was pretrained on Japanese Wikipedia, dumped on 2022-03-20, and the Japanese portion of CC-100.
## Training procedure
### Preprocessing
The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining.
The model was trained on 8 NVIDIA A100 GPUs. |
Ahmedgr/DistilBert_Fine_tune_QuestionVsAnswer | 4d864046b31a0251a2e0eb1c8dbbb300a3141af3 | 2022-04-05T09:24:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Ahmedgr | null | Ahmedgr/DistilBert_Fine_tune_QuestionVsAnswer | 127 | 1 | transformers | 4,248 | ---
tags:
- generated_from_trainer
model-index:
- name: DistilBert_Fine_tune_QuestionVsAnswer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBert_Fine_tune_QuestionVsAnswer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
jegormeister/robbert-v2-dutch-base-mqa-finetuned | f5427c2264b9d82255217fa4c7a851cba7a8171b | 2022-04-11T19:09:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"nl",
"dataset:clips/mqa",
"sentence-transformers",
"sentence-similarity",
"transformers",
"robbert"
] | sentence-similarity | false | jegormeister | null | jegormeister/robbert-v2-dutch-base-mqa-finetuned | 127 | 2 | sentence-transformers | 4,249 | ---
language: nl
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- robbert
datasets:
- clips/mqa
---
# jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). It was fine-tuned on 1,000,000 rows of Dutch FAQ question-answer pairs from [clips/mqa](https://huggingface.co/datasets/clips/mqa).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
model = AutoModel.from_pretrained('jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12500 with parameters:
```
{'batch_size': 80, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Abderrahim2/bert-finetuned-gender_classification | df2088723946d24d2222d76ca56e997a79de797a | 2022-06-01T14:39:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Abderrahim2 | null | Abderrahim2/bert-finetuned-gender_classification | 127 | null | transformers | 4,250 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-gender_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-gender_classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- F1: 0.9645
- Roc Auc: 0.9732
- Accuracy: 0.964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.1679 | 1.0 | 1125 | 0.1781 | 0.928 | 0.946 | 0.927 |
| 0.1238 | 2.0 | 2250 | 0.1252 | 0.9516 | 0.9640 | 0.95 |
| 0.0863 | 3.0 | 3375 | 0.1283 | 0.9515 | 0.9637 | 0.95 |
| 0.0476 | 4.0 | 4500 | 0.1419 | 0.9565 | 0.9672 | 0.956 |
| 0.0286 | 5.0 | 5625 | 0.1428 | 0.9555 | 0.9667 | 0.954 |
| 0.0091 | 6.0 | 6750 | 0.1515 | 0.9604 | 0.9700 | 0.959 |
| 0.0157 | 7.0 | 7875 | 0.1535 | 0.9580 | 0.9682 | 0.957 |
| 0.0048 | 8.0 | 9000 | 0.1484 | 0.9645 | 0.9732 | 0.964 |
| 0.0045 | 9.0 | 10125 | 0.1769 | 0.9605 | 0.9703 | 0.96 |
| 0.0037 | 10.0 | 11250 | 0.2007 | 0.9565 | 0.9672 | 0.956 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
IDEA-CCNL/Randeng-Pegasus-523M-Chinese | fbd9bc0ebd910efba728295538b7780c26906882 | 2022-06-30T06:59:39.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"zh",
"arxiv:1912.08777",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | IDEA-CCNL | null | IDEA-CCNL/Randeng-Pegasus-523M-Chinese | 127 | 1 | transformers | 4,251 | ---
language: zh
tags:
- summarization
inference: False
---
IDEA-CCNL/Randeng-Pegasus-523M-Chinese model (Chinese), whose code has been merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
The 523-million-parameter randeng_pegasus_large model was trained on 180G of Chinese data with sampled gap-sentence ratios, stochastically sampling important sentences. The pretraining task is the same as described in the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
Unlike the English version of PEGASUS, and because SentencePiece is unstable for Chinese, we use jieba and BertTokenizer as the tokenizer in the Chinese PEGASUS model.
The model provided on the Hugging Face Hub is only the pretrained model; it has not been fine-tuned on downstream data yet.
We also pretrained a base model, available at [IDEA-CCNL/Randeng-Pegasus-238M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Chinese)
Task: Summarization
## Usage
```python
from transformers import PegasusForConditionalGeneration
# Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py in https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main
# Strongly recommend you git clone the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# and then you will see the tokenizers_pegasus.py and data_utils.py which are needed by pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Chinese")
text = "据微信公众号“界面”报道,4日上午10点左右,中国发改委反垄断调查小组突击查访奔驰上海办事处,调取数据材料,并对多名奔驰高管进行了约谈。截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model Output: 截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内
```
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
lizz27/DialoGPT-medium-BaymaxBot | 7ca296834bbdadcd73fcf1349af84db83a09dde3 | 2022-07-28T19:19:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lizz27 | null | lizz27/DialoGPT-medium-BaymaxBot | 127 | null | transformers | 4,252 | ---
tags:
- conversational
---
# DialoGPT BaymaxBot |
KETI-AIR/ke-t5-small-ko | 106751e579e337dda2337d76aa4b871e671f2ad4 | 2021-06-23T03:12:04.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-small-ko | 126 | null | transformers | 4,253 | Entry not found |
algoprog/mimics-bart-base | 276f8e003cd5a9b9fd39df99d2197789d09a0f7c | 2022-02-24T01:31:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | algoprog | null | algoprog/mimics-bart-base | 126 | null | transformers | 4,254 | Entry not found |
beomi/beep-KcELECTRA-base-hate | 41220dd1598bb646f6ca87e80a4229f5b6d1014c | 2021-10-23T05:48:36.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | beomi | null | beomi/beep-KcELECTRA-base-hate | 126 | null | transformers | 4,255 | Entry not found |
cristian-popa/bart-tl-all | c1b3c7554b701f481848701efd2653ab690d3186 | 2021-09-22T08:18:03.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"transformers",
"topic labeling",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | cristian-popa | null | cristian-popa/bart-tl-all | 126 | null | transformers | 4,256 | ---
language:
- en
tags:
- topic labeling
license: apache-2.0
metrics:
- ndcg
---
# MyModel
## Model description
This is the `BART-TL-all` model from the paper [BART-TL: Weakly-Supervised Topic Label Generation](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf). We aim to solve the topic labeling task using generative methods, rather than by selecting from a pool of candidate labels as was done in previous state-of-the-art work.
For more details not covered here, you can read the paper or look at the open-source implementation: https://github.com/CristianViorelPopa/BART-TL-topic-label-generation.
There are two models made available from the paper:
* [BART-TL-all](https://huggingface.co/cristian-popa/bart-tl-all)
* [BART-TL-ng](https://huggingface.co/cristian-popa/bart-tl-ng)
## Intended uses & limitations
#### How to use
The model takes in a topic, represented as a space-separated series of words. Such topics can be generated using LDA, as was done for gathering the fine-tuning dataset for the model.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
mname = "cristian-popa/bart-tl-all"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "site web google search website online internet social content user"
enc = tokenizer(input, return_tensors="pt", truncation=True, padding="max_length", max_length=128)
outputs = model.generate(
input_ids=enc.input_ids,
attention_mask=enc.attention_mask,
max_length=15,
min_length=1,
do_sample=False,
num_beams=25,
length_penalty=1.0,
repetition_penalty=1.5
)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # application programming interface
```
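The space-separated topic terms fed to the model are typically the top words of an LDA topic. As a minimal, hypothetical sketch (the toy corpus, the number of topics, and the use of gensim below are illustrative assumptions, not the exact setup from the paper), such an input string could be produced like this:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# toy corpus of tokenized documents (illustrative only)
docs = [
    ["google", "search", "website", "online", "content"],
    ["user", "internet", "social", "site", "web"],
    ["website", "online", "user", "content", "search"],
]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# fit a small LDA model; a real topic model would use far more documents and topics
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)

# turn the top words of topic 0 into the space-separated input expected by BART-TL
topic_terms = [word for word, _ in lda.show_topic(0, topn=10)]
model_input = " ".join(topic_terms)
print(model_input)
```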
#### Limitations and bias
The model may not generate accurate labels for topics from domains unrelated to the ones it was fine-tuned on, such as gastronomy.
## Training data
The model was fine-tuned on 5 different StackExchange corpora (see https://archive.org/download/stackexchange for a full list of existing such corpora): English, biology, economics, law, and photography. 100 topics are extracted using LDA for each of these corpora, filtered for coherence and then used for obtaining the final model here.
## Training procedure
The large Facebook BART model is fine-tuned in a weakly-supervised manner, making use of the unsupervised candidate selection of the [NETL](https://www.aclweb.org/anthology/C16-1091.pdf) method, along with other heuristic labels, such as n-grams from the topics, relevant sentences in the corpora and noun phrases. The dataset is a one-to-many mapping from topics to labels. More details on training and parameters can be found in the [paper](https://www.aclweb.org/anthology/2021.eacl-main.121.pdf) or by following [this notebook](https://github.com/CristianViorelPopa/BART-TL-topic-label-generation/blob/main/notebooks/end_to_end_workflow.ipynb).
## Eval results
model | Top-1 Avg. | Top-3 Avg. | Top-5 Avg. | nDCG-1 | nDCG-3 | nDCG-5
------------|------------|------------|------------|--------|--------|-------
NETL (U) | 2.66 | 2.59 | 2.50 | 0.83 | 0.85 | 0.87
NETL (S) | 2.74 | 2.57 | 2.49 | 0.88 | 0.85 | 0.88
BART-TL-all | 2.64 | 2.52 | 2.43 | 0.83 | 0.84 | 0.87
BART-TL-ng | 2.62 | 2.50 | 2.33 | 0.82 | 0.84 | 0.85
### BibTeX entry and citation info
```bibtex
@inproceedings{popa-rebedea-2021-bart,
title = "{BART}-{TL}: Weakly-Supervised Topic Label Generation",
author = "Popa, Cristian and
Rebedea, Traian",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-main.121",
pages = "1418--1425",
abstract = "We propose a novel solution for assigning labels to topic models by using multiple weak labelers. The method leverages generative transformers to learn accurate representations of the most important topic terms and candidate labels. This is achieved by fine-tuning pre-trained BART models on a large number of potential labels generated by state of the art non-neural models for topic labeling, enriched with different techniques. The proposed BART-TL model is able to generate valuable and novel labels in a weakly-supervised manner and can be improved by adding other weak labelers or distant supervision on similar tasks.",
}
``` |
facebook/wav2vec2-base-10k-voxpopuli-ft-fr | e6d7b145359a2a07df759369aea64307f4f6031a | 2021-07-06T01:50:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"arxiv:2101.00390",
"transformers",
"audio",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-10k-voxpopuli-ft-fr | 126 | null | transformers | 4,257 | ---
language: fr
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in fr (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fr")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-fr")
# load dataset
ds = load_dataset("common_voice", "fr", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
laxya007/gpt2_manage | 37c66c780e86fc421df348796706b93613eecd81 | 2021-05-23T07:42:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_manage | 126 | null | transformers | 4,258 | Entry not found |
m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition | ed9982a2c7a3fd79a918bac683a9e4cbfbad772e | 2021-07-06T11:11:59.000Z | [
"pytorch",
"jax",
"wav2vec2",
"el",
"dataset:aesdd",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"speech-emotion-recognition",
"license:apache-2.0"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition | 126 | 2 | transformers | 4,259 | ---
language: el
datasets:
- aesdd
tags:
- audio
- automatic-speech-recognition
- speech
- speech-emotion-recognition
license: apache-2.0
---
# Emotion Recognition in Greek (el) Speech using Wav2Vec 2.0
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-xlsr-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
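# Note: Wav2Vec2ForSpeechClassification used below is not part of the transformers library;
# it is a custom classification head defined in the author's soxan repository (linked in the
# "Questions?" section below), so import or copy that class definition before running the next line.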
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/disgust.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Emotion': 'anger', 'Score': '0.0%'},
{'Emotion': 'disgust', 'Score': '99.2%'},
{'Emotion': 'fear', 'Score': '0.1%'},
{'Emotion': 'happiness', 'Score': '0.3%'},
{'Emotion': 'sadness', 'Score': '0.5%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model, both overall and per class.
| Emotions | precision | recall | f1-score | accuracy |
|-----------|-----------|--------|----------|----------|
| anger | 0.92 | 1.00 | 0.96 | |
| disgust | 0.85 | 0.96 | 0.90 | |
| fear | 0.88 | 0.88 | 0.88 | |
| happiness | 0.94 | 0.71 | 0.81 | |
| sadness | 0.96 | 1.00 | 0.98 | |
|           |           |        | Overall  | 0.91     |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
mrm8488/bert2bert-multilingual_shared-question-generation | 91a59b9ccdf9e9ea3fcf1bf24d2640fb06eb1569 | 2020-12-29T19:10:07.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert-multilingual_shared-question-generation | 126 | 2 | transformers | 4,260 | Entry not found |
uclanlp/plbart-java-en_XX | b1ca7ca4c18a8b23c9eff5d25eff783d23e07d15 | 2021-11-09T17:08:51.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-java-en_XX | 126 | null | transformers | 4,261 | Entry not found |
apple/mobilevit-xx-small | 1527dac02513f12172463b98c9ebe868d7866023 | 2022-06-02T10:50:02.000Z | [
"pytorch",
"coreml",
"mobilevit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.02178",
"transformers",
"vision",
"license:other"
] | image-classification | false | apple | null | apple/mobilevit-xx-small | 126 | null | transformers | 4,262 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileViT (extra extra small-sized model)
MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-xx-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-xx-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes.
## Training procedure
### Preprocessing
Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping.
To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320).
At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256.
Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
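For illustration, the inference-time preprocessing described above could be reproduced manually with torchvision. This is only a rough sketch of what `MobileViTFeatureExtractor` already does for you; the exact interpolation mode and tensor layout are assumptions:

```python
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(288),        # resize the short side to 288
    transforms.CenterCrop(256),    # center-crop to the 256x256 inference resolution
    transforms.ToTensor(),         # scales pixels to [0, 1], channels-first RGB
])

image = Image.open("example.jpg").convert("RGB")
pixel_values = preprocess(image)

# flip the RGB channels to BGR order, as expected by the model, and add a batch dimension
pixel_values = pixel_values[[2, 1, 0], :, :].unsqueeze(0)  # shape: (1, 3, 256, 256)
```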
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|-------------------|-------------------------|-------------------------|-----------|-------------------------------------------------|
| **MobileViT-XXS** | **69.0** | **88.9** | **1.3 M** | https://huggingface.co/apple/mobilevit-xx-small |
| MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/apple/mobilevit-x-small |
| MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
|
kkuramitsu/kogi-mt5-test | 87c9e6ca88073ebb2140dfedb36c19572bd85cd9 | 2022-06-26T17:50:05.000Z | [
"pytorch",
"ja",
"dataset:mc4",
"t5",
"text2text-generation",
"seq2seq",
"license:cc-by-sa-4.0"
] | text2text-generation | false | kkuramitsu | null | kkuramitsu/kogi-mt5-test | 126 | 2 | null | 4,263 | ---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
datasets:
- mc4
---
# Kogi Python-Code Generation Model |
JasperD-UGent/roberta-base-bne-difficulty-classifier | ce93b4e701be532c5b700b2232ff990e079b63e3 | 2022-06-30T15:27:35.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"transformers",
"autotrain_compatible"
] | token-classification | false | JasperD-UGent | null | JasperD-UGent/roberta-base-bne-difficulty-classifier | 126 | null | transformers | 4,264 | ---
language:
- es
--- |
pszemraj/grammar-synthesis-small | 70e8d2f07ee64f82e52a82c1ed50995fc581ab17 | 2022-07-22T08:36:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:jfleg",
"arxiv:2107.06751",
"transformers",
"grammar",
"spelling",
"punctuation",
"error-correction",
"grammar synthesis",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | pszemraj | null | pszemraj/grammar-synthesis-small | 126 | null | transformers | 4,265 | ---
license: cc-by-nc-sa-4.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
datasets:
- jfleg
widget:
- text: "i can has cheezburger"
example_title: "cheezburger"
- text: "There car broke down so their hitching a ride to they're class."
example_title: "compound-1"
- text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s
i again tort watfettering an we have estimated the trend an
called wot to be called sthat of exty right now we can and look at
wy this should not hare a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty"
example_title: "Transcribed Audio Example 2"
- text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
example_title: "incorrect word choice (context)"
- text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording
an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about
ta ohow to remove trents in these nalitives from time series"
example_title: "lowercased audio transcription output"
parameters:
max_length: 128
min_length: 2
num_beams: 8
repetition_penalty: 1.3
length_penalty: 1
early_stopping: True
---
# grammar-synthesis-small (beta)
This model is a fine-tuned version of [google/t5-small-lm-adapt](https://huggingface.co/google/t5-small-lm-adapt) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset.
usage in Python (after `pip install transformers`):
```
from transformers import pipeline
corrector = pipeline(
'text2text-generation',
'pszemraj/grammar-synthesis-small',
)
raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
Check out a simple demo in [Google Colab here](https://colab.research.google.com/gist/pszemraj/06fac5b608889e258229a659cc53485f/demo-for-grammar-synthesis-small.ipynb).
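The generation settings listed in the widget metadata above (beam search, repetition penalty, etc.) can be passed straight through the pipeline as keyword arguments, which are forwarded to `model.generate()`. A small sketch:

```
from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
)

raw_text = 'i can has cheezburger'
# generation kwargs are forwarded to model.generate()
results = corrector(
    raw_text,
    max_length=128,
    num_beams=8,
    repetition_penalty=1.3,
    length_penalty=1.0,
    early_stopping=True,
)
print(results)
```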
## Model description
The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on potentially grammatically incorrect text **that could have a lot of mistakes**, with the important qualifier that **it does not semantically change text/information that IS grammatically correct.**
Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
## Limitations
- dataset: `cc-by-nc-sa-4.0`
- model: `apache-2.0`
- this is **still a work-in-progress** and while probably useful for "single-shot grammar correction" in a lot of cases, **give the outputs a glance for correctness ok?**
## Use Cases
Obviously, this section is quite general as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases:
1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (this is literally some of the examples) or something like handwriting OCR.
- To be investigated further: depending on what model/system is used, it _might_ be worth applying this after OCR on typed characters.
2. Correcting/infilling text generated by text generation models to be cohesive/remove obvious errors that break the conversation immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B).
> An example of this model running on CPU with beam search:
```
original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```
_Note: that I have some other logic that removes any periods at the end of the final sentence in this chatbot setting [to avoid coming off as passive aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)_
3. Somewhat related to #2 above, fixing/correcting so-called [tortured phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways that text was generated by a language model. _Note that SOME of these are not fixed, especially as they venture into domain-specific terminology (e.g. "irregular timberland" instead of "Random Forest")._
## Training and evaluation data
More information needed 😉
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
shozi218/finetuned-emotion-model | ed639a694ce83e4ebaee685a87d860803c50cf7d | 2022-07-25T21:54:52.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | shozi218 | null | shozi218/finetuned-emotion-model | 126 | null | transformers | 4,266 | Entry not found |
cointegrated/rut5-small-normalizer | 05222a03ecee390bef0b3966dbb983aa53358a92 | 2021-06-23T12:04:59.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"ru",
"transformers",
"normalization",
"denoising autoencoder",
"russian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | cointegrated | null | cointegrated/rut5-small-normalizer | 125 | 2 | transformers | 4,267 | ---
language: "ru"
tags:
- normalization
- denoising autoencoder
- russian
widget:
- text: "меня тобой не понимать"
license: mit
---
This is a small Russian denoising autoencoder. It can be used for restoring corrupted sentences.
This model was produced by fine-tuning the [rut5-small](https://huggingface.co/cointegrated/rut5-small) model on the task of reconstructing a sentence:
* restoring word positions (after slightly shuffling them)
* restoring dropped words and punctuation marks (after dropping some of them randomly)
* restoring inflection of words (after changing their inflection randomly using [natasha](https://github.com/natasha/natasha) and [pymorphy2](https://github.com/kmike/pymorphy2) packages)
The fine-tuning was performed on a [Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download/Russian) of Russian sentences.
The model can be applied as follows:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-normalizer")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-normalizer")
text = 'меня тобой не понимать'
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, top_p=0.95,
num_return_sequences=5,
repetition_penalty=2.5,
max_length=32,
)
for h in hypotheses:
print(tokenizer.decode(h, skip_special_tokens=True))
```
A possible output is:
```
# Мне тебя не понимать.
# Если бы ты понимаешь меня?
# Я с тобой не понимаю.
# Я тебя не понимаю.
# Я не понимаю о чем ты.
``` |
elozano/bert-base-cased-clickbait-news | af3154cf4325e687bbf08384b710859dfc69a7d1 | 2022-02-08T19:05:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | elozano | null | elozano/bert-base-cased-clickbait-news | 125 | 3 | transformers | 4,268 | Entry not found |
jadohu/BEiT-finetuned | 2f708c7f94b602ab5ea7690216b5757f2bfe93a6 | 2022-05-18T07:51:57.000Z | [
"pytorch",
"tensorboard",
"beit",
"image-classification",
"dataset:cifar10",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | jadohu | null | jadohu/BEiT-finetuned | 125 | 1 | transformers | 4,269 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: BEiT-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BEiT-finetuned
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0256
- Accuracy: 0.9918
## Model description
More information needed
## Intended uses & limitations
More information needed
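Since the card does not document usage, here is a minimal inference sketch for this CIFAR-10 classifier. It assumes the checkpoint ships the usual BEiT feature-extractor configuration and an `id2label` mapping; the image path is illustrative:

```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from PIL import Image

feature_extractor = AutoFeatureExtractor.from_pretrained("jadohu/BEiT-finetuned")
model = AutoModelForImageClassification.from_pretrained("jadohu/BEiT-finetuned")

image = Image.open("example.png").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# the model was fine-tuned on CIFAR-10, so this maps to one of its 10 classes
predicted_class = model.config.id2label[logits.argmax(-1).item()]
print(predicted_class)
```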
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3296 | 1.0 | 351 | 0.0492 | 0.9862 |
| 0.2353 | 2.0 | 702 | 0.0331 | 0.9894 |
| 0.2127 | 3.0 | 1053 | 0.0256 | 0.9918 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
lewtun/autotrain-acronym-identification-7324788 | e9eb71cdf798163f4df2d8990d9035085ed73abf | 2022-07-04T12:12:08.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:lewtun/autotrain-data-acronym-identification",
"dataset:acronym_identification",
"transformers",
"autotrain",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | lewtun | null | lewtun/autotrain-acronym-identification-7324788 | 125 | null | transformers | 4,270 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain \U0001F917"
datasets:
- lewtun/autotrain-data-acronym-identification
- acronym_identification
co2_eq_emissions: 10.435358044493652
model-index:
- name: autotrain-demo
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: acronym_identification
type: acronym_identification
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9708090976211485
- task:
type: token-classification
name: Token Classification
dataset:
name: acronym_identification
type: acronym_identification
config: default
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.9790777669399117
verified: true
- name: Precision
type: precision
value: 0.9197835301644851
verified: true
- name: Recall
type: recall
value: 0.946479027789208
verified: true
- name: F1
type: f1
value: 0.9329403493591477
verified: true
- name: loss
type: loss
value: 0.06360606849193573
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 7324788
- CO2 Emissions (in grams): 10.435358044493652
## Validation Metrics
- Loss: 0.08991389721632004
- Accuracy: 0.9708090976211485
- Precision: 0.8998421675654347
- Recall: 0.9309429854401959
- F1: 0.9151284109149278
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-acronym-identification-7324788
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200 | 82a1dfce91a42debc80e51f0b3f3b19e10bab633 | 2021-08-02T18:38:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200 | 124 | 1 | transformers | 4,271 | Entry not found |
alperiox/autonlp-user-review-classification-536415182 | 26060abfefaeb6899b5be46cfc378a1425df371e | 2022-01-28T16:30:08.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:alperiox/autonlp-data-user-review-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | alperiox | null | alperiox/autonlp-user-review-classification-536415182 | 124 | null | transformers | 4,272 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alperiox/autonlp-data-user-review-classification
co2_eq_emissions: 1.268309634217171
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 536415182
- CO2 Emissions (in grams): 1.268309634217171
## Validation Metrics
- Loss: 0.44733062386512756
- Accuracy: 0.8873239436619719
- Macro F1: 0.8859416445623343
- Micro F1: 0.8873239436619719
- Weighted F1: 0.8864646766540891
- Macro Precision: 0.8848522167487685
- Micro Precision: 0.8873239436619719
- Weighted Precision: 0.8883299798792756
- Macro Recall: 0.8908045977011494
- Micro Recall: 0.8873239436619719
- Weighted Recall: 0.8873239436619719
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alperiox/autonlp-user-review-classification-536415182
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
gdario/biobert_bioasq | 51cde02b5c9373d78db934bdab1db4a6ac997ec3 | 2021-05-19T17:13:28.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | gdario | null | gdario/biobert_bioasq | 124 | null | transformers | 4,273 | Entry not found |
hfl/chinese-electra-base-generator | e005fa8fbff9b175ea5a49729343fd141464289a | 2021-03-03T01:39:38.000Z | [
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | hfl | null | hfl/chinese-electra-base-generator | 124 | null | transformers | 4,274 | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
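Following the loading note above, a minimal fill-mask sketch for this generator checkpoint could look like the following; the Chinese example sentence ("Beijing is the capital of [MASK]国 / China") is illustrative only:

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="hfl/chinese-electra-base-generator",
    tokenizer="hfl/chinese-electra-base-generator",
)

# predict the masked character in "北京是[MASK]国的首都。" ("Beijing is the capital of [MASK] country.")
print(fill_mask("北京是[MASK]国的首都。"))
```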
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli | c435eb5136a669fbb3ba9990c894d609267717dd | 2021-05-19T20:05:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | huggingface | null | huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli | 124 | null | transformers | 4,275 | Entry not found |
seyonec/ChemBERTA_PubChem1M_shard00_155k | 63f1b55ceb072fee347d9a7972ea703bf462da9a | 2021-05-20T20:54:07.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/ChemBERTA_PubChem1M_shard00_155k | 124 | null | transformers | 4,276 | Entry not found |
allenai/tk-instruct-3b-def-pos-neg-expl | 38c2e6d5235c811ad3562af0e508e94c0f7ebe0c | 2022-05-27T06:30:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/tk-instruct-3b-def-pos-neg-expl | 124 | 1 | transformers | 4,277 | ---
language: en
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, task definition or demonstration examples or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to school.'
```
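Since this particular checkpoint was trained with task definitions, positive and negative demonstration examples, and explanations in the prompt, you can also encode all of those pieces in the input string (continuing with the `tokenizer` and `model` loaded above). The template below is an illustrative approximation; the exact encoding used during training is defined in the Tk-Instruct repository linked above:

```python
>>> prompt = (
...     "Definition: return the currency of the given country. "
...     "Positive Example 1 - Input: Japan. Output: Japanese Yen. "
...     "Explanation: The Japanese Yen is the official currency of Japan. "
...     "Negative Example 1 - Input: Japan. Output: Tokyo. "
...     "Explanation: Tokyo is the capital of Japan, not its currency. "
...     "Now complete the following example - Input: India. Output:"
... )
>>> input_ids = tokenizer.encode(prompt, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True)  # the model should produce something like 'Indian Rupee'
```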
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting results, you are welcome to share them with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encodings, we found they can usually work with other type of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |
aiassociates/t5-small-grammar-correction-german | e5c4c530dbb4cab338732e1ac3e6b288219a1d9f | 2022-05-19T13:05:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"de",
"transformers",
"grammar",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | aiassociates | null | aiassociates/t5-small-grammar-correction-german | 124 | 1 | transformers | 4,278 | ---
language: de
tags:
- grammar
- text2text-generation
license: cc-by-nc-sa-4.0
widget:
- text: "grammar: hier ein kleines beispiel was haltet ihr von der korrektur"
---
# T5 Grammar Correction
This model restores upper and lower case as well as punctuation. It was trained with [Happy Transformer](https://github.com/EricFillion/happy-transformer) on the German Wikipedia dump.
## Usage
`pip install happytransformer`
```python
from happytransformer import HappyTextToText, TTSettings
happy_tt = HappyTextToText("T5", "aiassociates/t5-small-grammar-correction-german")
args = TTSettings(num_beams=5, min_length=1)
# Add the prefix "grammar: " before each input
result = happy_tt.generate_text("grammar: hier ein kleines beispiel was haltet ihr von der korrektur", args=args)
print(result.text) # Hier ein kleines Beispiel: Was haltet ihr von der Korrektur?
```
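If you prefer to skip Happy Transformer, the checkpoint is a regular T5 model, so a plain `transformers` sketch along these lines should also work; the generation settings here are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("aiassociates/t5-small-grammar-correction-german")
model = AutoModelForSeq2SeqLM.from_pretrained("aiassociates/t5-small-grammar-correction-german")

# the same "grammar: " prefix is required, as in the Happy Transformer example
inputs = tokenizer("grammar: hier ein kleines beispiel was haltet ihr von der korrektur", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, min_length=1, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```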
## Authors
**David Hustadt:** [email protected]
## About us
[AI.Associates](https://www.ai.associates/)
[LinkedIn](https://www.linkedin.com/company/ai-associates)
We're always looking for developers to join us. Feel free to contact us [email protected] |
Helsinki-NLP/opus-mt-ar-es | 3ef30bf7f35ba7a4da5a484db92a2c2be18ef521 | 2021-01-18T07:47:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-es | 123 | null | transformers | 4,279 | ---
language:
- ar
- es
tags:
- translation
license: apache-2.0
---
### ara-spa
* source group: Arabic
* target group: Spanish
* OPUS readme: [ara-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.spa | 46.0 | 0.641 |
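The card does not include a usage snippet; a minimal sketch with the standard Marian classes (the Arabic example sentence, "Where is the library?", is only illustrative) would look like this:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ar-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate an Arabic sentence into Spanish
batch = tokenizer(["أين المكتبة؟"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```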
### System Info:
- hf_name: ara-spa
- source_languages: ara
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'es']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: spa
- short_pair: ar-es
- chrF2_score: 0.6409999999999999
- bleu: 46.0
- brevity_penalty: 0.9620000000000001
- ref_len: 9708.0
- src_name: Arabic
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: es
- prefer_old: False
- long_pair: ara-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-sw | cc959449d3b74732da0527bb57d79a14e729fae1 | 2021-09-09T21:39:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sw | 123 | 2 | transformers | 4,280 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sw
* source languages: en
* target languages: sw
* OPUS readme: [en-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.en.sw | 24.2 | 0.527 |
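As with the other OPUS-MT checkpoints, the model can be used through the standard Marian classes; the English example sentence below is only illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-sw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate an English sentence into Swahili
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```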
|
Theivaprakasham/bert-base-cased-twitter_sentiment | 727ed630a04949a37dab15054b441eb49ce9f4f5 | 2021-12-06T09:52:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Theivaprakasham | null | Theivaprakasham/bert-base-cased-twitter_sentiment | 123 | null | transformers | 4,281 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-twitter_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-twitter_sentiment
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6907
- Accuracy: 0.7132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8901 | 1.0 | 1387 | 0.8592 | 0.6249 |
| 0.8085 | 2.0 | 2774 | 0.7600 | 0.6822 |
| 0.7336 | 3.0 | 4161 | 0.7170 | 0.6915 |
| 0.6938 | 4.0 | 5548 | 0.7018 | 0.7016 |
| 0.6738 | 5.0 | 6935 | 0.6926 | 0.7067 |
| 0.6496 | 6.0 | 8322 | 0.6910 | 0.7088 |
| 0.6599 | 7.0 | 9709 | 0.6902 | 0.7088 |
| 0.631 | 8.0 | 11096 | 0.6910 | 0.7095 |
| 0.6327 | 9.0 | 12483 | 0.6925 | 0.7146 |
| 0.6305 | 10.0 | 13870 | 0.6907 | 0.7132 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
hfl/english-pert-base | 154b7b6bbedca1a722a72982a00e071ea3c88549 | 2022-02-24T02:58:25.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"en",
"transformers",
"license:cc-by-nc-sa-4.0"
] | feature-extraction | false | hfl | null | hfl/english-pert-base | 123 | 2 | transformers | 4,282 | ---
language:
- en
license: "cc-by-nc-sa-4.0"
---
# Please use 'Bert' related functions to load this model!
# ALL English models are UNCASED (lowercase=True)
Under construction...
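Following the note above about using BERT-related classes, a minimal loading sketch (the example sentence is illustrative) might be:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/english-pert-base")
model = BertModel.from_pretrained("hfl/english-pert-base")

# PERT is used here as a feature extractor; remember that the English models are uncased
inputs = tokenizer("hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```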
Please visit our GitHub repo for more information: https://github.com/ymcui/PERT |
julien-c/distilbert-feature-extraction | b7890af3b5cea5d4b74313aac19619a2e1915685 | 2021-06-04T21:47:24.000Z | [
"pytorch",
"distilbert",
"transformers",
"feature-extraction"
] | feature-extraction | false | julien-c | null | julien-c/distilbert-feature-extraction | 123 | 2 | transformers | 4,283 | ---
tags:
- feature-extraction
widget:
- text: "Hello world"
---
# Distilbert, used as a Feature Extractor
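A minimal sketch of extracting features through the pipeline API, assuming the repository ships its tokenizer files (the input sentence mirrors the widget example above):

```python
from transformers import pipeline

extractor = pipeline("feature-extraction", model="julien-c/distilbert-feature-extraction")

# returns a nested list of shape (1, num_tokens, hidden_size)
features = extractor("Hello world")
print(len(features[0]), len(features[0][0]))
```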
|
monsoon-nlp/tamillion | 12ea4bde01dba6721a5bd70d334f2a08e47c68fc | 2020-10-30T03:58:42.000Z | [
"pytorch",
"tf",
"electra",
"feature-extraction",
"ta",
"transformers"
] | feature-extraction | false | monsoon-nlp | null | monsoon-nlp/tamillion | 123 | null | transformers | 4,284 | ---
language: ta
---
# TaMillion
This is the second version of a Tamil language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).
Tokenization and pre-training CoLab: https://colab.research.google.com/drive/1Pwia5HJIb6Ad4Hvbx5f-IjND-vCaJzSE?usp=sharing
V1: small model with GPU; 190,000 steps;
V2 (current): base model with TPU and larger corpus; 224,000 steps
## Classification
Sudalai Rajkumar's Tamil-NLP page contains classification and regression tasks:
https://www.kaggle.com/sudalairajkumar/tamil-nlp
Notebook: https://colab.research.google.com/drive/1_rW9HZb6G87-5DraxHvhPOzGmSMUc67_?usp=sharin
The model outperformed mBERT on news classification:
(Random: 16.7%, mBERT: 53.0%, TaMillion: 75.1%)
The model slightly outperformed mBERT on movie reviews:
(RMSE - mBERT: 0.657, TaMillion: 0.626)
Equivalent accuracy on the Tirukkural topic task.
## Question Answering
I didn't find a Tamil-language question answering dataset, but this model could be finetuned
to train a QA model. See Hindi and Bengali examples here: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar
## Corpus
Trained on
IndicCorp Tamil (11GB) https://indicnlp.ai4bharat.org/corpora/
and 1 October 2020 dump of https://ta.wikipedia.org (482MB)
## Vocabulary
Included as vocab.txt in the upload
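## Loading the model

A minimal sketch for loading TaMillion as a feature extractor; the short Tamil input ("வணக்கம்", i.e. "hello") is illustrative:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = AutoModel.from_pretrained("monsoon-nlp/tamillion")

# encode a short Tamil sentence and inspect the contextual embeddings
inputs = tokenizer("வணக்கம்", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```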
|
pucpr/clinicalnerpt-healthcare | e00266d90441fc5950099c27f71820be7c2edb4a | 2021-10-13T09:32:28.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-healthcare | 123 | 4 | transformers | 4,285 | ---
language: "pt"
widget:
- text: "Acompanhamento da diabetes, paciente encaminhado da unidade de saúde."
- text: "Paciente encaminhado por alteração na função renal."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - HealthCare
The HealthCare NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models for clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr) for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
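A minimal sketch for running the model through the token-classification pipeline; the Portuguese sentence is taken from the widget examples above:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="pucpr/clinicalnerpt-healthcare",
    tokenizer="pucpr/clinicalnerpt-healthcare",
    aggregation_strategy="simple",  # merge IOB2 sub-tokens into whole entities
)

print(ner("Acompanhamento da diabetes, paciente encaminhado da unidade de saúde."))
```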
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
superb/wav2vec2-large-superb-ic | 06b9dc6cdef3881d673b3b771ffa46e51c6d7a66 | 2021-09-04T19:52:29.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/wav2vec2-large-superb-ic | 123 | null | transformers | 4,286 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
license: apache-2.0
---
# Wav2Vec2-Large for Intent Classification
## Model description
This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
speakers. SUPERB uses the
[Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/)
dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands).
## Usage examples
You can use the model directly like so:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ic", split="test")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-ic")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-ic")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
action_ids = torch.argmax(logits[:, :6], dim=-1).tolist()
action_labels = [model.config.id2label[_id] for _id in action_ids]
object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist()
object_labels = [model.config.id2label[_id + 6] for _id in object_ids]
location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist()
location_labels = [model.config.id2label[_id + 20] for _id in location_ids]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9528` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
Finnish-NLP/t5-large-nl36-finnish | 18c21fd9af82c53217100d0c40fd36dd330e1061 | 2022-07-12T13:30:36.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"transformers",
"finnish",
"t5x",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Finnish-NLP | null | Finnish-NLP/t5-large-nl36-finnish | 123 | null | transformers | 4,287 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-large-nl36 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
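For illustration, here is a hedged sketch of what such a span-corruption training pair looks like. The sentinel-token naming follows the standard Hugging Face T5 convention, and the Finnish sentence is only an example, not taken from the training data.

```python
# original sentence (example only):
# "Helsinki on Suomen pääkaupunki ja suurin kaupunki."

# encoder input: masked spans are replaced by unique sentinel tokens
input_text = "Helsinki on <extra_id_0> ja suurin <extra_id_1>."

# decoder target: each sentinel token followed by the span it replaced
target_text = "<extra_id_0> Suomen pääkaupunki <extra_id_1> kaupunki <extra_id_2>"
```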
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-large-nl36](https://huggingface.co/google/t5-efficient-large-nl36) architecture's layer depth which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "large" model's architecture of 24 transformer layers.
In total, this model has 1425 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task such as text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out low-quality and non-Finnish examples. In addition, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model trained only on very clean Finnish text; this score indicates how "clean" the Finnish in a given text is. Lastly, all datasets were concatenated, and the 90th-percentile perplexity score was used as the threshold to filter out the worst-quality 10% of texts. Together these cleaned datasets amounted to around 76GB of text.
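A minimal sketch of that kind of perplexity-based filtering, assuming a KenLM model trained on clean Finnish text is available; the file name and threshold value below are illustrative, not the ones used for this corpus.

```python
import kenlm  # pip install kenlm

# hypothetical path to a KenLM model trained only on clean Finnish text
lm = kenlm.Model("finnish_clean.arpa")

def perplexity(text: str) -> float:
    # kenlm returns a log10 probability for the whole sentence
    log10_prob = lm.score(text, bos=True, eos=True)
    n_tokens = len(text.split()) + 1  # +1 for the end-of-sentence token
    return 10.0 ** (-log10_prob / n_tokens)

texts = ["Tämä on siisti ja hyvin kirjoitettu lause.", "asdf qwerty 1234 !!!"]
THRESHOLD = 1500.0  # illustrative; in practice the 90th percentile of the corpus was used
kept = [t for t in texts if perplexity(t) < THRESHOLD]
print(kept)
```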
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1.87M steps with a batch size of 32 (in total 31B tokens). The optimizer used was AdaFactor with a learning rate warmup of 10K steps at a constant learning rate of 1e-3, followed by an inverse square root decay of the learning rate.
Training code was from Google's Jax/Flax-based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
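A hedged sketch of that learning-rate schedule; the exact decay constant used by t5x may differ, this only shows the shape described above.

```python
def learning_rate(step: int, warmup_steps: int = 10_000, base_lr: float = 1e-3) -> float:
    # constant learning rate during warmup, then inverse square root decay
    if step < warmup_steps:
        return base_lr
    return base_lr * (warmup_steps / step) ** 0.5
```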
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the sixth row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |TBA |TBA |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing | ced36d680fb782cfd6c5a73d042af88587ce4ef4 | 2022-04-26T05:59:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2203.07836",
"transformers",
"AMRBART",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | xfbai | null | xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing | 123 | null | transformers | 4,288 | ---
language: en
tags:
- AMRBART
license: mit
---
## AMRBART-large-finetuned-AMR3.0-AMRParsing
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR3.0 dataset. It achieves a Smatch score of 84.2 on the evaluation set. More details are given in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
## Model description
Same with AMRBART.
## Training data
The model is finetuned on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 55,635
training instances, 1,722 validation instances, and 1,898 test instances.
## Intended uses & limitations
You can use the model for AMR parsing, but it's mostly intended to be used in the domain of News.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` |
junnyu/structbert-large-zh | b401f2453e90eed416d2b844b75758f645565f5a | 2022-05-18T05:49:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"zh",
"arxiv:1908.04577",
"transformers",
"structbert",
"tf2.0"
] | feature-extraction | false | junnyu | null | junnyu/structbert-large-zh | 123 | 1 | transformers | 4,289 | ---
language: zh
tags:
- structbert
- pytorch
- tf2.0
inference: False
---
# StructBERT: Unofficial Copy
Official Repository Link: https://github.com/alibaba/AliceMind/tree/main/StructBERT
**Disclaimer**
* This model card was not produced by the [AliceMind Team](https://github.com/alibaba/AliceMind/)
## Reproduce HFHub models:
Download model/tokenizer vocab
```bash
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/ch_large_bert_config.json && mv ch_large_bert_config.json config.json
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/ch_vocab.txt
wget https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/ch_model && mv ch_model pytorch_model.bin
```
```python
from transformers import BertConfig, BertModel, BertTokenizer
config = BertConfig.from_pretrained("./config.json")
model = BertModel.from_pretrained("./", config=config)
tokenizer = BertTokenizer.from_pretrained("./")
model.push_to_hub("structbert-large-zh")
tokenizer.push_to_hub("structbert-large-zh")
```
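Once pushed, the model can be loaded directly from the Hub. A minimal sketch for feature extraction is below (fine-tuning is still required for downstream tasks); the example sentence is illustrative.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("junnyu/structbert-large-zh")
model = BertModel.from_pretrained("junnyu/structbert-large-zh")

inputs = tokenizer("这是一个测试句子。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```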
[https://arxiv.org/abs/1908.04577](https://arxiv.org/abs/1908.04577)
# StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
## Introduction
We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training.
Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential
order of words and sentences, which leverage language structures at the word and sentence levels,
respectively.
## Pre-trained models
|Model | Description | #params | Download |
|------------------------|-------------------------------------------|------|------|
|structbert.en.large | StructBERT using the BERT-large architecture | 340M | [structbert.en.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model) |
|structroberta.en.large | StructRoBERTa continue training from RoBERTa | 355M | Coming soon |
|structbert.ch.large | Chinese StructBERT; BERT-large architecture | 330M | [structbert.ch.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/ch_model) |
## Results
The results of GLUE & CLUE tasks can be reproduced using the hyperparameters listed in the following "Example usage" section.
#### structbert.en.large
[GLUE benchmark](https://gluebenchmark.com/leaderboard)
|Model| MNLI | QNLIv2 | QQP | SST-2 | MRPC |
|--------------------|-------|-------|-------|-------|-------|
|structbert.en.large |86.86% |93.04% |91.67% |93.23% |86.51% |
#### structbert.ch.large
[CLUE benchmark](https://www.cluebenchmarks.com/)
|Model | CMNLI | OCNLI | TNEWS | AFQMC |
|--------------------|-------|-------|-------|-------|
|structbert.ch.large |84.47% |81.28% |68.67% |76.11% |
## Example usage
#### Requirements and Installation
* [PyTorch](https://pytorch.org/) version >= 1.0.1
* Install other libraries via
```
pip install -r requirements.txt
```
* For faster training install NVIDIA's [apex](https://github.com/NVIDIA/apex) library
#### Finetune MNLI
```
python run_classifier_multi_task.py \
--task_name MNLI \
--do_train \
--do_eval \
--do_test \
--amp_type O1 \
--lr_decay_factor 1 \
--dropout 0.1 \
--do_lower_case \
--detach_index -1 \
--core_encoder bert \
--data_dir path_to_glue_data \
--vocab_file config/vocab.txt \
--bert_config_file config/large_bert_config.json \
--init_checkpoint path_to_pretrained_model \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--fast_train \
--gradient_accumulation_steps 1 \
--output_dir path_to_output_dir
```
## Citation
If you use our work, please cite:
```
@article{wang2019structbert,
title={Structbert: Incorporating language structures into pre-training for deep language understanding},
author={Wang, Wei and Bi, Bin and Yan, Ming and Wu, Chen and Bao, Zuyi and Xia, Jiangnan and Peng, Liwei and Si, Luo},
journal={arXiv preprint arXiv:1908.04577},
year={2019}
}
``` |
cross-encoder/mmarco-mdeberta-v3-base-5negs-v1 | e4639f2fcee3da997e7da0a0948229ac172f83b1 | 2022-06-30T11:44:05.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | cross-encoder | null | cross-encoder/mmarco-mdeberta-v3-base-5negs-v1 | 123 | null | transformers | 4,290 | Entry not found |
Evelyn18/distilbert-base-uncased-becas-4 | a47c05762bdf53c07524d8a25bdf9f61992389e7 | 2022-07-05T21:55:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becas-4 | 123 | null | transformers | 4,291 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becas-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becas-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1357
## Model description
More information needed
## Intended uses & limitations
More information needed
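Although the card is still a stub, the fine-tuned checkpoint can be loaded for extractive question answering. A hedged sketch is below; the question and context are illustrative only.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/distilbert-base-uncased-becas-4")

result = qa(
    question="¿Qué tipo de becas se ofrecen?",  # illustrative example
    context="La universidad ofrece becas de excelencia académica y de apoyo económico a sus estudiantes.",
)
print(result["answer"], result["score"])
```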
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.9618 |
| No log | 2.0 | 18 | 4.1071 |
| No log | 3.0 | 27 | 3.5438 |
| No log | 4.0 | 36 | 3.2115 |
| No log | 5.0 | 45 | 2.9524 |
| No log | 6.0 | 54 | 3.0645 |
| No log | 7.0 | 63 | 2.9351 |
| No log | 8.0 | 72 | 3.1037 |
| No log | 9.0 | 81 | 3.1132 |
| No log | 10.0 | 90 | 3.1357 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
projecte-aina/m2m100_418M_ft_zh_ca | 7dbefb1702260be9d7b0313971590846560491ef | 2022-07-25T06:46:35.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"ca",
"zh",
"dataset:projecte-aina/ca_zh_wikipedia",
"transformers",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | projecte-aina | null | projecte-aina/m2m100_418M_ft_zh_ca | 123 | null | transformers | 4,292 | ---
license: cc-by-4.0
language:
- ca
- zh
datasets:
- projecte-aina/ca_zh_wikipedia
metrics:
- "bleu"
model-index:
- name: m2m100_418M_ft_zh_ca
results:
- task:
type: translation
dataset:
type: flores
name: Flores
metrics:
- name: BLEU
type: bleu
value: 18.0
---
## m2m100 fine-tuned on the ca_zh_wikipedia dataset for machine translation
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
## Model description
This model was obtained by fine-tuning the [m2m100_418M](https://huggingface.co/facebook/m2m100_418M) model on a Zh-Ca machine translation task with the [ca_zh_wikipedia](https://huggingface.co/datasets/projecte-aina/ca_zh_wikipedia) dataset that has been created along with the model. We also evaluate it on a general-domain multilingual testset [Flores-101](https://github.com/facebookresearch/flores).
## Intended Uses and Limitations
You can use this model for machine translation from Chinese to Catalan.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/m2m100_418M_ft_zh_ca")
model = AutoModelForSeq2SeqLM.from_pretrained("projecte-aina/m2m100_418M_ft_zh_ca")
```
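Since m2m100 is a many-to-many model, translation normally requires setting the source language on the tokenizer and forcing the target-language token at generation time. A minimal sketch follows; the example sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/m2m100_418M_ft_zh_ca")
model = AutoModelForSeq2SeqLM.from_pretrained("projecte-aina/m2m100_418M_ft_zh_ca")

tokenizer.src_lang = "zh"
text = "巴塞罗那是加泰罗尼亚的首府。"  # illustrative sentence
encoded = tokenizer(text, return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("ca"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```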
## Training
### Training Data
As a data for fine-tuning we used the [ca_zh_wikipedia](https://huggingface.co/datasets/projecte-aina/ca_zh_wikipedia) dataset extracted from Wikipedia.
### Training Procedure
#### Tokenization
The original [m2m100_418M](https://huggingface.co/facebook/m2m100_418M) model's sentencepiece tokenizer was used. The fine-tuning dataset, which contained both simplified and traditional Chinese, was converted to simplified characters.
#### Hyperparameters
The model was trained for 15 epochs with the default parameters and \\(LR = 2\mathrm{e}{-5}\\).
## Evaluation
### Variable and Metrics
We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores).
### Evaluation Results
Below are the evaluation results on machine translation from Chinese to Catalan, compared with the original m2m100, on the [Flores-101](https://github.com/facebookresearch/flores) test set.
|Test set | Model | BLEU |
| ------------|-------------| -----|
|Flores-101 | m2m100 | 17.5 |
| | m2m100_418M_ft_zh_ca | **18.0** |
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
|
projecte-aina/m2m100-418M-ft-de-ca | bb55f538f99435c1555022b2c5466e30f0f78461 | 2022-07-25T09:38:20.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"ca",
"de",
"dataset:Softcatala/parallel-catalan-corpus/deu-cat",
"transformers",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | projecte-aina | null | projecte-aina/m2m100-418M-ft-de-ca | 123 | null | transformers | 4,293 | ---
license: cc-by-4.0
language:
- ca
- de
datasets:
- Softcatala/parallel-catalan-corpus/deu-cat
metrics:
- "bleu"
- "meteor"
- "chrf"
- "ter"
model-index:
- name: m2m100_418M_ft_de_ca
results:
- task:
type: translation
dataset:
type: flores
name: Flores
metrics:
- name: BLEU
type: bleu
value: 28.5
- task:
type: translation
dataset:
type: wmt/wmt13
name: WMT13
metrics:
- name: BLEU
type: bleu
value: 22.9
- task:
type: translation
dataset:
type: flores
name: Flores
metrics:
- name: TER
type: ter
value: 60.7
- task:
type: translation
dataset:
type: wmt/wmt13
name: WMT13
metrics:
- name: TER
type: ter
value: 71.0
- task:
type: translation
dataset:
type: flores
name: Flores
metrics:
- name: METEOR
type: meteor
value: 55.9
- task:
type: translation
dataset:
type: wmt/wmt13
name: WMT13
metrics:
- name: METEOR
type: meteor
value: 49.5
- task:
type: translation
dataset:
type: flores
name: Flores
metrics:
- name: chrF
type: chrf
value: 55.9
- task:
type: translation
dataset:
type: wmt/wmt13
name: WMT13
metrics:
- name: chrF
type: chrf
value: 54.1
---
## m2m100 fine-tuned on Softcatalà's parallel Catalan-German dataset for machine translation
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
## Model description
This model was obtained by fine-tuning the [m2m100_418M](https://huggingface.co/facebook/m2m100_418M) model on a De-Ca machine translation task with the [Softcatalà Catalan-German parallel corpus](https://github.com/Softcatala/parallel-catalan-corpus/tree/master/deu-cat) dataset, with sentences deduplicated and filtered by the [GEnCaTa quality filter](https://huggingface.co/projecte-aina/mbert-base-gencata). We also evaluate it on a general-domain multilingual testset [Flores-200](https://github.com/facebookresearch/flores) and [WMT13](https://www.statmt.org/wmt13/).
## Intended Uses and Limitations
You can use this model for machine translation from German to Catalan.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/2m100-418M-ft-de-ca")
model = AutoModelForSeq2SeqLM.from_pretrained("projecte-aina/2m100-418M-ft-de-ca")
```
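As with the Chinese–Catalan model, translation requires setting the source language and forcing the Catalan target token. A minimal sketch follows; the example sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/m2m100-418M-ft-de-ca")
model = AutoModelForSeq2SeqLM.from_pretrained("projecte-aina/m2m100-418M-ft-de-ca")

tokenizer.src_lang = "de"
text = "Barcelona ist die Hauptstadt Kataloniens."  # illustrative sentence
encoded = tokenizer(text, return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("ca"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```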
## Training
### Training Data
As a data for fine-tuning we used the [Softcatalà Catalan-German parallel corpus](https://github.com/Softcatala/parallel-catalan-corpus/tree/master/deu-cat) dataset, with sentences deduplicated and filtered by the [GEnCaTa quality filter](https://huggingface.co/projecte-aina/mbert-base-gencata).
### Training Procedure
#### Tokenization
The original [m2m100_418M](https://huggingface.co/facebook/m2m100_418M) model's sentencepiece tokenizer was used.
#### Hyperparameters
The model was trained for 2 epochs with the default parameters and \\(LR = 2\mathrm{e}{-5}\\).
## Evaluation
### Variable and Metrics
We use the BLEU score for evaluation on test sets: [Flores-200](https://github.com/facebookresearch/flores) and [WMT13](https://www.statmt.org/wmt13/).
### Evaluation Results
Below are the evaluation results on the machine translation from German to Catalan compared with the original m2m100 on a testset: [Flores-200](https://github.com/facebookresearch/flores).
|Test set | Model | BLEU | TER | METEOR | chrF
| ------------|-------------| -----| -----| -----| -----|
|Flores-200 | m2m100 | 26.6 | 63.1 | 54.0 | 53.5 |
| | m2m100-418M-ft-de-ca | **28.5** | **60.7** | **55.9** | **55.9** |
|WMT13 | m2m100 | 21.8 | 72.8 | 48.0 | 53.5 |
| | m2m100-418M-ft-de-ca | **22.9** | **71.0** | **49.5** | **54.1** |
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
|
Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU | 1a011cfdd489add090755bc0d966f9f835fb6e1a | 2021-09-09T21:24:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"multilingual",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU | 122 | 2 | transformers | 4,294 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-NORTH_EU-NORTH_EU
* source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv
* target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv
* OPUS readme: [de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.eval.txt)
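A hedged sketch of how the `>>id<<` token is typically used with Marian models; the example sentence and the `>>sv<<` (Swedish) target id are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# the >>id<< prefix selects the target language (here Swedish)
src_text = [">>sv<< Das ist ein kleiner Test."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```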
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.sv | 48.1 | 0.663 |
|
KoboldAI/fairseq-dense-2.7B-Janeway | f5a0d4c5b1a6b6588bba5374f67cf5585533065c | 2022-04-16T16:46:57.000Z | [
"pytorch",
"xglm",
"text-generation",
"en",
"transformers",
"license:mit"
] | text-generation | false | KoboldAI | null | KoboldAI/fairseq-dense-2.7B-Janeway | 122 | 1 | transformers | 4,295 | ---
language: en
license: mit
---
# Fairseq-dense 2.7B - Janeway
## Model Description
Fairseq-dense 2.7B-Janeway is a finetune created using Fairseq's MoE dense model.
## Training data
The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical as dataset used by GPT-Neo-2.7B-Janeway.
Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-2.7B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\n"It\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
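Since parts of the training data were prepended with genre tags (see above), a prompt may optionally start with one. An illustrative, untested example:

```py
>>> generator("[Genre: sci-fi] Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
```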
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
### BibTeX entry and citation info
```
Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
``` |
akhooli/mbart-large-cc25-en-ar | 5e05f431e7d8e6778911f433499baabf3936e1e2 | 2020-12-11T21:32:08.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"en",
"ar",
"transformers",
"translation",
"license:mit",
"autotrain_compatible"
] | translation | false | akhooli | null | akhooli/mbart-large-cc25-en-ar | 122 | 1 | transformers | 4,296 | ---
tags:
- translation
language:
- en
- ar
license: mit
---
### mbart-large-en-ar
This is mbart-large-cc25, finetuned on a subset of the UN corpus for en_ar.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the model was trained on a limited dataset and is not fully trained (do not use in production).
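A hedged sketch of typical mbart-large-cc25 translation inference with this checkpoint; the generation settings may differ from those in the notebook, and the example sentence is illustrative.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "akhooli/mbart-large-cc25-en-ar"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="en_XX", tgt_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("The Secretary-General welcomed the agreement.", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["ar_AR"],  # force Arabic as the target
    num_beams=4,
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```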
|
arm-on/BERTweet-FA | 40c3d9b30af98e81eb595208dd482ef734187add | 2022-05-06T08:24:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"fa",
"arxiv:1810.04805",
"transformers",
"BERTweet",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | arm-on | null | arm-on/BERTweet-FA | 122 | 3 | transformers | 4,297 | ---
license: apache-2.0
language: fa
widget:
- text: "این بود [MASK] های ما؟"
- text: "داداچ داری [MASK] میزنی"
- text: 'به علی [MASK] میگفتن جادوگر'
- text: 'آخه محسن [MASK] هم شد خواننده؟'
- text: 'پسر عجب [MASK] زد'
tags:
- BERTweet
model-index:
- name: BERTweet-FA
results: []
---
BERTweet-FA: A pre-trained language model for Persian (a.k.a Farsi) Tweets
---
BERTweet-FA is a transformer-based model trained on 20,665,964 Persian tweets. The model was trained on this data for only 1 epoch (322,906 steps), yet it can already capture the meaning of most conversational sentences used in Farsi. Note that the architecture of this model follows the original BERT [[Devlin et al.](https://arxiv.org/abs/1810.04805)].
How to use the Model
---
```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained('arm-on/BERTweet-FA')
tokenizer = BertTokenizer.from_pretrained('arm-on/BERTweet-FA')
fill_sentence = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_sentence('اینجا جمله مورد نظر خود را بنویسید و کلمه موردنظر را [MASK] کنید')
```
The Training Data
---
The first version of the model was trained on the "[Large Scale Colloquial Persian Dataset](https://iasbs.ac.ir/~ansari/lscp/)", containing more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020.
Evaluation
---
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:-----:|
| 0.0036 | 1.0 | 322906 |
Contributors
---
- [Arman Malekzadeh](http://ce.sharif.edu/~malekzaadeh/), PhD Student in AI @ Sharif University of Technology [[Linkedin](https://www.linkedin.com/in/arman-malekzadeh/)] [[Github](https://github.com/arm-on)] |
gilf/english-yelp-sentiment | 933119219cfdf286ddd370b39af5b207ee145822 | 2022-06-15T09:19:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | gilf | null | gilf/english-yelp-sentiment | 122 | 1 | transformers | 4,298 | Entry not found |
mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt | ae3de34287ea2a58cac79d045a5dc7d88ae8cb50 | 2021-05-20T00:21:55.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"pt",
"dataset:squad_v1_pt",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt | 122 | 2 | transformers | 4,299 | ---
language: pt
datasets:
- squad_v1_pt
widget:
- text: "Com que licença posso usar o conteúdo da wikipedia?"
context: "A Wikipédia é um projeto de enciclopédia colaborativa, universal e multilíngue estabelecido na internet sob o princípio wiki. Tem como propósito fornecer um conteúdo livre, objetivo e verificável, que todos possam editar e melhorar. O projeto é definido pelos princípios fundadores. O conteúdo é disponibilizado sob a licença Creative Commons BY-SA e pode ser copiado e reutilizado sob a mesma licença — mesmo para fins comerciais — desde que respeitando os termos e condições de uso."
license: apache-2.0
---
# bert-base-portuguese-cased fine-tuned on SQuAD-v1-pt |
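A minimal usage sketch with the question-answering pipeline, reusing the example from the widget above:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt",
    tokenizer="mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt",
)

result = qa(
    question="Com que licença posso usar o conteúdo da wikipedia?",
    context=(
        "A Wikipédia é um projeto de enciclopédia colaborativa, universal e multilíngue "
        "estabelecido na internet sob o princípio wiki. O conteúdo é disponibilizado sob a "
        "licença Creative Commons BY-SA e pode ser copiado e reutilizado sob a mesma licença."
    ),
)
print(result["answer"], result["score"])
```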