| modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable ⌀) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable ⌀) | likes (float64, 0–712, nullable ⌀) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HaoHu/vit-base-patch16-224-in21k-classify-4scence | 71d615a3bd25998d3d8b4684b9feb08dbaf77949 | 2022-07-24T16:02:55.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"license:other"
] | image-classification | false | HaoHu | null | HaoHu/vit-base-patch16-224-in21k-classify-4scence | 70 | null | transformers | 5,400 | ---
license: other
---
This model was trained for the contest.
The original dataset is available at:
Link: https://pan.baidu.com/s/1pr094NZ2QMj3nLy12gfa6g (password: kb7a) |
Akashpb13/xlsr_kurmanji_kurdish | cfa28c05cf423a45538613190880b3aa335e11ed | 2022-07-24T11:13:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"kmr",
"ku",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/xlsr_kurmanji_kurdish | 69 | 2 | transformers | 5,401 | ---
language:
- kmr
- ku
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- kmr
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/xlsr_kurmanji_kurdish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: kmr
metrics:
- name: Test WER
type: wer
value: 0.33073206986250464
- name: Test CER
type: cer
value: 0.08035244447163924
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: kmr
metrics:
- name: Test WER
type: wer
value: 0.33073206986250464
- name: Test CER
type: cer
value: 0.08035244447163924
---
# Akashpb13/xlsr_kurmanji_kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 Kurmanji Kurdish dataset.
It achieves the following results on the evaluation set (10 percent of the training data merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.292389
- Wer: 0.388585
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Kurmanji Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
After concatenating all the datasets provided in Common Voice 7.0, only utterances with more upvotes than downvotes were kept, and duplicates were removed.
## Training procedure
To create the training dataset, all available splits were appended and a 90-10 train-validation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 16
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.382500 | 3.183725 | 1.000000 |
| 400 | 2.870200 | 0.996664 | 0.781117 |
| 600 | 0.609900 | 0.333755 | 0.445052 |
| 800 | 0.326800 | 0.305729 | 0.403157 |
| 1000 | 0.255000 | 0.290734 | 0.391621 |
| 1200 | 0.226300 | 0.292389 | 0.388585 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.1
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/xlsr_kurmanji_kurdish --dataset mozilla-foundation/common_voice_8_0 --config kmr --split test
```
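Beyond the evaluation script, a minimal transcription sketch with the 🤗 pipeline API is shown below. The file name `audio.wav` and the 16 kHz mono format are assumptions, not part of the original card:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline (greedy CTC decoding, no language model).
asr = pipeline(
    "automatic-speech-recognition",
    model="Akashpb13/xlsr_kurmanji_kurdish",
)

# Transcribe a local 16 kHz mono recording (hypothetical file name).
print(asr("audio.wav")["text"])
```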
|
Helsinki-NLP/opus-mt-gv-en | c02ace54b4c05a93551a57b72abffe5410883453 | 2021-09-09T21:59:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gv",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gv-en | 69 | null | transformers | 5,402 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gv-en
* source languages: gv
* target languages: en
* OPUS readme: [gv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.gv.en | 38.9 | 0.668 |
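As a minimal usage sketch (not part of the original card), the released checkpoint can be loaded with the standard MarianMT classes; the example sentence is illustrative only:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gv-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Manx sentence to English (illustrative input).
batch = tokenizer(["Moghrey mie!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```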
|
M-FAC/bert-mini-finetuned-mrpc | df30494cb0306e73f26b01f2a262500e3399eb19 | 2021-12-13T08:15:19.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
] | text-classification | false | M-FAC | null | M-FAC/bert-mini-finetuned-mrpc | 69 | null | transformers | 5,403 | # BERT-mini model finetuned with M-FAC
This model is finetuned on the MRPC dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MRPC validation set:
```bash
f1 = 86.51
accuracy = 81.12
```
Mean and standard deviation for 5 runs on MRPC validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 84.57 ± 0.36| 76.57 ± 0.80|
| M-FAC | 85.06 ± 1.63 | 78.87 ± 2.33 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 1234 \
--model_name_or_path prajjwal1/bert-mini \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
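For completeness, a minimal inference sketch for the published checkpoint is given below. The sentence pair is illustrative, and the label-to-name mapping follows the usual GLUE/MRPC convention, which is an assumption since the card does not state it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("M-FAC/bert-mini-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("M-FAC/bert-mini-finetuned-mrpc")

# MRPC is a sentence-pair task: are the two sentences paraphrases of each other?
inputs = tokenizer(
    "The company said revenue rose 10 percent.",
    "Revenue increased by 10 percent, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index 1 ~ "paraphrase" under the usual MRPC label order (assumed)
```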
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
airesearch/wangchanberta-base-wiki-syllable | c21b6b012537891233bb454df05fea4deac0bdae | 2021-09-11T09:38:36.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"transformers",
"autotrain_compatible"
] | fill-mask | false | airesearch | null | airesearch/wangchanberta-base-wiki-syllable | 69 | null | transformers | 5,404 | ---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-syllable`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as descibed in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
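Besides the notebook, a minimal masked-language-modeling sketch is shown below (the Thai example sentence is illustrative, and it is assumed that the bundled tokenizer loads via the pipeline and defines a mask token):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="airesearch/wangchanberta-base-wiki-syllable",
)

# Predict the masked span in a Thai sentence (illustrative input).
masked = f"ผมชอบกิน{fill_mask.tokenizer.mask_token}มาก"
print(fill_mask(masked))
```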
<br>
## Training data
`wangchanberta-base-wiki-syllable` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles from 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). Lists and tables were excluded.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as `syllable` from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of word-level tokens in the vocabulary is 59,235.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. Sentences that cross the 512-token boundary are split, with an additional token inserted as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with a `<mask>` token. Out of the selected 15%, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially: 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
akhooli/xlm-r-large-arabic-toxic | c87447910b4639a701ac961cd188c96cb4f11263 | 2020-12-11T21:32:20.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ar",
"en",
"transformers",
"license:mit"
] | text-classification | false | akhooli | null | akhooli/xlm-r-large-arabic-toxic | 69 | null | transformers | 5,405 | ---
language:
- ar
- en
license: mit
---
### xlm-r-large-arabic-toxic (toxic/hate speech classifier)
Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-Roberta-Large.
Zero shot classification of other languages (also works in mixed languages - ex. Arabic & English).
Usage and further info: see last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
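A minimal classification sketch (not part of the original card) is shown below; the example comments are illustrative, and the card defines Label_0 as non-toxic and Label_1 as toxic:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="akhooli/xlm-r-large-arabic-toxic",
)

# Arabic and mixed Arabic/English inputs are both supported according to the card.
examples = ["هذا المنتج رائع جدا", "you are a horrible person"]
print(classifier(examples))
```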
|
huggingartists/platina | 82f4531528d68ed7b914a5611d070c99c9626360 | 2021-09-29T17:06:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/platina",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/platina | 69 | null | transformers | 5,406 | ---
language: en
datasets:
- huggingartists/platina
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b12dc90e6f405684ef6b74c9de92fdcd.853x853x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Платина (Platina)</div>
<a href="https://genius.com/artists/platina">
<div style="text-align: center; font-size: 14px;">@platina</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Платина (Platina).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/platina).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/platina")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2ih365j7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Платина (Platina)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1quasiz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1quasiz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/platina')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/platina")
model = AutoModelWithLMHead.from_pretrained("huggingartists/platina")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
jcblaise/electra-tagalog-base-cased-discriminator | 8a8f3ffd0ed8fbfb8b7cfaec8ed45e99e90adbb4 | 2021-11-12T03:23:38.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
] | null | false | jcblaise | null | jcblaise/electra-tagalog-base-cased-discriminator | 69 | null | transformers | 5,407 | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# ELECTRA Tagalog Base Cased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
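A minimal loading sketch for fine-tuning the discriminator on a downstream classification task is shown below; the number of labels is an illustrative assumption:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "jcblaise/electra-tagalog-base-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach a fresh classification head for fine-tuning; num_labels=2 is an example value.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
```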
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
pparasurama/raceBERT | dc586588e7bd022cb74f03b511b6d7cc075702ad | 2021-12-07T01:21:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | pparasurama | null | pparasurama/raceBERT | 69 | 1 | transformers | 5,408 | Entry not found |
pucpr/clinicalnerpt-quantitative | c1e9dc6fb2d579485140c7655ce7a4badea1da87 | 2021-10-13T09:31:50.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-quantitative | 69 | 3 | transformers | 5,409 | ---
language: "pt"
widget:
- text: "Paciente faz uso de losartana 50mg, HCTZ 25mg DM ha 25 anos."
- text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Quantitative
The Quantitative NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 clinical-entity models (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs with the IOB2 format, starting from the BioBERTpt(all) model.
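A minimal tagging sketch using one of the card's widget sentences (the aggregation strategy is an assumption):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-quantitative",
    aggregation_strategy="simple",
)

# One of the example sentences from the model card widget.
print(ner("Paciente faz uso de losartana 50mg, HCTZ 25mg DM ha 25 anos."))
```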
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
sarahlintang/IndoBERT | b49b5ed020c51076acc24bb00f6b63f9161102ce | 2021-05-20T04:51:45.000Z | [
"pytorch",
"jax",
"bert",
"id",
"dataset:oscar",
"transformers"
] | null | false | sarahlintang | null | sarahlintang/IndoBERT | 69 | null | transformers | 5,410 | ---
language: id
datasets:
- oscar
---
# IndoBERT (Indonesian BERT Model)
## Model description
IndoBERT is a pre-trained language model based on the BERT architecture for the Indonesian language.
This model is the base-uncased version, which uses the bert-base config.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahlintang/IndoBERT")
model = AutoModel.from_pretrained("sarahlintang/IndoBERT")
tokenizer.encode("hai aku mau makan.")
[2, 8078, 1785, 2318, 1946, 18, 4]
```
## Training data
This model was pre-trained on 16 GB of raw text (~2 billion words) from the OSCAR corpus (https://oscar-corpus.com/).
This model follows the bert-base configuration and has a vocabulary size of 32,000.
## Training procedure
The training of the model has been performed using Google’s original Tensorflow code on eight core Google Cloud TPU v2.
We used a Google Cloud Storage bucket, for persistent storage of training data and models.
## Eval results
We evaluated this model on three Indonesian NLP downstream tasks:
- extractive summarization
- sentiment analysis
- part-of-speech tagging
This model was shown to outperform multilingual BERT on all three downstream tasks.
|
vitouphy/wav2vec2-xls-r-300m-english | 8d878c78d96fd2841116b7b894b93985edc8c716 | 2022-05-24T11:08:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"librispeech_asr",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vitouphy | null | vitouphy/wav2vec2-xls-r-300m-english | 69 | 1 | transformers | 5,411 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
- generated_from_trainer
- hf-asr-leaderboard
- librispeech_asr
- robust-speech-event
datasets:
- librispeech_asr
model-index:
- name: XLS-R-300M - English
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 12.29
- name: Test CER
type: cer
value: 3.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: en
metrics:
- name: Validation WER
type: wer
value: 36.75
- name: Validation CER
type: cer
value: 14.83
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
config: en
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 37.81
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: en
metrics:
- name: Test WER
type: wer
value: 38.8
---
# XLS-R-300M - English
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Wer: 0.1167
## Model description
More information needed
## Intended uses & limitations
More information needed
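While the card leaves this section open, a minimal transcription sketch would be the following; the file name `audio.wav` and the 16 kHz mono format are assumptions:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vitouphy/wav2vec2-xls-r-300m-english",
)

# Transcribe a local 16 kHz mono recording (hypothetical file name).
print(asr("audio.wav")["text"])
```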
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9365 | 4.17 | 500 | 2.9398 | 0.9999 |
| 1.5444 | 8.33 | 1000 | 0.5947 | 0.4289 |
| 1.1367 | 12.5 | 1500 | 0.2751 | 0.2366 |
| 0.9972 | 16.66 | 2000 | 0.2032 | 0.1797 |
| 0.9118 | 20.83 | 2500 | 0.1786 | 0.1479 |
| 0.8664 | 24.99 | 3000 | 0.1641 | 0.1408 |
| 0.8251 | 29.17 | 3500 | 0.1537 | 0.1267 |
| 0.793 | 33.33 | 4000 | 0.1525 | 0.1244 |
| 0.785 | 37.5 | 4500 | 0.1470 | 0.1184 |
| 0.7612 | 41.66 | 5000 | 0.1446 | 0.1177 |
| 0.7478 | 45.83 | 5500 | 0.1449 | 0.1176 |
| 0.7443 | 49.99 | 6000 | 0.1444 | 0.1167 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
IDEA-CCNL/Erlangshen-Longformer-330M | a50c01a4c08401689f14eda06f2e4da996db90a0 | 2022-07-17T09:22:17.000Z | [
"pytorch",
"longformer",
"zh",
"transformers",
"license:apache-2.0"
] | null | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-Longformer-330M | 69 | null | transformers | 5,412 | ---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Longformer model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We replace the original position encoding of Longformer with [rotary position embedding](https://github.com/ZhuiyiTechnology/roformer) and use 180 GB of data for pretraining.
## Usage
The Longformer-large architecture is not available in [Transformers](https://github.com/huggingface/transformers); you can run the following code to obtain the Longformer implementation from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### Load Model
```python
from fengshen import LongformerModel
from fengshen import LongformerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-330M")
config = LongformerConfig.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-330M")
model = LongformerModel.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-330M")
```
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
dimbyTa/rock-challenge-DeiT-solo | 38067d789dd7a7cef2a1b7086a5461620719a8d9 | 2022-04-23T12:05:21.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | dimbyTa | null | dimbyTa/rock-challenge-DeiT-solo | 69 | null | transformers | 5,413 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rock-challenge-DeiT-solo
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8866301774978638
---
# rock-challenge-DeiT-solo
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### fines

#### large

#### medium

#### pellets
 |
iyzg/computer-stuff | ab492de29a4d712c6d250c629b435970d8a50fab | 2022-05-13T21:55:51.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | iyzg | null | iyzg/computer-stuff | 69 | null | transformers | 5,414 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: computer-stuff
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8839285969734192
---
# computer-stuff
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### computer mouse

#### desktop

#### keyboard

#### laptop

#### monitor
 |
dinalzein/xlm-roberta-base-finetuned-language-identification | c369439fa6258c051d624f2da8903481a4830033 | 2022-05-25T09:52:27.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | dinalzein | null | dinalzein/xlm-roberta-base-finetuned-language-identification | 69 | null | transformers | 5,415 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-language-identification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-language-identification
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification dataset](https://huggingface.co/datasets/papluca/language-identification).
It achieves the following results on the evaluation set:
- Loss: 0.0436
- Accuracy: 0.9959
## Model description
The model used in this task is XLM-RoBERTa, a transformer model with a classification head on top.
## Intended uses & limitations
It identifies the language a document is written in and supports 20 different languages:
Arabic (ar), Bulgarian (bg), German (de), Modern Greek (el), English (en), Spanish (es), French (fr), Hindi (hi), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru), Swahili (sw), Thai (th), Turkish (tr), Urdu (ur), Vietnamese (vi), Chinese (zh)
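A minimal identification sketch (the example sentences are illustrative only):
```python
from transformers import pipeline

language_id = pipeline(
    "text-classification",
    model="dinalzein/xlm-roberta-base-finetuned-language-identification",
)

# Each prediction returns one of the 20 supported language codes.
print(language_id(["Bonjour tout le monde", "Guten Morgen"]))
```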
## Training and evaluation data
The model is fine-tuned on the [Language Identification dataset](https://huggingface.co/datasets/papluca/language-identification), a corpus consisting of text in 20 different languages. The dataset is split into 7000 sentences for training, 1000 for validation, and 1000 for testing. The accuracy on the test set is 99.5%.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0493 | 1.0 | 35000 | 0.0407 | 0.9955 |
| 0.018 | 2.0 | 70000 | 0.0436 | 0.9959 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jamie613/mt5_fill_puntuation | a3041da41ee127fe2ef62a64fbadb7019c182e6e | 2022-07-11T19:00:55.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jamie613 | null | jamie613/mt5_fill_puntuation | 69 | null | transformers | 5,416 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5_fill_puntuation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_fill_puntuation
This model is a fine-tuned version of [jamie613/mt5_fill_puntuation](https://huggingface.co/jamie613/mt5_fill_puntuation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0717
## Model description
More information needed
## Intended uses & limitations
More information needed
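Since the card does not document usage, the sketch below only shows how the checkpoint can be loaded with the standard text2text pipeline; the input format (unpunctuated Chinese text) is an assumption:
```python
from transformers import pipeline

restorer = pipeline(
    "text2text-generation",
    model="jamie613/mt5_fill_puntuation",
)

# Hypothetical unpunctuated input; the exact prompt format expected by the model is not documented.
print(restorer("今天天氣很好我們一起去公園散步吧"))
```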
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0918 | 0.04 | 500 | 0.0803 |
| 0.0894 | 0.07 | 1000 | 0.0773 |
| 0.0905 | 0.11 | 1500 | 0.0822 |
| 0.0908 | 0.15 | 2000 | 0.0833 |
| 0.0868 | 0.18 | 2500 | 0.0840 |
| 0.09 | 0.22 | 3000 | 0.0811 |
| 0.0868 | 0.26 | 3500 | 0.0735 |
| 0.0869 | 0.29 | 4000 | 0.0805 |
| 0.0874 | 0.33 | 4500 | 0.0742 |
| 0.088 | 0.37 | 5000 | 0.0749 |
| 0.0884 | 0.4 | 5500 | 0.0730 |
| 0.0861 | 0.44 | 6000 | 0.0749 |
| 0.0804 | 0.48 | 6500 | 0.0739 |
| 0.0845 | 0.51 | 7000 | 0.0717 |
| 0.0861 | 0.55 | 7500 | 0.0743 |
| 0.0812 | 0.59 | 8000 | 0.0726 |
| 0.0824 | 0.62 | 8500 | 0.0729 |
| 0.0836 | 0.66 | 9000 | 0.0751 |
| 0.079 | 0.7 | 9500 | 0.0731 |
| 0.0806 | 0.73 | 10000 | 0.0725 |
| 0.0798 | 0.77 | 10500 | 0.0749 |
| 0.0794 | 0.81 | 11000 | 0.0725 |
| 0.0795 | 0.84 | 11500 | 0.0726 |
| 0.0755 | 0.88 | 12000 | 0.0732 |
| 0.0815 | 0.92 | 12500 | 0.0722 |
| 0.0776 | 0.95 | 13000 | 0.0719 |
| 0.0838 | 0.99 | 13500 | 0.0717 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
waboucay/camembert-large-finetuned-xnli_fr_3_classes-finetuned-rua_wl_3_classes | 0c3176815b3cc89dc21a269931c9061f8971348b | 2022-06-20T09:34:17.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-xnli_fr_3_classes-finetuned-rua_wl_3_classes | 69 | null | transformers | 5,417 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 72.4 | 72.2 |
| test | 72.8 | 72.5 | |
AI4Sec/cyner-xlm-roberta-base | 1e400729cac0561170f8f441d35791768b661201 | 2022-02-22T16:23:17.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | AI4Sec | null | AI4Sec/cyner-xlm-roberta-base | 68 | null | transformers | 5,418 | ---
license: mit
---
|
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2 | 4f806dbc260d6ce3d6aed0cbf875f668cc1b5480 | 2021-09-02T08:31:10.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | DataikuNLP | null | DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2 | 68 | null | sentence-transformers | 5,419 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
**This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) from sentence-transformers at the specific commit `d66eff4d8a8598f264f166af8db67f7797164651`.**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
InfoCoV/Cro-CoV-cseBERT | 1dd9ad73fdaec21bae223e8ea328705691159763 | 2021-12-09T12:39:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | InfoCoV | null | InfoCoV/Cro-CoV-cseBERT | 68 | null | transformers | 5,420 | ## Usage:
```
from sentence_transformers import models
from sentence_transformers import SentenceTransformer
word_embedding_model = models.Transformer('Cro-CoV-cseBERT')
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model], device='') ## device = 'cuda' or 'cpu'
texts_emb = model.encode(texts)
```
## Datasets:
https://github.com/InfoCoV/InfoCoV
## Paper:
Please cite https://www.mdpi.com/2076-3417/11/21/10442 |
Jean-Baptiste/roberta-ticker | 13ee076648e9fb3dfcdcca2e475efae71cf2aebe | 2021-08-30T12:58:23.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"transformers",
"autotrain_compatible"
] | token-classification | false | Jean-Baptiste | null | Jean-Baptiste/roberta-ticker | 68 | null | transformers | 5,421 | ---
language: en
widget:
- text: "I am going to buy 100 shares of cake tomorrow"
---
# roberta-ticker: a model fine-tuned from RoBERTa to detect financial tickers
## Introduction
This is a model specifically designed to identify tickers in text.
The model was trained on a transformed dataset derived from the following Kaggle dataset:
https://www.kaggle.com/omermetinn/tweets-about-the-top-companies-from-2015-to-2020
## How to use roberta-ticker with HuggingFace
##### Load roberta-ticker and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-ticker")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-ticker")
##### Process text sample
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("I am going to buy 100 shares of cake tomorrow")
[{'entity_group': 'TICKER',
'score': 0.9612462520599365,
'word': ' cake',
'start': 32,
'end': 36}]
nlp("I am going to eat a cake tomorrow")
[]
```
## Model performances
```
precision: 0.914157
recall: 0.788824
f1: 0.846878
```
|
Langboat/mengzi-oscar-base | 71863d5bfb453b86f08da6e205c6202a1d5a7373 | 2021-10-14T02:17:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Langboat | null | Langboat/mengzi-oscar-base | 68 | 4 | transformers | 5,422 | ---
language:
- zh
license: apache-2.0
---
# Mengzi-oscar-base (Chinese Multi-modal pre-training model)
Mengzi-oscar is trained based on the multi-modal pre-training model [Oscar](https://github.com/microsoft/Oscar) and is initialized using [Mengzi-Bert-Base](https://github.com/Langboat/Mengzi). 3.7M pairs of images and texts were used, including 0.7M Chinese image-caption pairs and 3M Chinese image-question pairs, covering a total of 0.22M different images.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
LeBenchmark/wav2vec2-FR-7K-base | 17c16a4c81e1305df482a54c344c5bf490be8ab2 | 2021-11-23T18:01:03.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | LeBenchmark | null | LeBenchmark/wav2vec2-FR-7K-base | 68 | null | transformers | 5,423 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd)
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Two different wav2vec2 architectures *Base* and *Large* are coupled with our small (1K), medium (3K), and large (7K) corpus. A larger one should come later. In short:
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
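For feature extraction directly in 🤗 Transformers, a minimal sketch is shown below; the default feature-extractor settings and the random waveform standing in for real 16 kHz French speech are assumptions:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Default wav2vec2-style preprocessing (the checkpoint may also ship its own preprocessor config).
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-7K-base")

# One second of dummy audio at 16 kHz (replace with a real French recording).
waveform = np.random.randn(16000).astype("float32")
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```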
## Referencing LeBenchmark
```
@article{Evain2021LeBenchmarkAR,
title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech},
author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier},
journal={ArXiv},
year={2021},
volume={abs/2104.11462}
}
```
|
SajjadAyoubi/clip-fa-text | ad048ce7b7e62cebc7fdbe96f9e745f79cb6823d | 2021-12-22T19:02:56.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2103.00020",
"transformers"
] | feature-extraction | false | SajjadAyoubi | null | SajjadAyoubi/clip-fa-text | 68 | null | transformers | 5,424 | # CLIPfa: Connecting Farsi Text and Images
OpenAI released [`the paper Learning Transferable Visual Models From Natural Language Supervision`](https://arxiv.org/abs/2103.00020) in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images, by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a vision encoder and a text encoder. These were trained on 400 Million images and corresponding captions. We have trained a Farsi (Persian) version of OpenAI's CLIP on a dataset of 400,000 (image, text) pairs. We used [`Farahani's RoBERTa-fa`](https://huggingface.co/m3hrdadfi/roberta-zwnj-wnli-mean-tokens) as the text encoder and [`ViT`](https://huggingface.co/openai/clip-vit-base-patch32) as the vision encoder from Original CLIP and finetuned them.
- It should be noted that only 400K pairs were used for this training, whereas 400 million pairs were used for the original CLIP. Also, the original CLIP training took 30 days across 592 V100 GPUs.
## How to use?
Both models generate vectors with 768 dimensions.
```python
from transformers import CLIPVisionModel, RobertaModel, AutoTokenizer, CLIPFeatureExtractor
import PIL.Image  # needed below to open the example image
# download pre-trained models
vision_encoder = CLIPVisionModel.from_pretrained('SajjadAyoubi/clip-fa-vision')
preprocessor = CLIPFeatureExtractor.from_pretrained('SajjadAyoubi/clip-fa-vision')
text_encoder = RobertaModel.from_pretrained('SajjadAyoubi/clip-fa-text')
tokenizer = AutoTokenizer.from_pretrained('SajjadAyoubi/clip-fa-text')
# define input image and input text
text = 'something'
image = PIL.Image.open('my_favorite_image.jpg')
# compute embeddings
text_embedding = text_encoder(**tokenizer(text,
return_tensors='pt')).pooler_output
image_embedding = vision_encoder(**preprocessor(image,
return_tensors='pt')).pooler_output
text_embedding.shape == image_embedding.shape
```
## Demo:
The followings are just some use cases of CLIPfa on 25K [`Unsplash images`](https://github.com/unsplash/datasets)
- use `pip install -q git+https://github.com/sajjjadayobi/clipfa.git`
```python
from clipfa import CLIPDemo
demo = CLIPDemo(vision_encoder, text_encoder, tokenizer)
demo.compute_text_embeddings(['گاو' ,'اسب' ,'ماهی'])
demo.compute_image_embeddings(test_df.image_path.to_list())
```
## Online Demo: [CLIPfa at Huggingface🤗 spaces](https://huggingface.co/spaces/SajjadAyoubi/CLIPfa-Demo)
We used a small set of images (25K) to keep this app almost real-time, but it's obvious that the quality of image search depends heavily on the size of the image database.
> Made with ❤️ in my basement🤫
|
Wikidepia/indobert-lite-squad | 9d94c6279e1488ae8ac75514fa1856b41425e2d4 | 2021-03-31T13:26:55.000Z | [
"pytorch",
"albert",
"question-answering",
"id",
"transformers",
"autotrain_compatible"
] | question-answering | false | Wikidepia | null | Wikidepia/indobert-lite-squad | 68 | 1 | transformers | 5,425 | ---
language: id
widget:
- text: "Kapan Einstein melepas kewarganegaraan Jerman?"
context: "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900."
---
# IndoBERT-Lite base fine-tuned on Translated SQuAD v2
[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for **Q&A** downstream task.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
'Wikidepia/indobert-lite-squad'
)
qa_pipeline = pipeline(
"question-answering",
model="Wikidepia/indobert-lite-squad",
tokenizer=tokenizer
)
qa_pipeline({
'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```
# Output:
```json
{
"score":0.9799205660820007,
"start":147,
"end":151,
"answer":"1896"
}
```
README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
|
avichr/heBERT_NER | fc4a87775d8cb716b9803b73672ebcd53e933c4a | 2022-01-11T17:00:46.000Z | [
"pytorch",
"bert",
"token-classification",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible"
] | token-classification | false | avichr | null | avichr/heBERT_NER | 68 | null | transformers | 5,426 | # HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HeBERT is a Hebrew pretrained language model. It is based on [Google's BERT](https://arxiv.org/abs/1810.04805) architecture with the BERT-Base config. <br>
HeBERT was trained on three datasets:
1. A Hebrew version of [OSCAR](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/): ~650 MB of data, including over 63 million words and 3.8 million sentences
3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below).
## Named-entity recognition (NER)
The ability of the model to classify named entities in text, such as persons' names, organizations, and locations; tested on a labeled dataset from [Ben Mordecai and M Elhadad (2005)](https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/), and evaluated with F1-score.
### How to use
```python
from transformers import pipeline
# how to use?
NER = pipeline(
"token-classification",
model="avichr/heBERT_NER",
tokenizer="avichr/heBERT_NER",
)
NER('דויד לומד באוניברסיטה העברית שבירושלים')
```
## Other tasks
[**Emotion Recognition Model**](https://huggingface.co/avichr/hebEMO_trust).
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
<br>
[**Sentiment Analysis**](https://huggingface.co/avichr/heBERT_sentiment_analysis).
<br>
[**masked-LM model**](https://huggingface.co/avichr/heBERT) (can be fine-tunned to any down-stream task).
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={arXiv preprint arXiv:2102.01909},
year={2021}
}
```
[git](https://github.com/avichaychriqui/HeBERT)
|
flax-community/gpt-neo-1.3B-code-clippy | d4807ec0fc2ee779705487898e1a4e9ff2c9eb28 | 2021-07-11T21:30:30.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | flax-community | null | flax-community/gpt-neo-1.3B-code-clippy | 68 | 1 | transformers | 5,427 | Entry not found |
gealexandri/palobert-base-greek-uncased-v1 | 2a9e2202eef049421eeba46ecac7d2eb9a0b5d2c | 2021-08-18T07:25:30.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"el",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | gealexandri | null | gealexandri/palobert-base-greek-uncased-v1 | 68 | 1 | transformers | 5,428 | ---
language: el
---
# PaloBERT
## Model description
A Greek language model based on [RoBERTa](https://arxiv.org/abs/1907.11692)
## Training data
The training data is a corpus of 458,293 documents collected from Greek social media accounts. It also contains a GPT-2 tokenizer trained from scratch on the same corpus.
The training corpus has been collected and provided by [Palo LTD](http://www.paloservices.com/)
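## How to use
The card does not include a usage snippet, so the following is a minimal fill-mask sketch. The Greek example sentence is illustrative, and loading through `AutoTokenizer` / `AutoModelForMaskedLM` is an assumption (the repository ships its own tokenizer trained from scratch), so check the repository files if loading fails:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "gealexandri/palobert-base-greek-uncased-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Use the tokenizer's own mask token so the snippet does not depend on its exact string
text = f"Η Αθήνα είναι η {tokenizer.mask_token} της Ελλάδας."  # "Athens is the ... of Greece."
for prediction in fill_mask(text):
    print(prediction["token_str"], prediction["score"])
```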
## Eval results
### BibTeX entry and citation info
```bibtex
@Article{info12080331,
AUTHOR = {Alexandridis, Georgios and Varlamis, Iraklis and Korovesis, Konstantinos and Caridakis, George and Tsantilas, Panagiotis},
TITLE = {A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media},
JOURNAL = {Information},
VOLUME = {12},
YEAR = {2021},
NUMBER = {8},
ARTICLE-NUMBER = {331},
URL = {https://www.mdpi.com/2078-2489/12/8/331},
ISSN = {2078-2489},
DOI = {10.3390/info12080331}
}
```
|
huggingtweets/spacebananaza | 219f8d2f57689b645766789ad1fbf49d2347da5e | 2021-05-22T23:35:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/spacebananaza | 68 | null | transformers | 5,429 | ---
language: en
thumbnail: https://www.huggingtweets.com/spacebananaza/1617774737011/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1370063594520408064/bC3Dbs4D_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Tess 🤖 AI Bot </div>
<div style="font-size: 15px">@spacebananaza bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@spacebananaza's tweets](https://twitter.com/spacebananaza).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 593 |
| Retweets | 308 |
| Short tweets | 46 |
| Tweets kept | 239 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jzrx9ry/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @spacebananaza's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9vv9pgcs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9vv9pgcs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/spacebananaza')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
m3hrdadfi/albert-fa-base-v2-clf-persiannews | a2220e4c30ad56524876e75d618146f8fc956d6a | 2020-12-26T08:36:46.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2-clf-persiannews | 68 | 1 | transformers | 5,430 | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو (you can call it "little BERT")
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Economic
2. International
3. Political
4. Science Technology
5. Cultural Art
6. Sport
7. Medical
8. Social
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained as compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT |
|:-----------------:|:-----------------:|:-----------:|:-----:|
| Persian News | 97.01 | 97.19 | 95.79 |
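## How to use
A minimal usage sketch with the `transformers` pipeline; the Persian example sentence is illustrative and not taken from the dataset, and the predicted labels are the news categories listed in the table above:
```python
from transformers import pipeline

model_name = "m3hrdadfi/albert-fa-base-v2-clf-persiannews"

# Text-classification pipeline over the fine-tuned ALBERT-fa checkpoint
classifier = pipeline("text-classification", model=model_name, tokenizer=model_name)

# Illustrative Persian news-style sentence (roughly: "The national football team won its friendly match")
print(classifier("تیم ملی فوتبال ایران در دیدار دوستانه به پیروزی رسید."))
```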
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
m3hrdadfi/bert2bert-fa-news-headline | 7e55193b2a18886abc903639e5c8c1b88e316cda | 2020-12-11T21:50:16.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"fa",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | m3hrdadfi | null | m3hrdadfi/bert2bert-fa-news-headline | 68 | null | transformers | 5,431 | ---
language: fa
license: apache-2.0
tags:
- summarization
---
A Bert2Bert model trained on the VoA Persian Corpus (a medium-sized corpus of 7.9 million words, 2003-2008) that generates news headlines. The model achieved a 25.30 ROUGE-2 score.
For more detail, please follow the [News Headline Generation](https://github.com/m3hrdadfi/news-headline-generation) repo.
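## How to use
A minimal generation sketch, assuming the checkpoint loads as an `EncoderDecoderModel` with its saved generation settings; the Persian snippet and the beam-search parameters are illustrative, and depending on how the checkpoint was saved you may need to set `decoder_start_token_id` explicitly:
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "m3hrdadfi/bert2bert-fa-news-headline"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

# Illustrative Persian news snippet; in practice you would pass a full article body
article = "قیمت نفت در بازارهای جهانی امروز افزایش یافت و تحلیلگران دلیل آن را کاهش عرضه می‌دانند."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=32,       # headlines are short
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```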
## Eval results
The following table summarizes the ROUGE scores obtained by the Bert2Bert model.
| % | Precision | Recall | FMeasure |
|:-------:|:---------:|:------:|:--------:|
| ROUGE-1 | 43.78 | 45.52 | 43.54 |
| ROUGE-2 | 24.50 | 25.30* | 24.24 |
| ROUGE-L | 41.20 | 42.22 | 40.76 |
## Questions?
Post a Github issue on the [News Headline Generation](https://github.com/hooshvare/news-headline-generation/issues) repo.
|
mbeck/roberta-base-squad2 | 3bd552f8aba1fce5a55168821c0822c1417d72df | 2021-05-20T17:46:33.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mbeck | null | mbeck/roberta-base-squad2 | 68 | null | transformers | 5,432 | Entry not found |
prajjwal1/albert-base-v2-mnli | eabf43d13275c6db402b572349d7167393b9eab7 | 2021-10-05T17:55:48.000Z | [
"pytorch",
"albert",
"text-classification",
"arxiv:2110.01518",
"transformers"
] | text-classification | false | prajjwal1 | null | prajjwal1/albert-base-v2-mnli | 68 | null | transformers | 5,433 | If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). |
projecte-aina/roberta-base-ca-cased-pos | e9882f8a9d677b24076f267b16faf9d726cacd31 | 2022-06-14T15:13:27.000Z | [
"pytorch",
"roberta",
"token-classification",
"ca",
"dataset:universal_dependencies",
"arxiv:1907.11692",
"transformers",
"catalan",
"part of speech tagging",
"pos",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-cased-pos | 68 | null | transformers | 5,434 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "part of speech tagging"
- "pos"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "universal_dependencies"
metrics:
- f1
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-base-ca-cased-pos
results:
- task:
type: token-classification
dataset:
type: universal_dependencies
name: ancora-ca-pos
metrics:
- type: f1
value: 0.9893832385244624
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---
# Catalan BERTa (RoBERTa-base) finetuned for Part-of-speech-tagging (POS)
The **roberta-base-ca-cased-pos** is a Part-of-speech-tagging (POS) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
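## How to use
A minimal usage sketch with the `transformers` pipeline; `aggregation_strategy="first"` mirrors the inference settings declared in this card's metadata, and the example sentence is one of the widget sentences above:
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned Catalan POS model
pos_tagger = pipeline(
    "token-classification",
    model="projecte-aina/roberta-base-ca-cased-pos",
    aggregation_strategy="first",
)

print(pos_tagger("Em dic Lluïsa i visc a Santa Maria del Camí."))
```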
## Datasets
We used the Catalan POS dataset from the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
## Evaluation and results
We evaluated the _roberta-base-ca-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:
| Model | Ancora-ca-pos (F1) |
| ------------|:-------------|
| roberta-base-ca-cased-pos |**98.93** |
| mBERT | 98.82 |
| XLM-RoBERTa | 98.89 |
| WikiBERT-ca | 97.60 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Citing
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
``` |
pucpr/clinicalnerpt-laboratory | f07078e758e3ad375d2ea33065d24bd8d95f2b4d | 2021-10-13T09:32:17.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-laboratory | 68 | 3 | transformers | 5,435 | ---
language: "pt"
widget:
- text: "Exame de creatinina urinaria: 41, 8 mg/dL."
- text: "Parcial de urina com 150mg/dL de priteinas, ph de 5,0 e 1034 leucocitos."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Laboratory
The Laboratory NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs with the IOB2 format, starting from the BioBERTpt(all) model.
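## How to use
A minimal usage sketch with the `transformers` pipeline; the example sentence comes from this card's widget entries, and the aggregation strategy is an assumption used to merge sub-word tokens into entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-laboratory",
    tokenizer="pucpr/clinicalnerpt-laboratory",
    aggregation_strategy="simple",  # assumption: group sub-word tokens into entity spans
)

print(ner("Exame de creatinina urinaria: 41, 8 mg/dL."))
```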
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher | 7ef60acd1a14c67e772a7176147ff07fb907f680 | 2021-07-12T19:56:01.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"transformers",
"fuzzy-matching",
"fuzzy-search",
"entity-resolution",
"record-linking",
"structured-data-search"
] | feature-extraction | false | shahrukhx01 | null | shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher | 68 | null | transformers | 5,436 | ---
tags:
- fuzzy-matching
- fuzzy-search
- entity-resolution
- record-linking
- structured-data-search
---
A Siamese BERT architecture trained on character-level tokens for embedding-based fuzzy matching.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
word1 = "fuzzformer"
word1 = " ".join([char for char in word1]) ## divide the word to char level to fuzzy match
word2 = "fizzformer"
word2 = " ".join([char for char in word2]) ## divide the word to char level to fuzzy match
words = [word1, word2]
model = SentenceTransformer('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
fuzzy_embeddings = model.encode(words)
print("Fuzzy Match score:")
print(util.cos_sim(fuzzy_embeddings[0], fuzzy_embeddings[1]))
```
## Usage (HuggingFace Transformers)
```python
import torch
from transformers import AutoTokenizer, AutoModel
from torch import Tensor, device
def cos_sim(a: Tensor, b: Tensor):
"""
borrowed from sentence transformers repo
Computes the cosine similarity cos_sim(a[i], b[j]) for all i and j.
:return: Matrix with res[i][j] = cos_sim(a[i], b[j])
"""
if not isinstance(a, torch.Tensor):
a = torch.tensor(a)
if not isinstance(b, torch.Tensor):
b = torch.tensor(b)
if len(a.shape) == 1:
a = a.unsqueeze(0)
if len(b.shape) == 1:
b = b.unsqueeze(0)
a_norm = torch.nn.functional.normalize(a, p=2, dim=1)
b_norm = torch.nn.functional.normalize(b, p=2, dim=1)
return torch.mm(a_norm, b_norm.transpose(0, 1))
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Words we want fuzzy embeddings for
word1 = "fuzzformer"
word1 = " ".join([char for char in word1]) ## divide the word to char level to fuzzy match
word2 = "fizzformer"
word2 = " ".join([char for char in word2]) ## divide the word to char level to fuzzy match
words = [word1, word2]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
model = AutoModel.from_pretrained('shahrukhx01/paraphrase-mpnet-base-v2-fuzzy-matcher')
# Tokenize sentences
encoded_input = tokenizer(words, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
fuzzy_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Fuzzy Match score:")
print(cos_sim(fuzzy_embeddings[0], fuzzy_embeddings[1]))
```
## ACKNOWLEDGEMENT
A big thank you to [Sentence Transformers](https://github.com/UKPLab/sentence-transformers) as their implementation really expedited the implementation of Fuzzformer.
## Citation
To cite FuzzTransformer in your work, please use the following bibtex reference:
@misc{shahrukhkhan2021fuzzTransformer, <br>
author = {Shahrukh Khan},<br>
title = {FuzzTransformer: A character level embedding based Siamese transformer for fuzzy string matching.},<br>
year = 2021,<br>
publisher = {Coming soon},<br>
doi = {Coming soon},<br>
url = {Coming soon}<br>
}
|
shibing624/code-autocomplete-gpt2-base | 3304a8144629d3241c9921fae39cf1d73aadf977 | 2022-02-15T07:21:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"code",
"autocomplete",
"license:apache-2.0"
] | text-generation | false | shibing624 | null | shibing624/code-autocomplete-gpt2-base | 68 | 1 | transformers | 5,437 | ---
language:
- en
tags:
- code
- autocomplete
- pytorch
- en
license: "apache-2.0"
---
# GPT2 for Code AutoComplete Model
code-autocomplete, a code completion plugin for Python.
**code-autocomplete** can automatically complete the code of lines and blocks with GPT2.
## Usage
Open source repo: [code-autocomplete](https://github.com/shibing624/code-autocomplete), which supports the GPT2 model. Usage:
```python
from autocomplete.gpt2_coder import GPT2Coder
m = GPT2Coder("shibing624/code-autocomplete-gpt2-base")
print(m.generate('import torch.nn as')[0])
```
Also, use huggingface/transformers:
*Please use 'GPT2' related functions to load this model!*
```python
import os
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = GPT2Tokenizer.from_pretrained("shibing624/code-autocomplete-gpt2-base")
model = GPT2LMHeadModel.from_pretrained("shibing624/code-autocomplete-gpt2-base")
model.to(device)
prompts = [
"""from torch import nn
class LSTM(Module):
def __init__(self, *,
n_tokens: int,
embedding_size: int,
hidden_size: int,
n_layers: int):""",
"""import numpy as np
import torch
import torch.nn as""",
"import java.util.ArrayList",
"def factorial(n):",
]
for prompt in prompts:
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors='pt').to(device)
outputs = model.generate(input_ids=input_ids,
max_length=64 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
repetition_penalty=1.0,
do_sample=True,
num_return_sequences=1,
length_penalty=2.0,
early_stopping=True)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)
print("=" * 20)
```
output:
```shell
from torch import nn
class LSTM(Module):
def __init__(self, *,
n_tokens: int,
embedding_size: int,
hidden_size: int,
n_layers: int):
self.embedding_size = embedding_size
====================
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
```
Model files:
```
code-autocomplete-gpt2-base
├── config.json
├── merges.txt
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.json
```
### Train data
#### pytorch_awesome projects source code
download [code-autocomplete](https://github.com/shibing624/code-autocomplete),
```shell
cd autocomplete
python create_dataset.py
```
If you want to train the code-autocomplete GPT2 model, refer to [https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py](https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py)
### About GPT2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Citation
```latex
@misc{code-autocomplete,
author = {Xu Ming},
title = {code-autocomplete: Code AutoComplete with GPT model},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
url = {https://github.com/shibing624/code-autocomplete},
}
```
|
stanford-crfm/celebrimbor-gpt2-medium-x81 | 812043eceb4bef1b02ca07a90564cdd54d2a2d1f | 2022-06-20T11:22:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stanford-crfm | null | stanford-crfm/celebrimbor-gpt2-medium-x81 | 68 | null | transformers | 5,438 | Entry not found |
uhhlt/am-roberta | 71c9111a6c3537a6709e9459b54b9ef73a185561 | 2022-01-29T12:41:40.000Z | [
"pytorch",
"roberta",
"fill-mask",
"am",
"dataset:Amharic corpus from LT group, UHH",
"transformers",
"Amharic",
"Semetic language",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | uhhlt | null | uhhlt/am-roberta | 68 | 1 | transformers | 5,439 | ---
language:
- am
thumbnail: "https://raw.githubusercontent.com/uhh-lt/amharicmodels/master/logo.png?token=AAIB2MYMI6TSIK7CHWYGHKTBQ3FQS"
tags:
- Amharic
- Semetic language
license: "mit"
datasets:
- Amharic corpus from LT group, UHH
widget:
- text: "አበበ <mask> በላ ።"
- text: "የአገሪቱ አጠቃላይ የስንዴ አቅርቦት ሶስት አራተኛው የሚመረተው በአገር <mask> ነው።"
- text: "ነገ ጥሩ <mask> የምንሰማ ይመስለኛል ።"
- text: "ግንባታውን የሚያከናውነው ተቋራጭ በቅርቡ እንደሚገለጽ <mask> አቶ መላኩ፣ ሕንፃው 1.2 ቢሊዮን ብር የሚደርስ ወጪ እንደሚጠይቅ አስታውቀዋል ።"
---
[](https://github.com/uhh-lt/amharicmodels)
# Introduction
This is the Amharic RoBERTa transformer-based LM. It is part of the effort to build benchmark datasets and models for Amharic NLP.
# Examples
If you want to test the model in the `Hosted inference API`, copy the following texts to the box (right side)
Example 1:
`አበበ <mask> በላ ። `
Example 2:
`የአገሪቱ አጠቃላይ የስንዴ አቅርቦት ሶስት አራተኛው የሚመረተው በአገር <mask> ነው።`
The examples show possible words for the `fill in the blank (mask)` task.
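# How to use
The same examples can be run locally with the `transformers` fill-mask pipeline; this is a minimal sketch reusing Example 1 from above:
```python
from transformers import pipeline

# Fill-mask pipeline for the Amharic RoBERTa model
fill_mask = pipeline("fill-mask", model="uhhlt/am-roberta")

for prediction in fill_mask("አበበ <mask> በላ ።"):
    print(prediction["token_str"], prediction["score"])
```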
# Resources and publication
More resources regarding Amharic NLP are available [here](https://github.com/uhh-lt/amharicmodels)
If you use the model in your work, please cite the following [paper](https://www.mdpi.com/1999-5903/13/11/275)
```
@Article{fi13110275,
AUTHOR = {Yimam, Seid Muhie and Ayele, Abinew Ali and Venkatesh, Gopalakrishnan and Gashaw, Ibrahim and Biemann, Chris},
TITLE = {Introducing Various Semantic Models for Amharic: Experimentation and Evaluation with Multiple Tasks and Datasets},
JOURNAL = {Future Internet},
VOLUME = {13},
YEAR = {2021},
NUMBER = {11},
ARTICLE-NUMBER = {275},
URL = {https://www.mdpi.com/1999-5903/13/11/275},
ISSN = {1999-5903},
DOI = {10.3390/fi13110275}
}
```
|
ziedsb19/tunbert_zied | a8f4668e9b2cbf83827b2e6a74e4c791c51006ab | 2021-09-15T14:04:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ziedsb19 | null | ziedsb19/tunbert_zied | 68 | 2 | transformers | 5,440 |
## 🧐 About <a name = "about"></a>
tunbert_zied is a language model for the Tunisian dialect, based on an architecture similar to the RoBERTa model, created by Zied Sbabti.
The model was trained on over 600,000 phrases written in the Tunisian dialect.
## 🏁 Getting Started <a name = "getting_started"></a>
Load <strong>tunbert_zied</strong> and its sub-word tokenizer
Don't use the <em>AutoTokenizer.from_pretrained(...)</em> method to load the tokenizer; instead use the <em>BertTokenizer.from_pretrained(...)</em> method. (This is because I haven't used the built-in tokenizer of the RoBERTa model, which is the GPT tokenizer; instead I have used BertTokenizer.)
### Example
```python
import transformers as tr
tokenizer = tr.BertTokenizer.from_pretrained("ziedsb19/tunbert_zied")
model = tr.AutoModelForMaskedLM.from_pretrained("ziedsb19/tunbert_zied")
pipeline = tr.pipeline("fill-mask", model= model, tokenizer=tokenizer)
#test the model by masking a word in a phrase with [MASK]
pipeline("Ahla winek [MASK] lioum ?")
#results
"""
[{'sequence': 'ahla winek cv lioum?',
'score': 0.07968682795763016,
'token': 869,
'token_str': 'c v'},
{'sequence': 'ahla winek enty lioum?',
'score': 0.06116843968629837,
'token': 448,
'token_str': 'e n t y'},
{'sequence': 'ahla winek ch3amla lioum?',
'score': 0.057379286736249924,
'token': 7342,
'token_str': 'c h 3 a m l a'},
{'sequence': 'ahla winek cha3malt lioum?',
'score': 0.028112901374697685,
'token': 4663,
'token_str': 'c h a 3 m a l t'},
{'sequence': 'ahla winek enti lioum?',
'score': 0.025781650096178055,
'token': 436,
'token_str': 'e n t i'}]
"""
```
## ✍️ Authors <a name = "authors"></a>
- [zied sbabti](https://www.linkedin.com/in/zied-sbabti-a58a56139) - Idea & Initial work
|
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_editorials_TEST_essays_05_03_2022-06_26_52 | 47120759c6241041d834b7882ea8e257ecea0a8d | 2022-03-05T05:29:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_editorials_TEST_essays_05_03_2022-06_26_52 | 68 | null | transformers | 5,441 | Entry not found |
praf-choub/bart-CaPE-cnn | 0757e02672a22c8c99855157abcb73d8d75816e6 | 2022-06-14T04:50:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"arxiv:2110.07166",
"transformers",
"summarization",
"license:bsd-3-clause",
"autotrain_compatible"
] | summarization | false | praf-choub | null | praf-choub/bart-CaPE-cnn | 68 | null | transformers | 5,442 | ---
language: en
tags:
- summarization
license: bsd-3-clause
datasets:
- cnn_dailymail
---
Citation
```
@misc{https://doi.org/10.48550/arxiv.2110.07166,
doi = {10.48550/ARXIV.2110.07166},
url = {https://arxiv.org/abs/2110.07166},
author = {Choubey, Prafulla Kumar and Fabbri, Alexander R. and Vig, Jesse and Wu, Chien-Sheng and Liu, Wenhao and Rajani, Nazneen Fatema},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {CaPE: Contrastive Parameter Ensembling for Reducing Hallucination in Abstractive Summarization},
publisher = {arXiv},
year = {2021},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
pszemraj/opt-350m-email-generation | 280daa0817ddb5dd42693037279a09a81c66a526 | 2022-06-25T22:30:34.000Z | [
"pytorch",
"opt",
"text-generation",
"dataset:aeslc",
"transformers",
"generated_from_trainer",
"custom-license",
"no-commercial",
"email",
"auto-complete",
"license:other"
] | text-generation | false | pszemraj | null | pszemraj/opt-350m-email-generation | 68 | 1 | transformers | 5,443 | ---
license: other
tags:
- generated_from_trainer
- opt
- custom-license
- no-commercial
- email
- auto-complete
datasets:
- aeslc
widget:
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
example_title: "fan"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning <NAME>,\n\nI was just thinking to myself about how much I love creating value"
example_title: "value"
- text: "URGENT - I need"
example_title: "URGENT"
inference:
parameters:
min_length: 4
max_length: 64
length_penalty: 0.7
no_repeat_ngram_size: 3
do_sample: False
num_beams: 4
early_stopping: True
repetition_penalty: 3.5
---
> NOTE: there is currently a bug with huggingface API for OPT models. Please use the [colab notebook](https://colab.research.google.com/gist/pszemraj/40c46deed730bfca553b8c4b257a7b77/email-autocomplete-demo.ipynb) to test :)
# opt for email generation - 350M
Why write the rest of your email when you can generate it?
```
from transformers import pipeline
model_tag = "pszemraj/opt-350m-email-generation"
generator = pipeline(
'text-generation',
model=model_tag,
use_fast=False,
do_sample=False,
early_stopping=True,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
generator(
prompt,
max_length=64,
) # generate
```
- [Link to notebook](https://colab.research.google.com/gist/pszemraj/40c46deed730bfca553b8c4b257a7b77/email-autocomplete-demo.ipynb) on Colab
> For this model, formatting matters. The results may be (significantly) different between the structure outlined above and `prompt = "Hey, just wanted to ..."` etc.
## Model description
- This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the [aeslc](https://huggingface.co/datasets/aeslc) dataset for six epochs.
- Emails, phone numbers, etc., were excluded where possible in a dataset preparation step using [clean-text](https://pypi.org/project/clean-text/) in Python.
- Note that the API is restricted to generating 64 tokens - you can generate longer emails by using this model in a text-generation `pipeline` object
## Intended uses & limitations
- in their everlasting wisdom, Facebook/Meta has decided to make a custom license for this, specifying several things. See [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) for details.
## Training and evaluation data
- the `email_body` field of train + validation (get more data) from the [aeslc](https://huggingface.co/datasets/aeslc) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
totoro4007/cryptoroberta-base-finetuned | 07c94393727e60bf60d85b8d377aefa3e2e83463 | 2022-05-21T08:38:59.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | totoro4007 | null | totoro4007/cryptoroberta-base-finetuned | 68 | null | transformers | 5,444 | Entry not found |
mehnaazasad/swin-tiny-patch4-window7-224-finetuned-eurosat | aa48e1ee5aa7f1285538c11b30f00bc8b2174dcd | 2022-05-25T16:11:42.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | mehnaazasad | null | mehnaazasad/swin-tiny-patch4-window7-224-finetuned-eurosat | 68 | null | transformers | 5,445 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.977037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0703
- Accuracy: 0.9770
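## How to use
A minimal inference sketch (not part of the auto-generated card); the example image URL is illustrative and should be replaced with a satellite image to get meaningful EuroSAT-style labels, which come from the image folders used for fine-tuning:
```python
from transformers import pipeline
from PIL import Image
import requests

# Image-classification pipeline over the fine-tuned Swin checkpoint
classifier = pipeline(
    "image-classification",
    model="mehnaazasad/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)
print(classifier(image))
```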
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2369 | 1.0 | 190 | 0.1683 | 0.9433 |
| 0.1812 | 2.0 | 380 | 0.0972 | 0.9670 |
| 0.1246 | 3.0 | 570 | 0.0703 | 0.9770 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
apple/deeplabv3-mobilevit-x-small | aacb7714f951e619793828143558cad3defed0db | 2022-06-02T10:53:23.000Z | [
"pytorch",
"coreml",
"mobilevit",
"dataset:pascal-voc",
"arxiv:2110.02178",
"arxiv:1706.05587",
"transformers",
"vision",
"image-segmentation",
"license:other"
] | image-segmentation | false | apple | null | apple/deeplabv3-mobilevit-x-small | 68 | 1 | transformers | 5,446 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- pascal-voc
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg
example_title: Cat
---
# MobileViT + DeepLabV3 (extra small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-x-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-x-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|------------------|-----------------|-----------|-----------------------------------------------------------|
| MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small |
| **MobileViT-XS** | **77.1** | **2.9 M** | https://huggingface.co/apple/deeplabv3-mobilevit-x-small |
| MobileViT-S | 79.1 | 6.4 M | https://huggingface.co/apple/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
|
tinkoff-ai/response-quality-classifier-large | e64393251b97d9e1dec23e2e9e8b6eb29b3625eb | 2022-06-01T06:34:44.000Z | [
"pytorch",
"roberta",
"text-classification",
"ru",
"transformers",
"conversational",
"license:mit"
] | text-classification | false | tinkoff-ai | null | tinkoff-ai/response-quality-classifier-large | 68 | null | transformers | 5,447 | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [sberbank-ai/ruRoberta-large](https://huggingface.co/sberbank-ai/ruRoberta-large).
The model should be used to produce relevance and specificity of the last message in the context of a dialogue.
The labels explanation:
- `relevance`: is the last message in the dialogue relevant in the context of the full dialogue.
- `specificity`: is the last message in the dialogue interesting and promotes the continuation of the dialogue.
It is pretrained on a large corpus of dialog data in an unsupervised manner: the model is trained to predict whether the last response occurred in a real dialog, or whether it was pulled from some other dialog at random.
Then it was finetuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with `max_length = 32`.
The performance of the model on validation split (dataset will be posted soon) (with the best thresholds for validation samples):
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.59 | 0.86 | 0.83 |
| specificity | 0.61 | 0.85 | 0.86 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-large')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-large')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
The [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers) where you can easily interact with this model.
The work was done during internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). |
facebook/genre-linking-aidayago2 | d93982cbdbd1559229535a5810d92c4d7b1a0883 | 2022-06-14T14:10:29.000Z | [
"pytorch",
"tf",
"jax",
"bart",
"text2text-generation",
"en",
"arxiv:2010.00904",
"arxiv:1910.13461",
"arxiv:1911.03814",
"transformers",
"retrieval",
"entity-retrieval",
"named-entity-disambiguation",
"entity-disambiguation",
"named-entity-linking",
"entity-linking",
"autotrain_compatible"
] | text2text-generation | false | facebook | null | facebook/genre-linking-aidayago2 | 68 | null | transformers | 5,448 | ---
language:
- en
tags:
- retrieval
- entity-retrieval
- named-entity-disambiguation
- entity-disambiguation
- named-entity-linking
- entity-linking
- text2text-generation
---
# GENRE
The GENRE (Generative ENtity REtrieval) system as presented in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904) implemented in pytorch.
In a nutshell, GENRE uses a sequence-to-sequence approach to entity retrieval (e.g., linking), based on fine-tuned [BART](https://arxiv.org/abs/1910.13461) architecture. GENRE performs retrieval generating the unique entity name conditioned on the input text using constrained beam search to only generate valid identifiers. The model was first released in the [facebookresearch/GENRE](https://github.com/facebookresearch/GENRE) repository using `fairseq` (the `transformers` models are obtained with a conversion script similar to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py).
This model was trained on the full training set of [BLINK](https://arxiv.org/abs/1911.03814) (i.e., 9M datapoints for entity-disambiguation grounded on Wikipedia) and then fine-tuned on [AIDA-YAGO2](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads).
## BibTeX entry and citation info
**Please consider citing our works if you use code from this repository.**
```bibtex
@inproceedings{decao2020autoregressive,
title={Autoregressive Entity Retrieval},
author={Nicola {De Cao} and Gautier Izacard and Sebastian Riedel and Fabio Petroni},
booktitle={International Conference on Learning Representations},
url={https://openreview.net/forum?id=5k8F6UU39V},
year={2021}
}
```
## Usage
Here is an example of generation for Wikipedia page disambiguation:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# OPTIONAL: load the prefix tree (trie), you need to additionally download
# https://huggingface.co/facebook/genre-linking-aidayago2/blob/main/trie.py and
# https://huggingface.co/facebook/genre-linking-aidayago2/blob/main/kilt_titles_trie_dict.pkl
# import pickle
# from trie import Trie
# with open("kilt_titles_trie_dict.pkl", "rb") as f:
# trie = Trie.load_from_dict(pickle.load(f))
tokenizer = AutoTokenizer.from_pretrained("facebook/genre-linking-aidayago2")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/genre-linking-aidayago2").eval()
sentences = ["Einstein was a [START_ENT] German [END_ENT] physicist."]
outputs = model.generate(
**tokenizer(sentences, return_tensors="pt"),
num_beams=5,
num_return_sequences=5,
# OPTIONAL: use constrained beam search
# prefix_allowed_tokens_fn=lambda batch_id, sent: trie.get(sent.tolist()),
)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
which outputs the following top-5 predictions (using constrained beam search)
```
['Germany',
'German Empire',
'Nazi Germany',
'German language',
'France']
```
|
yogeshchandrasekharuni/entity-extraction-v0 | fbd911f4e6c8ef1b28ef694dd5c49b7c239c3d75 | 2022-06-08T05:15:30.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | yogeshchandrasekharuni | null | yogeshchandrasekharuni/entity-extraction-v0 | 68 | null | transformers | 5,449 | Entry not found |
DancingIguana/music-generation | 1d40a863406913c97cdb15d877b15b4273276158 | 2022-06-13T16:48:57.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | DancingIguana | null | DancingIguana/music-generation | 68 | 1 | transformers | 5,450 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: music-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# music-generation
This model is a version of [distilgpt2](https://huggingface.co/distilgpt2) trained from scratch on a dataset where the text represents musical notes. The [dataset](https://www.kaggle.com/datasets/soumikrakshit/classical-music-midi) consists of one stream of notes per MIDI file (the stream with the most notes), where all of the melodies were transposed to either C major or A minor. The BPM of the song is ignored; the duration of each note is based on its quarter length.
Each element in the melody is represented by a series of letters and numbers with the following structure.
* For a note: ns[pitch of the note as a string]s[duration]
* Examples: nsC4s0p25, nsF7s1p0,
* For a rest: rs[duration]:
* Examples: rs0p5, rs1q6
* For a chord: cs[number of notes in chord]s[pitches of chords separated by "s"]s[duration]
* Examples: cs2sE7sF7s1q3, cs2sG3sGw3s0p25
The following special symbols in the strings are replaced as follows:
* . = p
* / = q
* # =
* - = t
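To make the encoding concrete, here is a small illustrative helper that builds these token strings. The function names are assumptions (not code from the original project), and the replacement for `#` is inferred to be `w` from the `Gw3` chord example above:
```python
def clean(symbol: str) -> str:
    # Character replacements described above; "#" -> "w" is inferred from the Gw3 chord example
    return (symbol.replace(".", "p")
                  .replace("/", "q")
                  .replace("#", "w")
                  .replace("-", "t"))

def note_token(pitch: str, quarter_length: str) -> str:
    # e.g. note_token("C4", "0.25") -> "nsC4s0p25"
    return f"ns{clean(pitch)}s{clean(quarter_length)}"

def rest_token(quarter_length: str) -> str:
    # e.g. rest_token("0.5") -> "rs0p5"
    return f"rs{clean(quarter_length)}"

def chord_token(pitches: list, quarter_length: str) -> str:
    # e.g. chord_token(["E7", "F7"], "1/3") -> "cs2sE7sF7s1q3"
    body = "s".join(clean(p) for p in pitches)
    return f"cs{len(pitches)}s{body}s{clean(quarter_length)}"

print(note_token("C4", "0.25"))
print(rest_token("0.5"))
print(chord_token(["E7", "F7"], "1/3"))
```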
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
zluvolyote/CUBERT | 17f4fec984e4d169445d2985c2c09feb60b5f3e5 | 2022-07-12T15:09:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | zluvolyote | null | zluvolyote/CUBERT | 68 | null | transformers | 5,451 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: CUBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CUBERT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 58 | 5.5281 |
| No log | 2.0 | 116 | 5.2508 |
| No log | 3.0 | 174 | 5.2203 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.1
- Tokenizers 0.12.1
|
eslamxm/MBART-finetuned-Spanish | 0cef59aaf30a2567c20d6d20567717860d2c4e0d | 2022-06-18T11:37:56.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"Mbart",
"seq2seq",
"es",
"abstractive summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/MBART-finetuned-Spanish | 68 | null | transformers | 5,452 | ---
tags:
- summarization
- Mbart
- seq2seq
- es
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: MBART-finetuned-Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MBART-finetuned-Spanish
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7435
- Rouge-1: 23.72
- Rouge-2: 7.61
- Rouge-l: 22.97
- Gen Len: 51.33
- Bertscore: 70.78
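## How to use
A minimal inference sketch (not part of the auto-generated card); the Spanish passage is illustrative, and depending on how the mBART-50 tokenizer was saved you may need to set `tokenizer.src_lang = "es_XX"` before encoding:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "eslamxm/MBART-finetuned-Spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Illustrative Spanish passage; real inputs would be full WikiLingua-style articles
texto = (
    "El telescopio espacial fue lanzado con éxito y comenzará a enviar imágenes "
    "del universo profundo durante los próximos meses, según informó la agencia."
)
inputs = tokenizer(texto, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```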
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
abhishek/deberta-v3-base-autotrain | d7c27f41f9cc47e73195973ddf02b5462dbb173c | 2022-06-30T13:13:18.000Z | [
"pytorch",
"deberta-v2",
"en",
"arxiv:2006.03654",
"arxiv:2111.09543",
"transformers",
"deberta",
"deberta-v3",
"fill-mask",
"license:mit"
] | fill-mask | false | abhishek | null | abhishek/deberta-v3-base-autotrain | 68 | null | transformers | 5,453 | ---
language: en
tags:
- deberta
- deberta-v3
- fill-mask
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. With those two improvements, DeBERTa out perform RoBERTa on a majority of NLU tasks with 80GB training data.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technique details about the new model from our [paper](https://arxiv.org/abs/2111.09543).
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.
The DeBERTa V3 base model comes with 12 layers and a hidden size of 768. It has only 86M backbone parameters with a vocabulary containing 128K tokens which introduces 98M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2.
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 2.0 and MNLI tasks.
| Model |Vocabulary(K)|Backbone #Params(M)| SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC)|
|-------------------|----------|-------------------|-----------|----------|
| RoBERTa-base |50 |86 | 83.7/80.5 | 87.6/- |
| XLNet-base |32 |92 | -/80.2 | 86.8/- |
| ELECTRA-base |30 |86 | -/80.5 | 88.8/ |
| DeBERTa-base |50 |100 | 86.2/83.1| 88.8/88.5|
| DeBERTa-v3-base |128|86 | **88.4/85.4** | **90.6/90.7**|
| DeBERTa-v3-base + SiFT |128|86 | -/- | 91.0/-|
#### Fine-tuning with HF transformers
```bash
#!/bin/bash
cd transformers/examples/pytorch/text-classification/
pip install datasets
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v3-base \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--evaluation_strategy steps \
--max_seq_length 256 \
--warmup_steps 500 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 1000 \
--logging_dir $output_dir
```
### Citation
If you find DeBERTa useful for your work, please cite the following papers:
``` latex
@misc{he2021debertav3,
title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
year={2021},
eprint={2111.09543},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
Farshid/distilbert-base-uncased_allagree3 | 3874321c2a69ffdbc33c65f914f7af302232f961 | 2022-07-04T21:04:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:financial_phrasebank",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Farshid | null | Farshid/distilbert-base-uncased_allagree3 | 68 | null | transformers | 5,454 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_allagree3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_allagree
metrics:
- name: Accuracy
type: accuracy
value: 0.9778761061946902
- name: F1
type: f1
value: 0.9780006392634297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_allagree3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0937
- Accuracy: 0.9779
- F1: 0.9780
## Model description
More information needed
## Intended uses & limitations
More information needed
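In the absence of documented usage, here is a minimal sketch for classifying the sentiment of a financial sentence. The example sentence is made up, and the mapping from the returned labels to the negative/neutral/positive classes of `financial_phrasebank` is an assumption; check `config.id2label` before relying on it.
```python
from transformers import pipeline

# Classify the sentiment of a financial sentence (example is made up).
# The returned label names depend on the saved config; they are assumed to map
# onto the financial_phrasebank classes (negative / neutral / positive).
classifier = pipeline("text-classification", model="Farshid/distilbert-base-uncased_allagree3")

print(classifier("The company's quarterly revenue grew by 15 percent year over year."))
```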
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6418 | 1.0 | 57 | 0.3340 | 0.8805 | 0.8768 |
| 0.1821 | 2.0 | 114 | 0.1088 | 0.9690 | 0.9691 |
| 0.0795 | 3.0 | 171 | 0.0822 | 0.9823 | 0.9823 |
| 0.0385 | 4.0 | 228 | 0.0939 | 0.9646 | 0.9646 |
| 0.0218 | 5.0 | 285 | 0.1151 | 0.9735 | 0.9737 |
| 0.0149 | 6.0 | 342 | 0.1126 | 0.9690 | 0.9694 |
| 0.006 | 7.0 | 399 | 0.0989 | 0.9779 | 0.9780 |
| 0.0093 | 8.0 | 456 | 0.1009 | 0.9779 | 0.9780 |
| 0.0063 | 9.0 | 513 | 0.0899 | 0.9779 | 0.9780 |
| 0.0039 | 10.0 | 570 | 0.0937 | 0.9779 | 0.9780 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vasugoel/K-12BERT | d980e0c88dfa477efb4b0325933f651601e36a6a | 2022-07-14T07:54:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:vasugoel/K-12Corpus",
"arxiv:2205.12335",
"transformers",
"education",
"K-12",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | vasugoel | null | vasugoel/K-12BERT | 68 | 2 | transformers | 5,455 | ---
language: en
tags:
- education
- K-12
license: apache-2.0
datasets:
- vasugoel/K-12Corpus
---
## K-12BERT model
K-12BERT is a model trained by performing continued pretraining on the K-12Corpus. Since the performance of BERT-like models on domain-adaptive tasks has shown great progress, we noticed the lack of such a model for the education domain (especially K-12 education). To that end, we present K-12BERT, a BERT-based model trained on our custom curated dataset, extracted from both open and proprietary education resources.
The model was trained using an MLM objective in a continued pretraining fashion, owing to the lack of resources available to train the model from the ground up. This also allowed us to save a lot of computational resources and utilize the existing knowledge of BERT. To that end, we also preserved the original vocabulary of BERT in order to evaluate the performance under those conditions.
## Intended uses
We hope that the community, especially researchers and professionals engaged in the education domain, is able to utilize this model to advance the domain of AI in education. With its manifold uses for online education platforms, we hope we can contribute towards advancing education resources for the upcoming generation.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel, AutoTokenizer, AutoModelForMaskedLM
tokenizer = BertTokenizer.from_pretrained('vasugoel/K-12BERT') # AutoTokenizer.from_pretrained('vasugoel/K-12BERT')
model = BertModel.from_pretrained("vasugoel/K-12BERT") # AutoModelForMaskedLM.from_pretrained('vasugoel/K-12BERT')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2205.12335,
doi = {10.48550/ARXIV.2205.12335},
url = {https://arxiv.org/abs/2205.12335},
author = {Goel, Vasu and Sahnan, Dhruv and V, Venktesh and Sharma, Gaurav and Dwivedi, Deep and Mohania, Mukesh},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {K-12BERT: BERT for K-12 education},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
jordyvl/biobert-base-cased-v1.2_ncbi_disease-CRFsubset-first-ner | d0c22b4327f748dd456eb3d12d6e244033a9f0be | 2022-07-15T11:45:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"transformers"
] | null | false | jordyvl | null | jordyvl/biobert-base-cased-v1.2_ncbi_disease-CRFsubset-first-ner | 68 | null | transformers | 5,456 | Entry not found |
lisaterumi/postagger-bio-portuguese | 8c944d9922ba36daa986286010d328971c370d7d | 2022-07-15T14:13:28.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:MacMorpho",
"transformers",
"autotrain_compatible"
] | token-classification | false | lisaterumi | null | lisaterumi/postagger-bio-portuguese | 68 | 1 | transformers | 5,457 | ---
language: "pt"
widget:
- text: "O paciente recebeu no hospital e falou com a médica"
datasets:
- MacMorpho
---
# Postagger Bio Portuguese
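## Usage
A minimal usage sketch with the `token-classification` pipeline. The input is the widget example above; the tag names come from the model's own configuration, which this card does not document.
```python
from transformers import pipeline

# Part-of-speech tagging with the token-classification pipeline
tagger = pipeline("token-classification", model="lisaterumi/postagger-bio-portuguese")

for token in tagger("O paciente recebeu no hospital e falou com a médica"):
    print(token["word"], token["entity"], round(token["score"], 3))
```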
## Citation
```
coming soon
```
|
derwahnsinn/gpt2-mediummedlavallc | 3f1760e57909c9f47b7f4a705fd08eedcaff1db2 | 2022-07-30T02:59:35.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | derwahnsinn | null | derwahnsinn/gpt2-mediummedlavallc | 68 | null | transformers | 5,458 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-mediummedlavallc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-mediummedlavallc
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3167
- eval_runtime: 53.369
- eval_samples_per_second: 59.529
- eval_steps_per_second: 7.458
- epoch: 9.35
- step: 3722
## Model description
More information needed
## Intended uses & limitations
More information needed
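In the absence of further documentation, here is a minimal text-generation sketch; the prompt and sampling settings are illustrative and unrelated to the (unknown) fine-tuning data.
```python
from transformers import pipeline

# Text generation with the fine-tuned GPT-2 medium checkpoint (prompt is illustrative)
generator = pipeline("text-generation", model="derwahnsinn/gpt2-mediummedlavallc")

print(generator("The old house at the end of the street", max_length=60, do_sample=True)[0]["generated_text"])
```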
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CogComp/bart-faithful-summary-detector | 9ee8dc596655cde178f27a956659c4402bf6355f | 2021-06-13T17:18:36.000Z | [
"pytorch",
"jax",
"bart",
"text-classification",
"en",
"dataset:xsum",
"transformers",
"xsum",
"license:cc-by-sa-4.0"
] | text-classification | false | CogComp | null | CogComp/bart-faithful-summary-detector | 67 | null | transformers | 5,459 | ---
language:
- en
thumbnail: https://cogcomp.seas.upenn.edu/images/logo.png
tags:
- text-classification
- bart
- xsum
license: cc-by-sa-4.0
datasets:
- xsum
widget:
- text: "<s> Ban Ki-moon was elected for a second term in 2007. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
- text: "<s> Ban Ki-moon was elected for a second term in 2011. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
---
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our [paper in NAACL'21](https://www.seas.upenn.edu/~sihaoc/static/pdf/CZSR21.pdf) for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the **first** sentence).
Here's an example usage (with PyTorch)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CogComp/bart-faithful-summary-detector")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/bart-faithful-summary-detector")
article = "Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
bad_summary = "Ban Ki-moon was elected for a second term in 2007."
good_summary = "Ban Ki-moon was elected for a second term in 2011."
bad_pair = tokenizer(text=bad_summary, text_pair=article, return_tensors='pt')
good_pair = tokenizer(text=good_summary, text_pair=article, return_tensors='pt')
bad_score = model(**bad_pair)
good_score = model(**good_pair)
print(good_score[0][:, 1] > bad_score[0][:, 1]) # True, label mapping: "0" -> "Hallucinated" "1" -> "Faithful"
```
### BibTeX entry and citation info
```bibtex
@inproceedings{CZSR21,
author = {Sihao Chen and Fan Zhang and Kazoo Sone and Dan Roth},
title = {{Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection}},
booktitle = {NAACL},
year = {2021}
}
``` |
Recognai/zeroshot_selectra_small | 5ab73975f5066eab7d53aafe6dd8a4aa82e8d165 | 2022-03-27T09:33:26.000Z | [
"pytorch",
"electra",
"text-classification",
"es",
"dataset:xnli",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
] | zero-shot-classification | false | Recognai | null | Recognai/zeroshot_selectra_small | 67 | 3 | transformers | 5,460 | ---
language: es
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: apache-2.0
widget:
- text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
candidate_labels: "cultura, sociedad, economia, salud, deportes"
---
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
'scores': [0.3711881935596466,
0.25650349259376526,
0.17355826497077942,
0.1641489565372467,
0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** |
| zs SELECTRA small | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp)) |
Rexhaif/rubert-base-srl-seqlabeling | a0924ffc21c3fb6c2de68db82c561642e710dbe1 | 2022-04-12T13:50:45.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Rexhaif | null | Rexhaif/rubert-base-srl-seqlabeling | 67 | null | transformers | 5,461 | ---
tags:
- generated_from_trainer
model-index:
- name: rubert-base-srl-seqlabeling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-srl-seqlabeling
This model is a fine-tuned version of [./ruBert-base/](https://huggingface.co/./ruBert-base/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1723
- Causator Precision: 0.8539
- Causator Recall: 0.8352
- Causator F1: 0.8444
- Causator Number: 91
- Expiriencer Precision: 0.9259
- Expiriencer Recall: 0.9740
- Expiriencer F1: 0.9494
- Expiriencer Number: 77
- Instrument Precision: 0.375
- Instrument Recall: 1.0
- Instrument F1: 0.5455
- Instrument Number: 3
- Other Precision: 0.0
- Other Recall: 0.0
- Other F1: 0.0
- Other Number: 1
- Predicate Precision: 0.9352
- Predicate Recall: 0.9902
- Predicate F1: 0.9619
- Predicate Number: 102
- Overall Precision: 0.8916
- Overall Recall: 0.9307
- Overall F1: 0.9107
- Overall Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
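In the absence of documented usage, here is a minimal sketch of span-level inference. The Russian example sentence is illustrative, and the role labels are the ones listed in the evaluation metrics above.
```python
from transformers import pipeline

# Semantic-role sequence labeling for a Russian sentence (example is illustrative).
# The role labels (Predicate, Causator, Expiriencer, Instrument, Other) are taken
# from the evaluation metrics above.
srl = pipeline(
    "token-classification",
    model="Rexhaif/rubert-base-srl-seqlabeling",
    aggregation_strategy="simple",
)

for span in srl("Мальчик разбил окно камнем."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```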
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Causator Precision | Causator Recall | Causator F1 | Causator Number | Expiriencer Precision | Expiriencer Recall | Expiriencer F1 | Expiriencer Number | Instrument Precision | Instrument Recall | Instrument F1 | Instrument Number | Other Precision | Other Recall | Other F1 | Other Number | Predicate Precision | Predicate Recall | Predicate F1 | Predicate Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------:|:------------:|:--------:|:------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2552 | 1.0 | 56 | 0.3471 | 0.8841 | 0.6703 | 0.7625 | 91 | 0.8421 | 0.8312 | 0.8366 | 77 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9259 | 0.9804 | 0.9524 | 102 | 0.8893 | 0.8212 | 0.8539 | 0.9203 |
| 0.2385 | 2.0 | 112 | 0.1608 | 0.9103 | 0.7802 | 0.8402 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.2857 | 0.6667 | 0.4 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9519 | 0.9706 | 0.9612 | 102 | 0.9182 | 0.9015 | 0.9098 | 0.9554 |
| 0.0367 | 3.0 | 168 | 0.1311 | 0.8902 | 0.8022 | 0.8439 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9709 | 0.9804 | 0.9756 | 102 | 0.9228 | 0.9161 | 0.9194 | 0.9673 |
| 0.0494 | 4.0 | 224 | 0.1507 | 0.7812 | 0.8242 | 0.8021 | 91 | 0.9241 | 0.9481 | 0.9359 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9524 | 0.9804 | 0.9662 | 102 | 0.8746 | 0.9161 | 0.8948 | 0.9637 |
| 0.0699 | 5.0 | 280 | 0.1830 | 0.8276 | 0.7912 | 0.8090 | 91 | 0.8941 | 0.9870 | 0.9383 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.875 | 0.9197 | 0.8968 | 0.9560 |
| 0.0352 | 6.0 | 336 | 0.1994 | 0.7857 | 0.8462 | 0.8148 | 91 | 0.9048 | 0.9870 | 0.9441 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9266 | 0.9902 | 0.9573 | 102 | 0.8595 | 0.9380 | 0.8970 | 0.9572 |
| 0.0186 | 7.0 | 392 | 0.1657 | 0.8652 | 0.8462 | 0.8556 | 91 | 0.9146 | 0.9740 | 0.9434 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 |
| 0.0052 | 8.0 | 448 | 0.1716 | 0.8556 | 0.8462 | 0.8508 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 |
| 0.0094 | 9.0 | 504 | 0.1715 | 0.8444 | 0.8352 | 0.8398 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 |
| 0.0078 | 10.0 | 560 | 0.1723 | 0.8539 | 0.8352 | 0.8444 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
SEBIS/code_trans_t5_large_commit_generation_multitask_finetune | 491f62f380090fc7bfe6717c5b9a8bc785bb7a57 | 2021-06-23T08:28:55.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_commit_generation_multitask_finetune | 67 | null | transformers | 5,462 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commit using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commit: it works best with tokenized git commit.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the java commit changes.
## Intended uses & limitations
The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate git commit message using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 3,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes.
## Evaluation results
For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
aware-ai/longformer-QA | 15ade5c427f009974f43f94b9b914c359a1a1cbb | 2020-08-07T09:40:36.000Z | [
"pytorch",
"tf",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aware-ai | null | aware-ai/longformer-QA | 67 | null | transformers | 5,463 | Entry not found |
abhishek/autonlp-hindi-asr | b0a43e5f43208d20174596fb33aeddce821920f4 | 2021-07-05T18:39:26.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"autonlp",
"audio"
] | automatic-speech-recognition | false | abhishek | null | abhishek/autonlp-hindi-asr | 67 | 1 | transformers | 5,464 | ---
tags:
- autonlp
- automatic-speech-recognition
- audio
language: {language}
---
# Model Trained Using AutoNLP
- Problem type: Speech Recognition
|
andi611/distilbert-base-uncased-qa-boolq | 168ef0953f2bf4083670a8519a14db558a6f043c | 2021-08-02T09:45:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:boolq",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | andi611 | null | andi611/distilbert-base-uncased-qa-boolq | 67 | null | transformers | 5,465 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- boolq
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-boolq
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: boolq
type: boolq
args: default
metric:
name: Accuracy
type: accuracy
value: 0.7314984709480122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-boolq
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the boolq dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
- Accuracy: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
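In the absence of documented usage, here is a minimal sketch that scores a yes/no question against a supporting passage. Which output logit corresponds to "yes" is an assumption; check the model's `config.id2label` before relying on it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Score a yes/no question against a supporting passage.
# Which logit corresponds to "yes" is an assumption here; check config.id2label.
model_id = "andi611/distilbert-base-uncased-qa-boolq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "is the sky blue on a clear day"
passage = "On a clear day the sky appears blue because air molecules scatter blue light from sunlight."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```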
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6506 | 1.0 | 531 | 0.6075 | 0.6681 |
| 0.575 | 2.0 | 1062 | 0.5816 | 0.6978 |
| 0.4397 | 3.0 | 1593 | 0.6137 | 0.7253 |
| 0.2524 | 4.0 | 2124 | 0.8124 | 0.7466 |
| 0.126 | 5.0 | 2655 | 1.1437 | 0.7370 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
bvanaken/CORe-clinical-mortality-prediction | 1af1c4eb615bb3628b936e62decb4519b5775ae2 | 2021-11-30T13:28:29.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"medical",
"clinical",
"mortality"
] | text-classification | false | bvanaken | null | bvanaken/CORe-clinical-mortality-prediction | 67 | 1 | transformers | 5,466 | ---
language: "en"
tags:
- bert
- medical
- clinical
- mortality
thumbnail: "https://core.app.datexis.com/static/paper.png"
---
# CORe Model - Clinical Mortality Risk Prediction
## Model description
The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
This model checkpoint is **fine-tuned on the task of mortality risk prediction**.
The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.
#### How to use CORe Mortality Risk Prediction
You can load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-mortality-prediction")
```
The following code shows an inference example:
```
input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."
tokenized_input = tokenizer(input, return_tensors="pt")
output = model(**tokenized_input)
import torch
predictions = torch.softmax(output.logits.detach(), dim=1)
mortality_risk_prediction = predictions[0][1].item()
```
### More Information
For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
### Cite
```bibtex
@inproceedings{vanaken21,
author = {Betty van Aken and
Jens-Michalis Papaioannou and
Manuel Mayrdorfer and
Klemens Budde and
Felix A. Gers and
Alexander Löser},
title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, {EACL} 2021,
Online, April 19 - 23, 2021},
publisher = {Association for Computational Linguistics},
year = {2021},
}
``` |
fagner/envoy | e80e352bfcde73267e69de0f58d6e599f60b3453 | 2022-04-12T22:14:20.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | fagner | null | fagner/envoy | 67 | null | transformers | 5,467 | Entry not found |
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_50_Epochs | df4099c55f10d9182af5815ee749b6bff8dabe12 | 2022-02-12T21:16:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_50_Epochs | 67 | null | sentence-transformers | 5,468 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
laxya007/gpt2_tech | 8dac423df5b74acce9d264bc30f9d2ed24cab726 | 2021-05-23T08:18:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_tech | 67 | null | transformers | 5,469 | Entry not found |
mideind/IceBERT | 98f00d95344960c729e827d9779215a2deff9924 | 2022-03-17T13:50:07.000Z | [
"pytorch",
"roberta",
"fill-mask",
"is",
"arxiv:2201.05601",
"transformers",
"icelandic",
"masked-lm",
"license:agpl-3.0",
"autotrain_compatible"
] | fill-mask | false | mideind | null | mideind/IceBERT | 67 | null | transformers | 5,470 | ---
language: is
widget:
- text: Má bjóða þér <mask> í kvöld?
- text: Forseti <mask> er ágæt.
- text: Súpan var <mask> á bragðið.
tags:
- roberta
- icelandic
- masked-lm
- pytorch
license: agpl-3.0
---
# IceBERT
This model was trained with fairseq using the RoBERTa-base architecture. It is one of many models we have trained for Icelandic; see the paper referenced below for further details. The training data used is shown in the table below.
| Dataset | Size | Tokens |
|------------------------------------------------------|---------|--------|
| Icelandic Gigaword Corpus v20.05 (IGC) | 8.2 GB | 1,388M |
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
| Greynir News articles | 456 MB | 76M |
| Icelandic Sagas | 9 MB | 1.7M |
| Open Icelandic e-books (Rafbókavefurinn) | 14 MB | 2.6M |
| Data from the medical library of Landspitali | 33 MB | 5.2M |
| Student theses from Icelandic universities (Skemman) | 2.2 GB | 367M |
| Total | 15.8 GB | 2,664M |
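## Usage
A minimal masked-language-model sketch; the prompt is one of the widget examples above.
```python
from transformers import pipeline

# Masked-token prediction; the prompt is one of the widget examples above
fill_mask = pipeline("fill-mask", model="mideind/IceBERT")

for prediction in fill_mask("Má bjóða þér <mask> í kvöld?"):
    print(prediction["token_str"], round(prediction["score"], 4))
```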
## Citation
The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.
```
@article{DBLP:journals/corr/abs-2201-05601,
author = {V{\'{e}}steinn Sn{\ae}bjarnarson and
Haukur Barri S{\'{\i}}monarson and
P{\'{e}}tur Orri Ragnarsson and
Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
Haukur P{\'{a}}ll J{\'{o}}nsson and
Vilhj{\'{a}}lmur {\TH}orsteinsson and
Hafsteinn Einarsson},
title = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
Models},
journal = {CoRR},
volume = {abs/2201.05601},
year = {2022},
url = {https://arxiv.org/abs/2201.05601},
eprinttype = {arXiv},
eprint = {2201.05601},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
mrm8488/t5-small-finetuned-boolq | 9bb945b5918dcb17ca906454f802ad8a0b6ae2eb | 2020-08-13T18:47:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-boolq | 67 | null | transformers | 5,471 | Entry not found |
nyu-mll/roberta-med-small-1M-1 | a609533e0ddc94a74091c1a236c44f03f9f94772 | 2021-05-20T19:06:25.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-med-small-1M-1 | 67 | null | transformers | 5,472 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). For each pretraining data size we release the 3 models with the lowest perplexities out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
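### Example Usage
Each released checkpoint is a standard RoBERTa masked language model and loads with the usual `transformers` API. A minimal sketch with one of the 1M-token models (the prompt is illustrative, and predictions from the smallest models will be rough):
```python
from transformers import pipeline

# Each checkpoint is a standard RoBERTa masked language model (prompt is illustrative)
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-med-small-1M-1")

for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```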
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
pritamdeka/S-PubMedBert-MS-MARCO | 0f9763f8d7e4d13f5f798d8cfc32e4afb8c7c8dd | 2022-01-28T13:48:52.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | pritamdeka | null | pritamdeka/S-PubMedBert-MS-MARCO | 67 | null | sentence-transformers | 5,473 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/S-PubMedBert-MS-MARCO
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is the [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) model, which has been fine-tuned on the MS MARCO dataset using the sentence-transformers framework. It can be used for information retrieval tasks in the medical/health text domain.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-PubMedBert-MS-MARCO')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO')
model = AutoModel.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
<!--- ## Evaluation Results -->
<!--- Describe how your model was evaluated -->
<!--- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) -->
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 31434 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.margin_mse_loss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 10000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pvcastro/bert-portuguese-cased-rel-cp | 3e2c38244420ea447bc14f9647fcd69096a623c7 | 2021-06-18T15:12:46.000Z | [
"pytorch",
"transformers"
] | null | false | pvcastro | null | pvcastro/bert-portuguese-cased-rel-cp | 67 | null | transformers | 5,474 | Entry not found |
tareknaous/bert2bert-empathetic-response-msa | 239bd1e14e18d697958594a3e5eaa4c49f4ff6e1 | 2021-12-23T15:33:55.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/bert2bert-empathetic-response-msa | 67 | 1 | transformers | 5,475 | Entry not found |
thu-coai/CDial-GPT2_LCCC-base | 4f06737e550db53a388329a43f634948b88c6de5 | 2020-12-23T07:10:27.000Z | [
"pytorch",
"transformers"
] | null | false | thu-coai | null | thu-coai/CDial-GPT2_LCCC-base | 67 | null | transformers | 5,476 | # CDial-GPT2_LCCC-base
https://github.com/thu-coai/CDial-GPT |
uer/roberta-small-word-chinese-cluecorpussmall | 6152f3fda1a704cf714a6af5f0be920811469613 | 2022-02-19T15:58:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/roberta-small-word-chinese-cluecorpussmall | 67 | 1 | transformers | 5,477 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "最近一趟去北京的[MASK]几点发车"
---
# Chinese word-based RoBERTa Miniatures
## Model description
This is the set of 5 Chinese word-based RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
Most Chinese pre-trained weights are based on Chinese characters. Compared with character-based models, word-based models are faster (because of the shorter sequence length) and perform better according to our experimental results. To this end, we released 5 Chinese word-based RoBERTa models of different sizes. In order to make it easy for users to reproduce the results, we used publicly available corpora and word segmentation tools, and provided all training details.
Notice that the output results of the Hosted Inference API (right) are not properly displayed. When the predicted word has multiple characters, only the single word instead of the entire sentence is displayed. One can click **JSON Output** for the normal output results.
You can download the 5 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **word-based RoBERTa-Tiny** | [**L=2/H=128 (Tiny)**][2_128] |
| **word-based RoBERTa-Mini** | [**L=4/H=256 (Mini)**][4_256] |
| **word-based RoBERTa-Small** | [**L=4/H=512 (Small)**][4_512] |
| **word-based RoBERTa-Medium** | [**L=8/H=512 (Medium)**][8_512] |
| **word-based RoBERTa-Base** | [**L=12/H=768 (Base)**][12_768] |
Compared with [char-based models](https://huggingface.co/uer/chinese_roberta_L-2_H-128), word-based models achieve better results in most cases. Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny(char) | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| **RoBERTa-Tiny(word)** | **74.3(+2.0)** | **86.4** | **93.2** | **82.0** | **66.4** | **58.2** | **59.6** |
| RoBERTa-Mini(char) | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| **RoBERTa-Mini(word)** | **76.7(+1.0)** | **87.6** | **94.1** | **85.4** | **66.9** | **59.2** | **67.3** |
| RoBERTa-Small(char) | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| **RoBERTa-Small(word)** | **78.1(+1.3)** | **88.5** | **94.7** | **87.4** | **67.6** | **60.9** | **69.8** |
| RoBERTa-Medium(char) | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| **RoBERTa-Medium(word)** | **78.9(+1.1)** | **89.2** | **95.1** | **88.0** | **67.8** | **60.6** | **73.0** |
| RoBERTa-Base(char) | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
| **RoBERTa-Base(word)** | **80.2(+0.7)** | **90.3** | **95.7** | **89.4** | **68.0** | **61.5** | **76.8** |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of word-based RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-medium-word-chinese-cluecorpussmall')
>>> unmasker("[MASK]的首都是北京。")
[
{'sequence': '中国 的首都是北京。',
'score': 0.21525809168815613,
'token': 2873,
'token_str': '中国'},
{'sequence': '北京 的首都是北京。',
'score': 0.15194718539714813,
'token': 9502,
'token_str': '北京'},
{'sequence': '我们 的首都是北京。',
'score': 0.08854265511035919,
'token': 4215,
'token_str': '我们'},
{'sequence': '美国 的首都是北京。',
'score': 0.06808705627918243,
'token': 7810,
'token_str': '美国'},
{'sequence': '日本 的首都是北京。',
'score': 0.06071401759982109,
'token': 7788,
'token_str': '日本'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, BertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFBertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Since BertTokenizer does not support sentencepiece, AlbertTokenizer is used here.
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on CLUECorpusSmall corpus:
```
>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
model_prefix='cluecorpussmall_spm',
vocab_size=100000,
max_sentence_length=1024,
max_sentencepiece_length=6,
user_defined_symbols=['[MASK]','[unused1]','[unused2]',
'[unused3]','[unused4]','[unused5]','[unused6]',
'[unused7]','[unused8]','[unused9]','[unused10]'],
pad_id=0,
pad_piece='[PAD]',
unk_id=1,
unk_piece='[UNK]',
bos_id=2,
bos_piece='[CLS]',
eos_id=3,
eos_piece='[SEP]',
train_extremely_large_corpus=True
)
```
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of word-based RoBERTa-Medium
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-word-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-word-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-word-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-word-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall |
vesteinn/ScandiBERT | 90f4917ed06d894298489df7a92ee46cfa81fc07 | 2022-03-13T22:15:57.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"is",
"da",
"sv",
"no",
"fo",
"transformers",
"roberta",
"icelandic",
"norwegian",
"faroese",
"danish",
"swedish",
"masked-lm",
"license:agpl-3.0",
"autotrain_compatible"
] | fill-mask | false | vesteinn | null | vesteinn/ScandiBERT | 67 | 2 | transformers | 5,478 | ---
language:
- is
- da
- sv
- no
- fo
widget:
- text: Fina lilla<mask>, jag vill inte bliva stur.
- text: Nu ved jeg, at du frygter<mask> og end ikke vil nægte mig din eneste søn..
- text: Það er vorhret á<mask>, napur vindur sem hvín.
- text: Ja, Gud signi<mask>, mítt land.
- text: Alle dyrene i<mask> må være venner.
tags:
- roberta
- icelandic
- norwegian
- faroese
- danish
- swedish
- masked-lm
- pytorch
license: agpl-3.0
---
# ScandiBERT
Note: At an earlier date, a half-trained model went up here; it has since been removed and the model has been updated.
This is a Scandinavian BERT model trained on a large collection of Danish, Faroese, Icelandic, Norwegian and Swedish text. It is currently the highest-ranking model on the ScandEval leaderboard: https://scandeval.github.io/pretrained/
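## Usage
A minimal masked-language-model sketch; the prompt is one of the widget examples above (the tokenizer is XLM-R-style, so the mask token is `<mask>`).
```python
from transformers import pipeline

# Masked-token prediction; the prompt is one of the widget examples above
fill_mask = pipeline("fill-mask", model="vesteinn/ScandiBERT")

for prediction in fill_mask("Það er vorhret á <mask>, napur vindur sem hvín."):
    print(prediction["token_str"], round(prediction["score"], 4))
```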
|
Shanny/FinBERT | db0d6114e5739c69aa8b655d6412b05240b7b951 | 2022-03-12T11:12:29.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Shanny | null | Shanny/FinBERT | 67 | null | transformers | 5,479 | Entry not found |
navteca/ms-marco-MiniLM-L-12-v2 | 67d0e3a68308414b7d14df5dda93b40197609175 | 2022-03-14T15:56:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"sentence-transformers",
"license:mit"
] | text-classification | false | navteca | null | navteca/ms-marco-MiniLM-L-12-v2 | 67 | null | sentence-transformers | 5,480 | ---
language: en
license: mit
pipeline_tag: text-classification
tags:
- sentence-transformers
---
# Cross-Encoder for MS Marco
The model can be used for Information Retrieval: given a query, encode the query together with all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Training Data
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
## Usage
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
```
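Building on the snippet above, a minimal sketch (not part of the original card) of the re-ranking step described earlier: score every (query, passage) pair and sort the passages by descending score. The query and passages are illustrative:
```python
query = "How many people live in Berlin?"
passages = [
    "Berlin has a population of around 3.7 million registered inhabitants.",
    "Berlin is well known for its museums and galleries.",
]

# Score each (query, passage) pair with the cross-encoder loaded above
scores = model.predict([(query, passage) for passage in passages])

# Sort passages from most to least relevant
for passage, score in sorted(zip(passages, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}\t{passage}")
```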
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
Anudev08/model_3 | 39aebeb3b7f4a09c6948269f543129f9623b4b2b | 2022-03-16T16:21:00.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | Anudev08 | null | Anudev08/model_3 | 67 | null | transformers | 5,481 | Entry not found |
James-kc-min/AGT_Roberta2 | a9326cd110c1e224bd92044bb72b4620908b027e | 2022-04-20T12:51:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | James-kc-min | null | James-kc-min/AGT_Roberta2 | 67 | null | transformers | 5,482 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: AGT_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AGT_Roberta
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.0
- Tokenizers 0.12.1
|
allenai/multicite-multilabel-scibert | 7696929cf8ba26c21cc1da0daeb76010575dff80 | 2022-05-10T17:45:24.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"scibert",
"license:mit"
] | text-classification | false | allenai | null | allenai/multicite-multilabel-scibert | 67 | 1 | transformers | 5,483 | ---
language: en
tags:
- scibert
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with SciBERT (NAACL 2022)
This model has been trained on the data available here: https://github.com/allenai/multicite |
oliverguhr/spelling-correction-german-base | cb2310e6676c8dc01d681e13b0e7f23d48c3ad2d | 2022-06-13T12:08:25.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | oliverguhr | null | oliverguhr/spelling-correction-german-base | 67 | null | transformers | 5,484 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-spelling-de
results: []
widget:
- text: "das idst ein neuZr test"
example_title: "1"
---
This is an experimental model that should fix your typos and punctuation.
If you like to run your own experiments or train for a different language, have a look at [the code](https://github.com/oliverguhr/spelling).
## Model description
This is a proof of concept spelling correction model for german.
## Intended uses & limitations
This is a work in progress, be aware that the model can produce artefacts.
You can test the model using the pipeline-interface:
```python
from transformers import pipeline
fix_spelling = pipeline("text2text-generation",model="oliverguhr/spelling-correction-german-base")
print(fix_spelling("das idst ein neuZr test",max_length=2048))
```
|
witiko/mathberta | 3414676ab15cf98f39088c936d0cbd1b28b251b0 | 2022-06-29T17:22:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:arxmliv",
"dataset:math-stackexchange",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | witiko | null | witiko/mathberta | 67 | 0 | transformers | 5,485 | ---
language: en
license: mit
datasets:
- arxmliv
- math-stackexchange
---
# MathBERTa model
Pretrained model on English language and LaTeX using a masked language modeling
(MLM) objective. It was developed for [the ARQMath-3 shared task evaluation][1]
at CLEF 2022 and first released in [this repository][2]. This model is
case-sensitive: it makes a difference between english and English.
[1]: https://www.cs.rit.edu/~dprl/ARQMath/
[2]: https://github.com/witiko/scm-at-arqmath3
## Model description
MathBERTa is [the RoBERTa base transformer model][3] whose [tokenizer has been
extended with LaTeX math symbols][7] and which has been [fine-tuned on a large
corpus of English mathematical texts][8].
Like RoBERTa, MathBERTa has been fine-tuned with the Masked language modeling
(MLM) objective. Taking a sentence, the model randomly masks 15% of the words
and math symbols in the input, then runs the entire masked sentence through the
model and has to predict the masked words and symbols. This way, the model
learns an inner representation of the English language and LaTeX that can then
be used to extract features useful for downstream tasks.
[3]: https://huggingface.co/roberta-base
[7]: https://github.com/Witiko/scm-at-arqmath3/blob/main/02-train-tokenizers.ipynb
[8]: https://github.com/witiko/scm-at-arqmath3/blob/main/03-finetune-roberta.ipynb
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly
intended to be fine-tuned on a downstream task. See the [model
hub][4] to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use
the whole sentence (potentially masked) to make decisions, such as sequence
classification, token classification or question answering. For tasks such as
text generation you should look at models like GPT-2.
[4]: https://huggingface.co/models?filter=roberta
### How to use
*Due to the large number of added LaTeX tokens, MathBERTa is affected by [a
software bug in the 🤗 Transformers library][9] that causes it to load for tens
of minutes. The bug was [fixed in version 4.20.0][10].*
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='witiko/mathberta')
>>> unmasker(r"If [MATH] \theta = \pi [/MATH] , then [MATH] \sin(\theta) [/MATH] is <mask>.")
[{'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is zero.',
  'score': 0.23291291296482086,
  'token': 4276,
  'token_str': ' zero'},
 {'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is 0.',
  'score': 0.11734672635793686,
  'token': 321,
  'token_str': ' 0'},
 {'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is real.',
  'score': 0.0793389230966568,
  'token': 588,
  'token_str': ' real'},
 {'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is 1.',
  'score': 0.0753420740365982,
  'token': 112,
  'token_str': ' 1'},
 {'sequence': ' If \\theta = \\pi, then\\sin(\\theta ) is even.',
  'score': 0.06487451493740082,
  'token': 190,
  'token_str': ' even'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('witiko/mathberta')
model = AutoModel.from_pretrained('witiko/mathberta')
text = r"Replace me by any text and [MATH] \text{math} [/MATH] you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
Our model was fine-tuned on two datasets:
- [ArXMLiv 2020][5], a dataset consisting of 1,581,037 ArXiv documents.
- [Math StackExchange][6], a dataset of 2,466,080 questions and answers.
Together, these datasets weigh 52 GB of text and LaTeX.
## Intrinsic evaluation results
Our model achieves the following intrinsic evaluation results:
![Intrinsic evaluation results of MathBERTa][11]
[5]: https://sigmathling.kwarc.info/resources/arxmliv-dataset-2020/
[6]: https://www.cs.rit.edu/~dprl/ARQMath/arqmath-resources.html
[9]: https://github.com/huggingface/transformers/issues/16936
[10]: https://github.com/huggingface/transformers/pull/17119
[11]: https://huggingface.co/witiko/mathberta/resolve/main/learning-curves.png
## Citing
### Text
Vít Novotný and Michal Štefánik. “Combining Sparse and Dense Information
Retrieval. Soft Vector Space Model and MathBERTa at ARQMath-3”.
In: *Proceedings of the Working Notes of CLEF 2022*. To Appear.
CEUR-WS, 2022.
### Bib(La)TeX
``` bib
@inproceedings{novotny2022combining,
booktitle = {Proceedings of the Working Notes of {CLEF} 2022},
title = {Combining Sparse and Dense Information Retrieval},
subtitle = {Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)},
author = {Novotný, Vít and Štefánik, Michal},
publisher = {{CEUR-WS}},
year = {2022},
note = {To Appear},
}
``` |
NAACL2022/spider-trivia-question-encoder | 5540cd4b726dc5d6ae34fc232b56c085534b5632 | 2022-07-09T19:14:40.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2112.07708",
"transformers"
] | feature-extraction | false | NAACL2022 | null | NAACL2022/spider-trivia-question-encoder | 67 | 4 | transformers | 5,486 | # Spider-TriviaQA: Question Encoder
This is the question encoder of the model fine-tuned on TriviaQA (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note**! We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but token
type ids are all zeros.
An example usage:
```python
from transformers import AutoTokenizer, DPRQuestionEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-trivia-question-encoder")
model = DPRQuestionEncoder.from_pretrained("NAACL2022/spider-trivia-question-encoder")
question = "Who is the villain in lord of the rings"
input_dict = tokenizer(question, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
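Since the question and passage encoders share weights, the same model can also be used to embed passages. A minimal sketch (not part of the original card) following the formatting note above, with title and text passed as a pair (so they are separated by `[SEP]`) and token type ids dropped; the passage text is illustrative:
```python
import torch

title = "Sauron"
text = "Sauron is the title character and main antagonist of The Lord of the Rings."

passage_dict = tokenizer(title, text, return_tensors="pt")
del passage_dict["token_type_ids"]
passage_emb = model(**passage_dict).pooler_output

# Relevance score: dot product between question and passage embeddings
question_emb = outputs.pooler_output  # `outputs` from the snippet above
print(torch.matmul(question_emb, passage_emb.T))
```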
|
sirreal/gpt-neo-125M-MC | 6036fadb648e9998f25a6bf4de7edaf4102a16d7 | 2022-07-12T00:59:31.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | sirreal | null | sirreal/gpt-neo-125M-MC | 67 | null | transformers | 5,487 | Entry not found |
HeyLucasLeao/gpt-neo-small-portuguese | 6c55098b12753c7a0848203d33195f6fa07cd092 | 2021-06-19T20:51:57.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | HeyLucasLeao | null | HeyLucasLeao/gpt-neo-small-portuguese | 66 | 1 | transformers | 5,488 | ## GPT-Neo Small Portuguese
#### Model Description
This is a fine-tuned version of GPT-Neo 125M by EleutherAI, adapted to the Portuguese language.
#### Training data
It was trained on 227,382 selected texts from a PTWiki dump. You can find all the data here: https://archive.org/details/ptwiki-dump-20210520
#### Training Procedure
Every text was passed through a GPT-2 tokenizer with bos and eos tokens to separate them, using the maximum sequence length that GPT-Neo supports. It was fine-tuned with the defaults of the Trainer class, available in the Hugging Face library.
##### Learning Rate: **2e-4**
##### Epochs: **1**
#### Goals
The intention is purely educational: to make a Portuguese version of this model available.
#### How to use
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
text = 'eu amo o brasil.'
generated = tokenizer(f'<|startoftext|> {text}',
return_tensors='pt').input_ids.cuda()
#Generating texts
sample_outputs = model.generate(generated,
# Use sampling instead of greedy decoding
do_sample=True,
# Keep only top 3 token with the highest probability
top_k=3,
# Maximum sequence length
max_length=200,
# Keep only the most probable tokens with cumulative probability of 95%
top_p=0.95,
# Changes randomness of generated sequences
temperature=1.9,
# Number of sequences to generate
num_return_sequences=3)
# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\\\\
\\\\
{}".format(i+1, tokenizer.decode(sample_output.tolist())))
# >> Generated text
#Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
#>> Generated text 1
#<|startoftext|> eu amo o brasil. O termo foi usado por alguns autores como uma forma de designar a formação do poder político do Brasil. A partir da década de 1960, o termo passou a ser usado para designar a formação política do Brasil. A partir de meados da década de 1970 e até o inicio dos anos 2000, o termo foi aplicado à formação político-administrativo do país, sendo utilizado por alguns autores como uma expressão de "política de direita". História Antecedentes O termo "político-administrário" foi usado pela primeira vez em 1891 por um gru
#>> Generated text 2
#<|startoftext|> eu amo o brasil. É uma das muitas pessoas do mundo, ao contrário da maioria das pessoas, que são chamados de "pessoas do Brasil", que são chamados de "brincos do país" e que têm uma carreira de mais de um século. O termo "brincal de ouro" é usado em referências às pessoas que vivem no Brasil, e que são chamados "brincos do país", que são "cidade" e que vivem na cidade de Nova York e que vive em um país onde a maior parte das pessoas são chamados de "cidades". Hist
#>> Generated text 3
#<|startoftext|> eu amo o brasil. É uma expressão que se refere ao uso de um instrumento musical em particular para se referir à qualidade musical, o que é uma expressão da qualidade da qualidade musical de uma pessoa. A expressão "amor" (em inglês, amo), é a expressão que pode ser usada com o intuito empregado em qualquer situação em que a vontade de uma pessoa de se sentir amado ou amoroso é mais do que um desejo de uma vontade. Em geral, a expressão "amoro" (do inglês, amo) pode também se referir tanto a uma pessoa como um instrumento de cordas ou de uma
``` |
dbmdz/german-gpt2-faust | d5aa8328284f7305e3b1a3d680cee886a12d2b2f | 2021-05-21T15:26:08.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"de",
"transformers",
"license:mit"
] | text-generation | false | dbmdz | null | dbmdz/german-gpt2-faust | 66 | 1 | transformers | 5,489 | ---
language: de
widget:
- text: "Schon um die Liebe"
license: mit
---
# German GPT-2 model
In this repository we release (yet another) GPT-2 model, that was trained on various texts for German.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
**Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it.
More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.
## German GPT-2 fine-tuned on Faust I and II
We fine-tuned our German GPT-2 model on "Faust I and II" from Johann Wolfgang Goethe. These texts can be obtained from [Deutsches Textarchiv (DTA)](http://www.deutschestextarchiv.de/book/show/goethe_faust01_1808). We use the "normalized" version of both texts (to avoid out-of-vocabulary problems with e.g. "ſ")
Fine-Tuning was done for 100 epochs, using a batch size of 4 with half precision on a RTX 3090. Total time was around 12 minutes (it is really fast!).
We also open source this fine-tuned model. Text can be generated with:
```python
from transformers import pipeline
pipe = pipeline('text-generation', model="dbmdz/german-gpt2-faust",
tokenizer="dbmdz/german-gpt2-faust")
text = pipe("Schon um die Liebe", max_length=100)[0]["generated_text"]
print(text)
```
and could output:
```
Schon um die Liebe bitte ich, Herr! Wer mag sich die dreifach Ermächtigen?
Sei mir ein Held!
Und daß die Stunde kommt spreche ich nicht aus.
Faust (schaudernd).
Den schönen Boten finde' ich verwirrend;
```
# License
All models are licensed under [MIT](LICENSE).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our German GPT-2 models just open an issue
[here](https://github.com/stefan-it/german-gpt/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
google/t5-11b-ssm-tqa | 55330fe7015e311b675fa8a90a04cf39d1c4fc88 | 2020-12-07T08:38:28.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:trivia_qa",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-11b-ssm-tqa | 66 | 3 | transformers | 5,490 | ---
language: en
datasets:
- c4
- wikipedia
- trivia_qa
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa).
**Note**: The model was fine-tuned on 100% of the train splits of [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa) for 10 steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Trivia QA - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-tqa**|**60.5**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-tqa|61.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-tqa")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-tqa")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
junnyu/bert_chinese_mc_base | ef8daad8e9da8105c084f4e57b4f935b0504edd5 | 2021-05-20T05:28:56.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/bert_chinese_mc_base | 66 | 2 | transformers | 5,491 | https://github.com/alibaba-research/ChineseBLUE |
mahaamami/distilgpt2-finetuned-wikitext2 | 38627f52bdce02d2df63b3b0dfa27752ace8b29d | 2022-01-05T00:03:59.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | mahaamami | null | mahaamami/distilgpt2-finetuned-wikitext2 | 66 | null | transformers | 5,492 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8565 | 1.0 | 948 | 3.5237 |
| 3.6142 | 2.0 | 1896 | 3.4570 |
| 3.5601 | 3.0 | 2844 | 3.4385 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
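A minimal generation sketch (not part of the original card), assuming the checkpoint loads as a standard causal language model:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mahaamami/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_length=50, num_return_sequences=1))
```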
|
microsoft/deberta-xxlarge-v2 | 21131a5441d2eb7936b758ca02a67baaa7a34f85 | 2021-02-11T02:05:17.000Z | [
"pytorch",
"deberta-v2",
"en",
"transformers",
"deberta",
"license:mit"
] | null | false | microsoft | null | microsoft/deberta-xxlarge-v2 | 66 | null | transformers | 5,493 | ---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)
|
monologg/koelectra-small-discriminator | fc3894f2a606ae0742b993ef36af00947bb3601e | 2020-12-26T16:23:23.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers"
] | null | false | monologg | null | monologg/koelectra-small-discriminator | 66 | null | transformers | 5,494 | ---
language: ko
---
# KoELECTRA (Small Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-small-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-small-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.tolist()[1:-1])))
```
|
nguyenvulebinh/spoken-norm | cc1e681532fc4445d2ce47e9ed41a0be98d8204b | 2022-02-11T17:21:36.000Z | [
"pytorch",
"transformers"
] | null | false | nguyenvulebinh | null | nguyenvulebinh/spoken-norm | 66 | 4 | transformers | 5,495 | # Transformation spoken text to written text
This model is used for converting raw ASR output from spoken form to written form (e.g. dates, numbers, ids, ...). It also supports formatting "out of vocab" terms by using an external vocabulary.
Some examples:
```text
input : tám giờ chín phút ngày mười tám tháng năm năm hai nghìn không trăm hai mươi hai
output : 8h9 18/5/2022
input : mã số quy đê tê tê đê hai tám chéo hai không không ba
output : mã số qdttd28/2003
input : thể tích tám mét khối trọng lượng năm mươi ki lô gam
output : thể tích 8 m3 trọng lượng 50 kg
input : ngày hai tám tháng tư cô vít bùng phát ở sờ cốt lờn chiếm tám mươi phần trăm là biến chủng đen ta và bê ta
ex_vocab : ['scotland', 'covid', 'delta', 'beta']
output : 28/4 covid bùng phát ở scotland chiếm 80 % là biến chủng delta và beta
```
## Model architecture

# Infer model
- Play around at [Huggingface Space](https://huggingface.co/spaces/nguyenvulebinh/spoken-norm)
```python
import torch
import model_handling
from data_handling import DataCollatorForNormSeq2Seq
from model_handling import EncoderDecoderSpokenNorm
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
## Init tokenizer and model
```python
tokenizer = model_handling.init_tokenizer()
model = EncoderDecoderSpokenNorm.from_pretrained('nguyenvulebinh/spoken-norm', cache_dir=model_handling.cache_dir)
data_collator = DataCollatorForNormSeq2Seq(tokenizer)
```
## Infer sample
```python
bias_list = ['scotland', 'covid', 'delta', 'beta']
input_str = 'ngày hai tám tháng tư cô vít bùng phát ở sờ cốt lờn chiếm tám mươi phần trăm là biến chủng đen ta và bê ta'
```
```python
inputs = tokenizer([input_str])
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
if len(bias_list) > 0:
bias = data_collator.encode_list_string(bias_list)
bias_input_ids = bias['input_ids']
bias_attention_mask = bias['attention_mask']
else:
bias_input_ids = None
bias_attention_mask = None
inputs = {
"input_ids": torch.tensor(input_ids),
"attention_mask": torch.tensor(attention_mask),
"bias_input_ids": bias_input_ids,
"bias_attention_mask": bias_attention_mask,
}
```
## Format input text **with** bias phrases
```python
outputs = model.generate(**inputs, output_attentions=True, num_beams=1, num_return_sequences=1)
for output in outputs.cpu().detach().numpy().tolist():
# print('\n', tokenizer.decode(output, skip_special_tokens=True).split(), '\n')
print(tokenizer.sp_model.DecodePieces(tokenizer.decode(output, skip_special_tokens=True).split()))
```
28/4 covid bùng phát ở scotland chiếm 80 % là biến chủng delta và beta
## Format input text **without** bias phrases
```python
outputs = model.generate(**{
"input_ids": torch.tensor(input_ids),
"attention_mask": torch.tensor(attention_mask),
"bias_input_ids": None,
"bias_attention_mask": None,
}, output_attentions=True, num_beams=1, num_return_sequences=1)
for output in outputs.cpu().detach().numpy().tolist():
# print('\n', tokenizer.decode(output, skip_special_tokens=True).split(), '\n')
print(tokenizer.sp_model.DecodePieces(tokenizer.decode(output, skip_special_tokens=True).split()))
```
28/4 cô vít bùng phát ở sờ cốt lờn chiếm 80 % là biến chủng đen ta và bê ta
## Contact
[email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh) |
textattack/distilbert-base-uncased-RTE | 9afcea1db32e40187490918bccdb9fc9cd3d4563 | 2020-07-06T16:31:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-RTE | 66 | null | transformers | 5,496 | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.6570397111913358, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
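A minimal usage sketch (not part of the original card): RTE is a sentence-pair entailment task, so premise and hypothesis are passed as a pair. The mapping of `LABEL_0`/`LABEL_1` to entailment and not-entailment is an assumption and should be checked against the model's config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/distilbert-base-uncased-RTE")
model = AutoModelForSequenceClassification.from_pretrained("textattack/distilbert-base-uncased-RTE")

premise = "A new study shows that regular exercise lowers the risk of heart disease."
hypothesis = "Exercising is good for the heart."

# Encode the sentence pair exactly as during fine-tuning (max sequence length 128, see above)
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # probabilities over the two RTE labels
```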
|
Beri/legal-qa | 67b37f86c53224e8818bff88611bd82806922d36 | 2022-03-02T05:12:04.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Beri | null | Beri/legal-qa | 66 | null | transformers | 5,497 | Entry not found |
tsantos/PathologyBERT | 338fe40394931f12f34a7ac14260a666aa77f0c8 | 2022-07-21T01:13:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"arxiv:1901.08746",
"arxiv:1903.10676",
"arxiv:1906.05474",
"arxiv:2205.06885",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tsantos | null | tsantos/PathologyBERT | 66 | null | transformers | 5,498 | ---
language: "en"
tags:
- fill-mask
---
# PathologyBERT - Masked Language Model with Breast Pathology Specimens.
Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. Recently, several studies have explored the utility and efficacy of contextual models in the clinical, medical, and biomedical domains ([BioBERT](https://arxiv.org/pdf/1901.08746.pdf), [ClinicalBERT](https://aclanthology.org/W19-1909/), [SciBERT](https://arxiv.org/abs/1903.10676), [BlueBERT](https://arxiv.org/abs/1906.05474)).
However, while there is a growing interest in developing language models for more specific domains, the current trend appears to prefer re-training general-domain models on specialized corpora rather than developing models from the ground up with specialized vocabulary. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. However, in fields requiring specialized terminology, such as pathology, these models often fail to perform adequately. One of the major reasons for this limitation is that BERT employs [Word-Pieces](https://www.semanticscholar.org/paper/Google%27s-Neural-Machine-Translation-System%3A-the-Gap-Wu-Schuster/dbde7dfa6cae81df8ac19ef500c42db96c3d1edd) for unsupervised input tokenization, a technique that relies on a predetermined set of Word-Pieces. The vocabulary is built such that it contains the most commonly used words or subword units, and as a result, any new word can be represented by frequent subwords. Although WordPiece was built to handle suffixes and complex compound words, it often fails with domain-specific terms. For example, while [ClinicalBERT](https://aclanthology.org/W19-1909/) successfully tokenizes the word 'endpoint' as ['end', '##point'], it tokenizes the word 'carcinoma' as ['car', '##cin', '##oma'], in which the word loses its actual meaning and is replaced by non-relevant junk pieces, such as 'car'. Words that are replaced by junk pieces may not play their original role in deriving the contextual representation of the sentence or paragraph, even when analyzed by powerful transformer models.
To facilitate research on language representations in the pathology domain and assist researchers in addressing the current limitations and advancing cancer research, we present PathologyBERT, a pre-trained masked language model trained on histopathology specimen reports.
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 64 (mean report length is 42±26), a masked language model probability of 0.15, and a learning rate of 2e-5 for pre-training the language model. The model was trained for 300,000 steps. All other BERT default parameters were used.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> language_model = pipeline('fill-mask', model='tsantos/PathologyBERT')
>>> language_model("intraductal papilloma with [MASK] AND MICRO calcifications")
[{'sequence': '[CLS] intraductal papilloma with sclerosis and micro calcifications [SEP]',
'score': 0.871,
'token': 2364,
'token_str': 'sclerosis'},
{'sequence': '[CLS] intraductal papilloma with hyalinization and micro calcifications [SEP]',
'score': 0.032,
'token': 4046,
'token_str': 'hyalinization'},
{'sequence': '[CLS] intraductal papilloma with atypia and micro calcifications [SEP]',
'score': 0.013,
'token': 652,
'token_str': 'atypia'},
{'sequence': '[CLS] intraductal papilloma with sclerosing and micro calcifications [SEP]',
'score': 0.006,
'token': 923,
'token_str': 'sclerosing'},
{'sequence': '[CLS] intraductal papilloma with calcifications and micro calcifications [SEP]',
'score': 0.004,
'token': 614,
'token_str': 'calcifications'}]
>>> language_model("micro calcifications with usual ductal hyperplasia and no [MASK] lesions identified.")
[{'sequence': '[CLS] micro calcifications with usual ductal hyperplasia and no atypical lesions identified. [SEP]',
'score': 0.902,
'token': 472,
'token_str': 'atypical'},
{'sequence': '[CLS] micro calcifications with usual ductal hyperplasia and no proliferative lesions identified. [SEP]',
'score': 0.054,
'token': 667,
'token_str': 'proliferative'},
{'sequence': '[CLS] micro calcifications with usual ductal hyperplasia and no papillary lesions identified. [SEP]',
'score': 0.009,
'token': 1177,
'token_str': 'papillary'},
{'sequence': '[CLS] micro calcifications with usual ductal hyperplasia and no invasive lesions identified. [SEP]',
'score': 0.003,
'token': 385,
'token_str': 'invasive'},
{'sequence': '[CLS] micro calcifications with usual ductal hyperplasia and no malignant lesions identified. [SEP]',
'score': 0.003,
'token': 581,
'token_str': 'malignant'}]
```
## More Information
Refer to the original paper, [Pre-trained Vs. A New Transformer Language Model for A Specific Domain - Breast Pathology Use-case](https://arxiv.org/abs/2205.06885) for additional details and masked language performance on Pathology Specimen Reports
## Hierarchical BERT Classification For Breast Cancer
We also provide a hierarchical classification system that uses PathologyBERT as its base model to classify pathology specimen reports.
You can test the system directly on HuggingFace: https://huggingface.co/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer
Github: https://github.com/thiagosantos1/HCSBC
## Questions?
If you have any questions, please email [email protected]
|
microsoft/dit-large | 95e907dd11353009449ca44007cfdeee05224a62 | 2022-03-08T10:40:24.000Z | [
"pytorch",
"beit",
"arxiv:2203.02378",
"transformers",
"dit"
] | null | false | microsoft | null | microsoft/dit-large | 66 | 4 | transformers | 5,499 | ---
tags:
- dit
inference: false
---
# Document Image Transformer (large-sized model)
Document Image Transformer (DiT) model pre-trained on IIT-CDIP (Lewis et al., 2006), a dataset that includes 42 million document images. It was introduced in the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/dit). Note that DiT is identical to the architecture of [BEiT](https://huggingface.co/docs/transformers/model_doc/beit).
Disclaimer: The team releasing DiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Document Image Transformer (DiT) is a transformer encoder model (BERT-like) pre-trained on a large collection of images in a self-supervised fashion. The pre-training objective for the model is to predict visual tokens from the encoder of a discrete VAE (dVAE), based on masked patches.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled document images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for encoding document images into a vector space, but it's mostly meant to be fine-tuned on tasks like document image classification, table detection or document layout analysis. See the [model hub](https://huggingface.co/models?search=microsoft/dit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import BeitFeatureExtractor, BeitForMaskedImageModeling
import torch
from PIL import Image
image = Image.open('path_to_your_document_image').convert('RGB')
feature_extractor = BeitFeatureExtractor.from_pretrained("microsoft/dit-large")
model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-large")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, logits = outputs.loss, outputs.logits
```
### BibTeX entry and citation info
```bibtex
@article{Lewis2006BuildingAT,
title={Building a test collection for complex document information processing},
author={David D. Lewis and Gady Agam and Shlomo Engelson Argamon and Ophir Frieder and David A. Grossman and Jefferson Heard},
journal={Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval},
year={2006}
}
``` |