modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sudo-s/exper_batch_16_e8 | b8556977f61df28e344812e2cb7917909fbb20fa | 2022-06-26T22:18:36.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper_batch_16_e8 | 151 | null | transformers | 4,000 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper_batch_16_e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper_batch_16_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3951
- Accuracy: 0.9129
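As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the 🤗 Transformers image-classification pipeline; the image path below is a placeholder:
```python
from transformers import pipeline

# "specimen.jpg" is a placeholder path; use any RGB image from the target domain
classifier = pipeline("image-classification", model="sudo-s/exper_batch_16_e8")
print(classifier("specimen.jpg", top_k=3))
```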
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8115 | 0.16 | 100 | 3.7948 | 0.1862 |
| 3.1194 | 0.31 | 200 | 3.0120 | 0.3281 |
| 2.3703 | 0.47 | 300 | 2.4791 | 0.4426 |
| 2.07 | 0.63 | 400 | 2.1720 | 0.5 |
| 1.6847 | 0.78 | 500 | 1.7291 | 0.5956 |
| 1.3821 | 0.94 | 600 | 1.4777 | 0.6299 |
| 0.9498 | 1.1 | 700 | 1.2935 | 0.6681 |
| 0.8741 | 1.25 | 800 | 1.1353 | 0.7051 |
| 0.8875 | 1.41 | 900 | 0.9951 | 0.7448 |
| 0.7233 | 1.56 | 1000 | 0.9265 | 0.7487 |
| 0.6696 | 1.72 | 1100 | 0.8660 | 0.7625 |
| 0.7364 | 1.88 | 1200 | 0.8710 | 0.7579 |
| 0.3933 | 2.03 | 1300 | 0.7162 | 0.8038 |
| 0.3443 | 2.19 | 1400 | 0.6305 | 0.8300 |
| 0.3376 | 2.35 | 1500 | 0.6273 | 0.8315 |
| 0.3071 | 2.5 | 1600 | 0.5988 | 0.8319 |
| 0.2863 | 2.66 | 1700 | 0.6731 | 0.8153 |
| 0.3017 | 2.82 | 1800 | 0.6042 | 0.8315 |
| 0.2382 | 2.97 | 1900 | 0.5118 | 0.8712 |
| 0.1578 | 3.13 | 2000 | 0.4917 | 0.8736 |
| 0.1794 | 3.29 | 2100 | 0.5302 | 0.8631 |
| 0.1093 | 3.44 | 2200 | 0.5035 | 0.8635 |
| 0.1076 | 3.6 | 2300 | 0.5186 | 0.8674 |
| 0.1219 | 3.76 | 2400 | 0.4723 | 0.8801 |
| 0.1017 | 3.91 | 2500 | 0.5132 | 0.8712 |
| 0.0351 | 4.07 | 2600 | 0.4709 | 0.8728 |
| 0.0295 | 4.23 | 2700 | 0.4674 | 0.8824 |
| 0.0416 | 4.38 | 2800 | 0.4836 | 0.8805 |
| 0.0386 | 4.54 | 2900 | 0.4663 | 0.8828 |
| 0.0392 | 4.69 | 3000 | 0.4003 | 0.8990 |
| 0.0383 | 4.85 | 3100 | 0.4187 | 0.8948 |
| 0.0624 | 5.01 | 3200 | 0.4460 | 0.8874 |
| 0.0188 | 5.16 | 3300 | 0.4169 | 0.9029 |
| 0.0174 | 5.32 | 3400 | 0.4098 | 0.8951 |
| 0.0257 | 5.48 | 3500 | 0.4289 | 0.8951 |
| 0.0123 | 5.63 | 3600 | 0.4295 | 0.9029 |
| 0.0052 | 5.79 | 3700 | 0.4395 | 0.8994 |
| 0.0081 | 5.95 | 3800 | 0.4217 | 0.9082 |
| 0.0032 | 6.1 | 3900 | 0.4216 | 0.9056 |
| 0.0033 | 6.26 | 4000 | 0.4113 | 0.9082 |
| 0.0024 | 6.42 | 4100 | 0.4060 | 0.9102 |
| 0.0022 | 6.57 | 4200 | 0.4067 | 0.9090 |
| 0.0031 | 6.73 | 4300 | 0.4005 | 0.9113 |
| 0.0021 | 6.89 | 4400 | 0.4008 | 0.9129 |
| 0.0021 | 7.04 | 4500 | 0.3967 | 0.9113 |
| 0.0043 | 7.2 | 4600 | 0.3960 | 0.9121 |
| 0.0022 | 7.36 | 4700 | 0.3962 | 0.9125 |
| 0.0021 | 7.51 | 4800 | 0.3992 | 0.9121 |
| 0.002 | 7.67 | 4900 | 0.3951 | 0.9129 |
| 0.0023 | 7.82 | 5000 | 0.3952 | 0.9125 |
| 0.0021 | 7.98 | 5100 | 0.3952 | 0.9129 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.5.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SetFit/distilbert-base-uncased__enron_spam__all-train | becd769a06f34c608663d0e04bda5fc3a8e04268 | 2022-01-26T21:35:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__enron_spam__all-train | 150 | null | transformers | 4,001 | Entry not found |
beomi/kcgpt2 | 5554362307183649f72b31624203308a96392c56 | 2021-11-22T16:02:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | beomi | null | beomi/kcgpt2 | 150 | 1 | transformers | 4,002 | Entry not found |
deepset/gbert-large-sts | df9bf7c1d1f0482e9109c07d82c3ce7ab6895046 | 2021-10-21T12:16:36.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"exbert",
"license:mit"
] | text-classification | false | deepset | null | deepset/gbert-large-sts | 150 | 5 | transformers | 4,003 | ---
language: de
license: mit
tags:
- exbert
---
## Overview
**Language model:** gbert-large-sts
**Language:** German
**Training data:** German STS benchmark train and dev set
**Eval data:** German STS benchmark test set
**Infrastructure**: 1x V100 GPU
**Published**: August 12th, 2021
## Details
- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the [STS benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), which is available [here](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark).
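As a rough usage sketch (not part of the original card), the checkpoint can be loaded as a sequence-classification model; the German sentence pair is an invented example, and we assume the head outputs a single similarity logit:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large-sts")
model = AutoModelForSequenceClassification.from_pretrained("deepset/gbert-large-sts")

# Invented example sentence pair (German)
inputs = tokenizer("Der Hund spielt im Garten.", "Ein Hund tollt draußen herum.", return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits
print(similarity)
```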
## Hyperparameters
```
batch_size = 16
n_epochs = 4
warmup_ratio = 0.1
learning_rate = 2e-5
lr_schedule = LinearWarmup
```
## Performance
Stay tuned... and watch out for new papers on arxiv.org ;)
## Authors
- Julian Risch: `julian.risch [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Gutsch: `julian.gutsch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
julien-c/reactiongif-roberta | 0d81232b9f158b051a4caee860ad287b08940d48 | 2021-06-11T15:59:26.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:julien-c/reactiongif",
"transformers",
"generated-from-trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | julien-c | null | julien-c/reactiongif-roberta | 150 | 1 | transformers | 4,004 | ---
license: apache-2.0
tags:
- generated-from-trainer
datasets:
- julien-c/reactiongif
metrics:
- accuracy
model-index:
- name: model
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.2662102282047272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9150
- Accuracy: 0.2662
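A minimal inference sketch (not part of the original card), using the text-classification pipeline with an invented input tweet:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="julien-c/reactiongif-roberta")
# Invented example input; the model predicts a reaction-GIF label
print(classifier("This is hilarious, I can't stop laughing"))
```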
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0528 | 0.44 | 1000 | 3.0265 | 0.2223 |
| 2.9836 | 0.89 | 2000 | 2.9263 | 0.2332 |
| 2.7409 | 1.33 | 3000 | 2.9041 | 0.2533 |
| 2.7905 | 1.77 | 4000 | 2.8763 | 0.2606 |
| 2.4359 | 2.22 | 5000 | 2.9072 | 0.2642 |
| 2.4507 | 2.66 | 6000 | 2.9230 | 0.2644 |
### Framework versions
- Transformers 4.7.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it | f25016c4813e79ba2f3f663a93524c2a68b785a6 | 2020-12-11T21:56:44.000Z | [
"pytorch",
"camembert",
"question-answering",
"it",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it | 150 | null | transformers | 4,005 | ---
language: it
---
# UmBERTo Wikipedia Uncased + italian SQuAD v1 📚 🧐 ❓
[UmBERTo-Wikipedia-Uncased](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) fine-tuned on [Italian SQUAD v1 dataset](https://github.com/crux82/squad-it) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
[UmBERTo](https://github.com/musixmatchresearch/umberto) is a Roberta-based Language Model trained on large Italian Corpora and uses two innovative approaches: SentencePiece and Whole Word Masking.
UmBERTo-Wikipedia-Uncased is trained on a relatively small corpus (~7 GB) extracted from Wikipedia-ITA.
## Details of the downstream task (Q&A) - Dataset 📚
[SQuAD](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) [Rajpurkar et al. 2016] is a large scale dataset for training of question answering systems on factoid questions. It contains more than 100,000 question-answer pairs about passages from 536 articles chosen from various domains of Wikipedia.
**SQuAD-it** is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian. The dataset contains more than 60,000 question/answer pairs derived from the original English dataset.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type bert \
--model_name_or_path 'Musixmatch/umberto-wikipedia-uncased-v1' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/SQuAD_it-train.json' \
--predict_file '/content/dataset/SQuAD_it-test.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/drive/My\ Drive/umberto-uncased-finetuned-squadv1-it \
--overwrite_output_dir \
--save_steps 1000
```
With 10 epochs the model overfits the training dataset, so I evaluated the checkpoints created during training (every 1000 steps) and chose the best one (in this case, the checkpoint created at 17,000 steps).
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **60.50** |
| **F1** | **72.41** |
```json
{
  "exact": 60.50729399395453,
  "f1": 72.4141113348361,
  "total": 7609,
  "HasAns_exact": 60.50729399395453,
  "HasAns_f1": 72.4141113348361,
  "HasAns_total": 7609,
  "best_exact": 60.50729399395453,
  "best_exact_thresh": 0.0,
  "best_f1": 72.4141113348361,
  "best_f1_thresh": 0.0
}
```
## Comparison ⚖️
| Model | EM | F1 score |
| -------------------------------------------------------------------------------------------------------------------------------- | --------- | --------- |
| [DrQA-it trained on SQuAD-it ](https://github.com/crux82/squad-it/blob/master/README.md#evaluating-a-neural-model-over-squad-it) | 56.1 | 65.9 |
| This one | 60.50 | 72.41 |
| [bert-italian-finedtuned-squadv1-it-alfa](https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa) | **62.51** | **74.16** |
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it')
QnA_pipeline({
'context': 'Marco Aurelio era un imperatore romano che praticava lo stoicismo come filosofia di vita .',
'question': 'Quale filosofia seguì Marco Aurelio ?'
})
# Output:
{'answer': 'stoicismo', 'end': 65, 'score': 0.9477770241566028, 'start': 56}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
uer/roberta-tiny-word-chinese-cluecorpussmall | 17eeb8ac048438331c97281b3142afc7827af294 | 2022-02-19T15:57:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/roberta-tiny-word-chinese-cluecorpussmall | 150 | 1 | transformers | 4,006 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "最近一趟去北京的[MASK]几点发车"
---
# Chinese word-based RoBERTa Miniatures
## Model description
This is the set of 5 Chinese word-based RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
Most Chinese pre-trained weights are based on Chinese characters. Compared with character-based models, word-based models are faster (because of shorter sequence lengths) and have better performance according to our experimental results. To this end, we released 5 Chinese word-based RoBERTa models of different sizes. In order to make it easier for users to reproduce the results, we used a publicly available corpus and word segmentation tool, and provided all training details.
Note that the results returned by the Hosted inference API (on the right) are not displayed properly: when the predicted word has multiple characters, only that word, rather than the entire sentence, is shown. You can click **JSON Output** to see the full output.
You can download the 5 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **word-based RoBERTa-Tiny** | [**L=2/H=128 (Tiny)**][2_128] |
| **word-based RoBERTa-Mini** | [**L=4/H=256 (Mini)**][4_256] |
| **word-based RoBERTa-Small** | [**L=4/H=512 (Small)**][4_512] |
| **word-based RoBERTa-Medium** | [**L=8/H=512 (Medium)**][8_512] |
| **word-based RoBERTa-Base** | [**L=12/H=768 (Base)**][12_768] |
Compared with [char-based models](https://huggingface.co/uer/chinese_roberta_L-2_H-128), word-based models achieve better results in most cases. Here are the scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny(char) | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| **RoBERTa-Tiny(word)** | **74.3(+2.0)** | **86.4** | **93.2** | **82.0** | **66.4** | **58.2** | **59.6** |
| RoBERTa-Mini(char) | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| **RoBERTa-Mini(word)** | **76.7(+1.0)** | **87.6** | **94.1** | **85.4** | **66.9** | **59.2** | **67.3** |
| RoBERTa-Small(char) | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| **RoBERTa-Small(word)** | **78.1(+1.3)** | **88.5** | **94.7** | **87.4** | **67.6** | **60.9** | **69.8** |
| RoBERTa-Medium(char) | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| **RoBERTa-Medium(word)** | **78.9(+1.1)** | **89.2** | **95.1** | **88.0** | **67.8** | **60.6** | **73.0** |
| RoBERTa-Base(char) | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
| **RoBERTa-Base(word)** | **80.2(+0.7)** | **90.3** | **95.7** | **89.4** | **68.0** | **61.5** | **76.8** |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of word-based RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-medium-word-chinese-cluecorpussmall')
>>> unmasker("[MASK]的首都是北京。")
[
{'sequence': '中国 的首都是北京。',
'score': 0.21525809168815613,
'token': 2873,
'token_str': '中国'},
{'sequence': '北京 的首都是北京。',
'score': 0.15194718539714813,
'token': 9502,
'token_str': '北京'},
{'sequence': '我们 的首都是北京。',
'score': 0.08854265511035919,
'token': 4215,
'token_str': '我们'},
{'sequence': '美国 的首都是北京。',
'score': 0.06808705627918243,
'token': 7810,
'token_str': '美国'},
{'sequence': '日本 的首都是北京。',
'score': 0.06071401759982109,
'token': 7788,
'token_str': '日本'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, BertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFBertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Since BertTokenizer does not support sentencepiece, AlbertTokenizer is used here.
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on CLUECorpusSmall corpus:
```
>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
model_prefix='cluecorpussmall_spm',
vocab_size=100000,
max_sentence_length=1024,
max_sentencepiece_length=6,
user_defined_symbols=['[MASK]','[unused1]','[unused2]',
'[unused3]','[unused4]','[unused5]','[unused6]',
'[unused7]','[unused8]','[unused9]','[unused10]'],
pad_id=0,
pad_piece='[PAD]',
unk_id=1,
unk_piece='[UNK]',
bos_id=2,
bos_piece='[CLS]',
eos_id=3,
eos_piece='[SEP]',
train_extremely_large_corpus=True
)
```
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of word-based RoBERTa-Medium:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-word-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-word-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-word-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-word-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall |
zhufy/xquad-th-mbert-base | 48bcc21b1821b1d36d00d1eeecc1e6732a2d4538 | 2022-04-23T05:07:59.000Z | [
"pytorch",
"bert",
"question-answering",
"Thai",
"dataset:xquad.th",
"transformers",
"bert-base",
"autotrain_compatible"
] | question-answering | false | zhufy | null | zhufy/xquad-th-mbert-base | 150 | null | transformers | 4,007 | ---
language: Thai
task: extractive question answering
datasets: xquad.th
tags:
- bert-base
---
# Model Description
This model is for Thai extractive question answering. It is based on the multilingual BERT [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model, and it is case-sensitive: it makes a difference between english and English.
# Training data
We split the original [xquad](https://github.com/deepmind/xquad) dataset into training/validation/testing sets. In total, there are 876/161/153 question-answer pairs from 34/7/7 articles in the training/validation/testing sets, respectively. You can find the details of the dataset here: [xquad_split](https://huggingface.co/datasets/zhufy/xquad_split).
# How to use
You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline:
``` python
>>> from transformers.pipelines import pipeline
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/xquad-th-mbert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/xquad-th-mbert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> context = "ดินดอนสามเหลี่ยม ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น
ใกล้มิลลิงเงิน อาน เดอ เรน ใกล้ชายแดนเนเธอร์แลนด์ติดกับเยอรมัน
โดยมีสาขาของไรน์ไหลเข้าสู่แม่น้ำวาลและเนเดอร์เรน เนื่องจากน้ำส่วนใหญ่จากแม่น้ำไรน์
คำว่า ดินดอนสามเหลี่ยมไรน์ ซึ่งสั้นกว่าจึงเป็นคำที่ใช้เรียกกันทั่วไป อย่างไรก็ดี
ชื่อนี้ยังใช้เรียกดินดอนสามเหลี่ยมบริเวณแม่น้ำซึ่งแม่น้ำไรน์ไหลเข้าสู่ทะเลสาบคอนสแตนซ์อีกด้วย
ดังนั้นการเรียกดินดอนสามเหลี่ยมซึ่งใหญ่กว่าว่าไรน์-เมิส หรือแม้กระทั่งดินแดนสามเหลี่ยมไรน์
-เมิส-สเกลต์จึงชัดเจนกว่า เนื่องจากแม่น้ำสเกลต์สิ้นสุดที่ดินดอนสามเหลี่ยมเดียวกัน"
>>> question = "ดินดอนสามเหลี่ยมในเนเธอร์แลนด์มีชื่อว่าอะไร?"
>>> inputs = {"question": question,
"context":context }
>>> nlp(inputs)
{'score': 0.9426798224449158,
'start': 17,
'end': 84,
'answer': 'ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น'}
``` |
eslamxm/mbart-finetune-en-cnn | 2b086abad9a7c4f45730188d2b7056580721b628 | 2022-06-17T16:48:32.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"en",
"seq2seq",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mbart-finetune-en-cnn | 150 | null | transformers | 4,008 | ---
tags:
- summarization
- en
- seq2seq
- mbart
- Abstractive Summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: mbert-finetune-en-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finetune-en-cnn
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5577
- Rouge-1: 37.69
- Rouge-2: 16.47
- Rouge-l: 35.53
- Gen Len: 79.93
- Bertscore: 74.92
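A minimal usage sketch (not part of the original card), loading the checkpoint with the summarization pipeline; the article below is an invented placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="eslamxm/mbart-finetune-en-cnn")

# Invented example input; any English news article works
article = (
    "The city council approved a new public transport plan on Tuesday. "
    "The plan adds three bus lines and extends the tram network to the airport, "
    "with construction expected to start next year."
)
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```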
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ricardo-filho/bert_base_tcm_0.7 | 6f6251c0533dd90f79f7a587ff5c9972d719e8dc | 2022-06-17T13:40:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ricardo-filho | null | ricardo-filho/bert_base_tcm_0.7 | 150 | null | transformers | 4,009 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert_base_tcm_0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_tcm_0.7
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Criterio Julgamento Precision: 0.8235
- Criterio Julgamento Recall: 0.9032
- Criterio Julgamento F1: 0.8615
- Criterio Julgamento Number: 93
- Data Sessao Precision: 0.7324
- Data Sessao Recall: 0.9286
- Data Sessao F1: 0.8189
- Data Sessao Number: 56
- Modalidade Licitacao Precision: 0.9415
- Modalidade Licitacao Recall: 0.9769
- Modalidade Licitacao F1: 0.9589
- Modalidade Licitacao Number: 346
- Numero Exercicio Precision: 0.9486
- Numero Exercicio Recall: 0.9486
- Numero Exercicio F1: 0.9486
- Numero Exercicio Number: 175
- Objeto Licitacao Precision: 0.5352
- Objeto Licitacao Recall: 0.6909
- Objeto Licitacao F1: 0.6032
- Objeto Licitacao Number: 55
- Valor Objeto Precision: 0.8
- Valor Objeto Recall: 0.8649
- Valor Objeto F1: 0.8312
- Valor Objeto Number: 37
- Overall Precision: 0.8680
- Overall Recall: 0.9318
- Overall F1: 0.8987
- Overall Accuracy: 0.9966
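A minimal inference sketch (not part of the original card), using the token-classification pipeline; the Portuguese sentence is an invented example and the entity labels follow the fields listed above:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ricardo-filho/bert_base_tcm_0.7",
    aggregation_strategy="simple",
)
# Invented example sentence about a public procurement notice
print(ner("Pregão Presencial nº 12/2021, exercício de 2021, critério de julgamento: menor preço."))
```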
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0267 | 1.0 | 2332 | 0.0175 | 0.8333 | 0.9140 | 0.8718 | 93 | 0.6825 | 0.7679 | 0.7227 | 56 | 0.9342 | 0.9855 | 0.9592 | 346 | 0.9194 | 0.9771 | 0.9474 | 175 | 0.4154 | 0.4909 | 0.45 | 55 | 0.5 | 0.7568 | 0.6022 | 37 | 0.8303 | 0.9121 | 0.8693 | 0.9954 |
| 0.0211 | 2.0 | 4664 | 0.0158 | 0.7154 | 0.9462 | 0.8148 | 93 | 0.7812 | 0.8929 | 0.8333 | 56 | 0.9319 | 0.9884 | 0.9593 | 346 | 0.9605 | 0.9714 | 0.9659 | 175 | 0.4 | 0.6545 | 0.4966 | 55 | 0.8293 | 0.9189 | 0.8718 | 37 | 0.8353 | 0.9449 | 0.8867 | 0.9956 |
| 0.0127 | 3.0 | 6996 | 0.0157 | 0.8218 | 0.8925 | 0.8557 | 93 | 0.8254 | 0.9286 | 0.8739 | 56 | 0.9522 | 0.9798 | 0.9658 | 346 | 0.96 | 0.96 | 0.96 | 175 | 0.5735 | 0.7091 | 0.6341 | 55 | 0.6857 | 0.6486 | 0.6667 | 37 | 0.8835 | 0.9252 | 0.9038 | 0.9957 |
| 0.0074 | 4.0 | 9328 | 0.0128 | 0.8235 | 0.9032 | 0.8615 | 93 | 0.7324 | 0.9286 | 0.8189 | 56 | 0.9415 | 0.9769 | 0.9589 | 346 | 0.9486 | 0.9486 | 0.9486 | 175 | 0.5352 | 0.6909 | 0.6032 | 55 | 0.8 | 0.8649 | 0.8312 | 37 | 0.8680 | 0.9318 | 0.8987 | 0.9966 |
| 0.0065 | 5.0 | 11660 | 0.0177 | 0.8113 | 0.9247 | 0.8643 | 93 | 0.675 | 0.9643 | 0.7941 | 56 | 0.9444 | 0.9827 | 0.9632 | 346 | 0.9392 | 0.9714 | 0.9551 | 175 | 0.5075 | 0.6182 | 0.5574 | 55 | 0.7674 | 0.8919 | 0.825 | 37 | 0.8566 | 0.9409 | 0.8968 | 0.9958 |
| 0.005 | 6.0 | 13992 | 0.0161 | 0.8485 | 0.9032 | 0.875 | 93 | 0.7164 | 0.8571 | 0.7805 | 56 | 0.9496 | 0.9798 | 0.9644 | 346 | 0.9556 | 0.9829 | 0.9690 | 175 | 0.6290 | 0.7091 | 0.6667 | 55 | 0.8108 | 0.8108 | 0.8108 | 37 | 0.8878 | 0.9344 | 0.9105 | 0.9967 |
| 0.0039 | 7.0 | 16324 | 0.0185 | 0.8925 | 0.8925 | 0.8925 | 93 | 0.7812 | 0.8929 | 0.8333 | 56 | 0.9602 | 0.9769 | 0.9685 | 346 | 0.9607 | 0.9771 | 0.9688 | 175 | 0.5224 | 0.6364 | 0.5738 | 55 | 0.8378 | 0.8378 | 0.8378 | 37 | 0.8951 | 0.9291 | 0.9118 | 0.9966 |
| 0.0035 | 8.0 | 18656 | 0.0188 | 0.8431 | 0.9247 | 0.8821 | 93 | 0.7903 | 0.875 | 0.8305 | 56 | 0.9571 | 0.9682 | 0.9626 | 346 | 0.9605 | 0.9714 | 0.9659 | 175 | 0.6981 | 0.6727 | 0.6852 | 55 | 0.8462 | 0.8919 | 0.8684 | 37 | 0.9068 | 0.9318 | 0.9191 | 0.9969 |
| 0.0017 | 9.0 | 20988 | 0.0207 | 0.8529 | 0.9355 | 0.8923 | 93 | 0.7727 | 0.9107 | 0.8361 | 56 | 0.9630 | 0.9769 | 0.9699 | 346 | 0.9605 | 0.9714 | 0.9659 | 175 | 0.7143 | 0.6364 | 0.6731 | 55 | 0.8462 | 0.8919 | 0.8684 | 37 | 0.9107 | 0.9370 | 0.9237 | 0.9968 |
| 0.002 | 10.0 | 23320 | 0.0191 | 0.8614 | 0.9355 | 0.8969 | 93 | 0.7647 | 0.9286 | 0.8387 | 56 | 0.9549 | 0.9798 | 0.9672 | 346 | 0.9553 | 0.9771 | 0.9661 | 175 | 0.6167 | 0.6727 | 0.6435 | 55 | 0.825 | 0.8919 | 0.8571 | 37 | 0.8954 | 0.9436 | 0.9188 | 0.9968 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zhifei/autotrain-autotrain-chinese-title-summarization-9-1101340178 | dc42d4057c06576b99763c108124aecea13b0b56 | 2022-07-07T10:49:19.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"unk",
"dataset:zhifei/autotrain-data-autotrain-chinese-title-summarization-9",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | zhifei | null | zhifei/autotrain-autotrain-chinese-title-summarization-9-1101340178 | 150 | null | transformers | 4,010 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zhifei/autotrain-data-autotrain-chinese-title-summarization-9
co2_eq_emissions: 1.565396518204961
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1101340178
- CO2 Emissions (in grams): 1.565396518204961
## Validation Metrics
- Loss: 0.00012778821110259742
- Rouge1: 29.2308
- Rouge2: 0.0
- RougeL: 29.2308
- RougeLsum: 29.2308
- Gen Len: 18.4462
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/zhifei/autotrain-autotrain-chinese-title-summarization-9-1101340178
``` |
pszemraj/grammar-synthesis-base | 4b24c4281298966b47251d88feada34754849af6 | 2022-07-22T08:30:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:jfleg",
"arxiv:2107.06751",
"transformers",
"grammar",
"spelling",
"punctuation",
"error-correction",
"grammar synthesis",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | pszemraj | null | pszemraj/grammar-synthesis-base | 150 | null | transformers | 4,011 | ---
license: cc-by-nc-sa-4.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
datasets:
- jfleg
widget:
- text: "i can has cheezburger"
example_title: "cheezburger"
- text: "There car broke down so their hitching a ride to they're class."
example_title: "compound-1"
- text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s
i again tort watfettering an we have estimated the trend an
called wot to be called sthat of exty right now we can and look at
wy this should not hare a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty"
example_title: "Transcribed Audio Example 2"
- text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
example_title: "incorrect word choice (context)"
- text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording
an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about
ta ohow to remove trents in these nalitives from time series"
example_title: "lowercased audio transcription output"
- text: "Semo eaxmeslp of bda gmaramr ttah occru deu to nounprnooun ageremten errrso inlceud Anan adn Pat aer mairred he has bnee togethre fro 20 yaesr Anna and Pta aer plraul wheil he is sniurgla Teh sentecne suhold rdea Aann adn Pat are mraried tyhe heav"
example_title: "descramble unintelligible text"
parameters:
max_length: 128
min_length: 2
num_beams: 8
repetition_penalty: 1.3
length_penalty: 0.95
early_stopping: True
---
# grammar-synthesis-base (beta)
a fine-tuned version of [google/t5-base-lm-adapt](https://huggingface.co/google/t5-base-lm-adapt) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset. Check out a [demo notebook on Colab here](https://colab.research.google.com/gist/pszemraj/91abb08aa99a14d9fdc59e851e8aed66/demo-for-grammar-synthesis-base.ipynb).
usage in Python (after `pip install transformers`):
```
from transformers import pipeline
corrector = pipeline(
'text2text-generation',
'pszemraj/grammar-synthesis-base',
)
raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
## Model description
The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on a potentially grammatically incorrect text **that could have a lot of mistakes** with the important qualifier of **it does not semantically change text/information that IS grammatically correct.**
Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
## Limitations
- dataset: `cc-by-nc-sa-4.0`
- model: `apache-2.0`
- this is **still a work-in-progress** and while probably useful for "single-shot grammar correction" in a lot of cases, **give the outputs a glance for correctness ok?**
## Use Cases
Obviously, this section is quite general as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases:
1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (this is literally some of the examples) or something like handwriting OCR.
- To be investigated further, depending on what model/system is used it _might_ be worth it to apply this after OCR on typed characters.
2. Correcting/infilling text generated by text generation models to be cohesive/remove obvious errors that break the conversation immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B).
> An example of this model running on CPU with beam search:
```
original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```
_Note: that I have some other logic that removes any periods at the end of the final sentence in this chatbot setting [to avoid coming off as passive aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)_
3. Somewhat related to #2 above, fixing/correcting so-called [tortured-phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways text was generated by a language model. _Note that _SOME_ of these are not fixed, especially as they venture into domain-specific terminology (i.e. irregular timberland instead of Random Forest)._
## Training and evaluation data
More information needed 😉
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ThomasNLG/t5-qg_webnlg_synth-en | 10d5367deb88263e2fcb3a25ae6eb8bd93aedebf | 2021-07-09T07:45:44.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:squad_v2",
"arxiv:2104.07555",
"transformers",
"qa",
"question",
"generation",
"SQuAD",
"data2text",
"metric",
"nlg",
"t5-small",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ThomasNLG | null | ThomasNLG/t5-qg_webnlg_synth-en | 149 | 2 | transformers | 4,012 | ---
language: en
tags:
- qa
- question
- generation
- SQuAD
- data2text
- metric
- nlg
- t5-small
license: mit
datasets:
- squad_v2
model-index:
- name: t5-qg_webnlg_synth-en
results:
- task:
name: Data Question Generation
type: Text To Text Generation
widget:
- text: "The Eagle </s> name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"
---
# t5-qg_webnlg_synth-en
## Model description
This model is a *Data Question Generation* model based on T5-small, that generates questions, given a structured table as input and the conditioned answer.
It is actually a component of [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can be used independently as it is, for QG only.
## How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qg_webnlg_synth-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qg_webnlg_synth-en")
```
You can play with the model using the inference API; the text input format should follow this template (in accordance with the training stage of the model):
`text_input = "{ANSWER} </s> {CONTEXT}"`
where `CONTEXT` is a structured table that is linearised this way:
`CONTEXT = "name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"`
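Putting it together, a minimal generation sketch (reusing the widget example above as input):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qg_webnlg_synth-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qg_webnlg_synth-en")

text_input = "The Eagle </s> name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"
inputs = tokenizer(text_input, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # a generated question about the table
```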
## Training data
The model was trained on synthetic data as described in [Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation](https://arxiv.org/abs/2104.07555).
### Citation info
```bibtex
@article{rebuffel2021data,
title={Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation},
author={Rebuffel, Cl{\'e}ment and Scialom, Thomas and Soulier, Laure and Piwowarski, Benjamin and Lamprier, Sylvain and Staiano, Jacopo and Scoutheeten, Geoffrey and Gallinari, Patrick},
journal={arXiv preprint arXiv:2104.07555},
year={2021}
}
``` |
asapp/sew-mid-100k | 8ea839f6cd7d4258bca9c76c0790e2041081f728 | 2021-10-26T19:38:18.000Z | [
"pytorch",
"sew",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-mid-100k | 149 | null | transformers | 4,013 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-mid
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
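A minimal feature-extraction sketch (not part of the original card); the input below is a dummy one-second clip of silence at 16 kHz, to be replaced with real speech samples:
```python
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-mid-100k")
model = SEWModel.from_pretrained("asapp/sew-mid-100k")

# Dummy one-second clip of silence at 16 kHz; replace with real audio
speech = torch.zeros(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```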
|
bespin-global/klue-roberta-small-3i4k-intent-classification | cd3371a10fc8d6da5be80d66cc212a07d3e6aaed | 2021-12-20T05:56:59.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"ko",
"dataset:kor_3i4k",
"transformers",
"intent-classification",
"license:cc-by-nc-4.0"
] | text-classification | false | bespin-global | null | bespin-global/klue-roberta-small-3i4k-intent-classification | 149 | 2 | transformers | 4,014 | ---
language: ko
tags:
- intent-classification
datasets:
- kor_3i4k
license: cc-by-nc-4.0
---
## Finetuning
- Pretrain Model : [klue/roberta-small](https://github.com/KLUE-benchmark/KLUE)
- Dataset for fine-tuning : [3i4k](https://github.com/warnikchow/3i4k)
- Train : 46,863
- Validation : 8,271 (15% of Train)
- Test : 6,121
- Label info
- 0: "fragment",
- 1: "statement",
- 2: "question",
- 3: "command",
- 4: "rhetorical question",
- 5: "rhetorical command",
- 6: "intonation-dependent utterance"
- Parameters of Training
```
{
"epochs": 3 (setting 10 but early stopped),
"batch_size":32,
"optimizer_class": "<keras.optimizer_v2.adam.Adam'>",
"optimizer_params": {
"lr": 5e-05
},
"min_delta": 0.01
}
```
## Usage
``` python
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification, TextClassificationPipeline
# Load the fine-tuned model from the Hugging Face Model Hub
HUGGINGFACE_MODEL_PATH = "bespin-global/klue-roberta-small-3i4k-intent-classification"
loaded_tokenizer = RobertaTokenizerFast.from_pretrained(HUGGINGFACE_MODEL_PATH )
loaded_model = RobertaForSequenceClassification.from_pretrained(HUGGINGFACE_MODEL_PATH )
# using Pipeline
text_classifier = TextClassificationPipeline(
tokenizer=loaded_tokenizer,
model=loaded_model,
return_all_scores=True
)
# predict
text = "your text"
preds_list = text_classifier(text)
best_pred = preds_list[0]
print(f"Label of Best Intention: {best_pred['label']}")
print(f"Score of Best Intention: {best_pred['score']}")
```
## Evaluation
```
                                precision    recall  f1-score   support

                       command       0.89      0.92      0.90      1296
                      fragment       0.98      0.96      0.97       600
intonation-dependent utterance       0.71      0.69      0.70       327
                      question       0.95      0.97      0.96      1786
            rhetorical command       0.87      0.64      0.74       108
           rhetorical question       0.61      0.63      0.62       174
                     statement       0.91      0.89      0.90      1830

                      accuracy                           0.90      6121
                     macro avg       0.85      0.81      0.83      6121
                  weighted avg       0.90      0.90      0.90      6121
```
## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/) |
ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune | 695b2d48d52ff60ee5cae7395d02f7347e40e94d | 2022-05-16T11:41:05.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"nl",
"dataset:ml6team/cnn_dailymail_nl",
"transformers",
"bart",
"summarization",
"autotrain_compatible"
] | summarization | false | ml6team | null | ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune | 149 | 9 | transformers | 4,015 | ---
language:
- nl
tags:
- mbart
- bart
- summarization
datasets:
- ml6team/cnn_dailymail_nl
pipeline_tag: summarization
widget:
- text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.'
- text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.'
---
# mbart-large-cc25-cnn-dailymail-nl
## Model description
Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97)
## Intended uses & limitations
It's meant for summarizing Dutch news articles.
#### How to use
```python
import transformers
undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained(
"ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune"
)
tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
summarization_pipeline = transformers.pipeline(
task="summarization",
model=undisputed_best_model,
tokenizer=tokenizer,
)
summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[
"nl_XX"
]
article = "Kan je dit even samenvatten alsjeblief." # Dutch
summarization_pipeline(
article,
do_sample=True,
top_p=0.75,
top_k=50,
# num_beams=4,
min_length=50,
early_stopping=True,
truncation=True,
)[0]["summary_text"]
```
## Training data
Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl) and another smaller dataset that we can't open source because we scraped it from the internet. For more information check out our blog post [here](https://blog.ml6.eu/). |
mrm8488/distilroberta-finetuned-tweets-hate-speech | 9ddfe248ddc5e5e916c43ef64bfdf8406dbfe266 | 2021-05-20T18:25:15.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:tweets_hate_speech_detection",
"transformers",
"twitter",
"hate",
"speech"
] | text-classification | false | mrm8488 | null | mrm8488/distilroberta-finetuned-tweets-hate-speech | 149 | 3 | transformers | 4,016 | ---
language: en
tags:
- twitter
- hate
- speech
datasets:
- tweets_hate_speech_detection
widget:
- text: "the fuck done with #mansplaining and other bullshit."
---
# distilroberta-base fine-tuned on tweets_hate_speech_detection dataset for hate speech detection
Validation accuracy: 0.98
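A minimal usage sketch (not part of the original card), using the widget text from the metadata above as input:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-tweets-hate-speech",
)
print(classifier("the fuck done with #mansplaining and other bullshit."))
```
|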
ydshieh/tiny-random-gptj-for-sequence-classification | 265b0629231ade56e81aa62e4a8c4623970a7cf9 | 2022-04-08T10:21:26.000Z | [
"pytorch",
"tf",
"gptj",
"text-classification",
"transformers"
] | text-classification | false | ydshieh | null | ydshieh/tiny-random-gptj-for-sequence-classification | 149 | null | transformers | 4,017 | Entry not found |
lightonai/RITA_l | b38b5daa35b416f710216305c7258e3138503b62 | 2022-05-19T08:23:12.000Z | [
"pytorch",
"rita",
"text-generation",
"protein",
"dataset:uniref-100",
"arxiv:2205.05789",
"transformers"
] | text-generation | false | lightonai | null | lightonai/RITA_l | 149 | null | transformers | 4,018 | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-L
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | --- |
[Small](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[**Large**](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_l", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_l")
```
for generation we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
@article{hesslow2022rita,
title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
journal={arXiv preprint arXiv:2205.05789},
year={2022}
}
|
rowidabelaal/wordpred_arabert | 236fb04271eeaeb770c9e18938622f34d3d8f8ad | 2022-05-12T19:41:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rowidabelaal | null | rowidabelaal/wordpred_arabert | 149 | null | transformers | 4,019 | Entry not found |
pszemraj/gpt2-medium-email-generation | ae52296cb2964425706ec826cf12595b5a7295fe | 2022-07-27T10:46:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:aeslc",
"transformers",
"generated_from_trainer",
"email generation",
"email",
"license:mit"
] | text-generation | false | pszemraj | null | pszemraj/gpt2-medium-email-generation | 149 | 1 | transformers | 4,020 | ---
license:
- mit
tags:
- generated_from_trainer
- email generation
- email
datasets:
- aeslc
widget:
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
example_title: "fan"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning <NAME>,\n\nI was just thinking to myself about how much I love creating value"
example_title: "value"
- text: "URGENT - I need the TPS reports"
example_title: "URGENT"
- text: "Hi <NAME>,\n\nI hope this email finds you extremely well."
example_title: "emails that find you"
parameters:
min_length: 4
max_length: 96
length_penalty: 0.7
no_repeat_ngram_size: 2
do_sample: False
num_beams: 4
early_stopping: True
repetition_penalty: 2.5
thumbnail: https://i.imgur.com/0wk0p4D.png
---
# gpt2-medium-email-generation
Why write the rest of your email when you can generate it?
```python
from transformers import pipeline
model_tag = "pszemraj/gpt2-medium-email-generation"
generator = pipeline(
'text-generation',
model=model_tag,
use_fast=False,
do_sample=False,
early_stopping=True,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
generator(
prompt,
max_length=64,
) # generate
```
A script to use this on CPU/command line can be found [here](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) :)
> For this model, formatting matters. The results may be (significantly) different between the structure outlined above and `prompt = "Hey, just wanted to ..."` etc.
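As a quick, hedged illustration of that point (this reuses the `generator` pipeline from the snippet above; both prompts are made-up examples and outputs will vary):
```python
# Structured prompt (greeting + blank line) versus a bare, unformatted prompt
structured = "Hello,\n\nFollowing up on the bubblegum shipment."
bare = "following up on the bubblegum shipment"
for prompt in (structured, bare):
    print(generator(prompt, max_length=64)[0]["generated_text"])
    print("---")
```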
## Model description
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the `aeslc` dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.4189
- eval_runtime: 301.28
- eval_samples_per_second: 6.333
- eval_steps_per_second: 0.793
- epoch: 4.0
- step: 516
## Training and evaluation data
- the [aeslc](https://huggingface.co/datasets/aeslc) dataset.
- Emails, phone numbers, etc., were attempted to be excluded in a dataset preparation step using [clean-text](https://pypi.org/project/clean-text/) in Python.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
albert-xlarge-v1 | a74dd20bb0c3d36e36938da0b86adc5d2ec60315 | 2021-01-13T15:30:39.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | null | null | albert-xlarge-v1 | 148 | null | transformers | 4,021 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT XLarge v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
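As a small illustrative check of this weight sharing (a sketch assuming the `transformers` library; the attribute names are those of `AlbertConfig`):
```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-xlarge-v1")
# All 24 hidden layers reuse the weights of a single layer group,
# which is why the parameter count stays small for the depth
print(config.num_hidden_layers)   # 24
print(config.num_hidden_groups)   # 1
```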
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = AlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = TFAlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
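A minimal sketch of this 80%/10%/10% corruption rule (illustrative only, written with PyTorch; this is not the original ALBERT preprocessing code):
```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    # Select 15% of positions for masking; loss is only computed there
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100

    # 80% of the selected positions become [MASK]
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # 10% become a random token; the remaining 10% are left unchanged
    randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]

    return input_ids, labels
```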
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Geotrend/distilbert-base-en-cased | 7c8066c5447b0aeee182db36d1e1cecf38b38860 | 2021-08-16T13:16:37.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-cased | 148 | null | transformers | 4,022 | ---
language: en
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-pl-es | 46860c7251776618d4decd484f405ff0593fe858 | 2021-09-10T14:01:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pl-es | 148 | null | transformers | 4,023 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-es
* source languages: pl
* target languages: es
* OPUS readme: [pl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.es | 46.9 | 0.654 |
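A minimal usage sketch with the `transformers` MarianMT classes (the model name comes from this card; the Polish sentence is just a sample input):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dzień dobry, jak się masz?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```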
|
Muennighoff/SGPT-125M-weightedmean-nli-bitfit | f0dd127410940d2c0b30cc1f02a81a0ad28d3cc7 | 2022-06-18T12:55:07.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-125M-weightedmean-nli-bitfit | 148 | null | sentence-transformers | 4,024 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
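For a quick start, a minimal sketch with the `sentence-transformers` library (this assumes a recent version that supports the weighted-mean pooling mode listed in the architecture below; see the codebase above for the reference implementation):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-nli-bitfit")
embeddings = model.encode(["This is an example sentence.", "And a second one."])
print(embeddings.shape)  # (2, 768)
```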
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa | 81c21061d2ab35f9d60597be8273679e1954f437 | 2021-09-11T09:28:19.000Z | [
"pytorch",
"camembert",
"question-answering",
"th",
"transformers",
"autotrain_compatible"
] | question-answering | false | airesearch | null | airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa | 148 | null | transformers | 4,025 | ---
language: th
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# wangchanberta-base-wiki-20210520-spm-finetune-qa
Finetuning `airesearchth/wangchanberta-base-wiki-20210520-spmd` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
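For inference, a minimal sketch with the `transformers` question-answering pipeline (the model id comes from this card; the question and context are taken from the widget example above):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa",
    tokenizer="airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa",
)
print(qa(
    question="สวนกุหลาบเป็นโรงเรียนอะไร",
    context="โรงเรียนสวนกุหลาบวิทยาลัย เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ",
))
```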
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
  --model_name $MODEL_NAME \
  --dataset_name chimera_qa \
  --output_dir $MODEL_NAME-finetune-chimera_qa-model \
  --log_dir $MODEL_NAME-finetune-chimera_qa-log \
  --model_max_length 400 \
  --pad_on_right \
  --fp16
``` |
clue/roberta_chinese_base | 9328ef0a5bea8a3a0085a190af7c7148417dc66e | 2021-05-20T15:23:58.000Z | [
"pytorch",
"jax",
"roberta",
"zh",
"transformers"
] | null | false | clue | null | clue/roberta_chinese_base | 148 | 2 | transformers | 4,026 | ---
language: zh
---
## roberta_chinese_base
### Overview
**Language model:** roberta-base
**Model size:** 392M
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
**NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!!
```
import torch
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_base")
roberta = BertModel.from_pretrained("clue/roberta_chinese_base")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
flyhero/gpt-j-6B | c7aaf3b7617a255aee737e38d44a18da649a07e7 | 2021-08-19T05:47:39.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | flyhero | null | flyhero/gpt-j-6B | 148 | 7 | transformers | 4,027 | ### Model Description
GPT-J 6B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-J refers to the class of models, while 6B represents the number of parameters of this particular pre-trained model.
The original GPT-J-6B model was trained with TPUs, which are not easy for most users to work with. Thus, through a converting script, we converted the TPU version of GPT-J-6B into a GPU version, which can be loaded and fine-tuned on GPUs.
In our tests, the model can be loaded on a single GPU with 16 GB of memory for inference. For fine-tuning, we used 8 * 32 GB GPUs with the DeepSpeed library to distribute the model, data and gradients, in order to accommodate the huge number of model parameters.
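A minimal inference sketch under those constraints (half precision to stay within roughly 16 GB of GPU memory; this assumes the repo ships a tokenizer, and the prompt is just a sample):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flyhero/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("flyhero/gpt-j-6B", torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|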
hyesunyun/update-summarization-bart-large-longformer | b7ebb57f52314c93149cc1efa26621dadca43b63 | 2022-04-21T15:46:55.000Z | [
"pytorch",
"led",
"text2text-generation",
"en",
"transformers",
"update summarization",
"longformer",
"BART",
"autotrain_compatible"
] | text2text-generation | false | hyesunyun | null | hyesunyun/update-summarization-bart-large-longformer | 148 | null | transformers | 4,028 | ---
language:
- en
tags:
- update summarization
- longformer
- transformers
- BART
metrics:
- edit distance
- ROUGE
- BertScore
---
# Update Summarization with BART Large and Longformer Encoder Decoder
## Model description
This model is a Transformer-based model that supports long document generative sequence-to-sequence.
Based on [BART Large](https://huggingface.co/transformers/model_doc/bart.html) with [Longformer Encode Decoder](https://huggingface.co/transformers/model_doc/led.html) to allow for longer inputs.
## Intended uses & limitations
#### How to use
Format your data so that each new article or evidence to add have `<EV>` token in front with each title prefixed by `<t>` and each abstract prefixed by `<abs>`. Please have the original summary also in the same format. You can have the list of articles and original summary concatenated in any order as long as they have the correct separator tokens.
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
import torch

tokenizer = LEDTokenizer.from_pretrained("hyesunyun/update-summarization-bart-large-longformer")
model = LEDForConditionalGeneration.from_pretrained("hyesunyun/update-summarization-bart-large-longformer")
input = "<EV> <t> Hypoglycemic effect of bitter melon compared with metformin in newly diagnosed type 2 diabetes patients. <abs> ETHNOPHARMACOLOGICAL RELEVANCE: Bitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use. AIM OF STUDY: This study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin. MATERIALS AND METHODS: This is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks. RESULTS: There was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 mumol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 mumol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 mumol/L, respectively). CONCLUSIONS: Bitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day. <EV> <t> Momordica charantia for type 2 diabetes mellitus. <abs> There is insufficient evidence to recommend momordica charantia for type 2 diabetes mellitus. Further studies are therefore required to address the issues of standardization and the quality control of preparations. For medical nutritional therapy, further observational trials evaluating the effects of momordica charantia are needed before RCTs are established to guide any recommendations in clinical practice."
inputs_dict = tokenizer(input, padding="max_length", max_length=10240, return_tensors="pt", truncation=True)
input_ids = inputs_dict.input_ids
attention_mask = inputs_dict.attention_mask
global_attention_mask = torch.zeros_like(attention_mask)
# put global attention on <s> token
global_attention_mask[:, 0] = 1
predicted_summary_ids = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
print(tokenizer.batch_decode(predicted_summary_ids, skip_special_tokens=True))
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Used pre-trained [LED model](https://huggingface.co/transformers/model_doc/led.html) and fine-tuned using the dataset found in [this github repo](https://github.com/hyesunyun/update_summarization_data).
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2021}
}
``` |
lgris/wav2vec2-large-xlsr-open-brazilian-portuguese-v2 | 1019e06ef5b5d61bf5d56bed7b8e7557084f20ec | 2022-04-01T20:35:26.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"arxiv:2012.03411",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-large-xlsr-open-brazilian-portuguese-v2 | 148 | 4 | transformers | 4,029 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: wav2vec2-large-xlsr-open-brazilian-portuguese-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 10.69
license: apache-2.0
---
# Wav2vec 2.0 With Open Brazilian Portuguese Datasets v2
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
- [Common Voice 6.1](https://commonvoice.mozilla.org/pt): is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages to train ASR models. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt). The set in Portuguese (mostly Brazilian variant) used in this work is the 6.1 version (pt_63h_2020-12-11) that contains about 50 validated hours and 1,120 unique speakers.
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except Common Voice dev/test sets, that were used for validation/test respectively.
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one.
__NOTE: The common voice test reports 10% of WER, however, this model was trained using all the validated instances of Common Voice, except the instances of the test set. This means that some speakers of the train set can be present on the test set.__
## Imports and dependencies
```python
%%capture
!pip install datasets
!pip install jiwer
!pip install torchaudio
!pip install transformers
!pip install soundfile
```
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
```
## Preparation
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
wer = load_metric("wer")
device = "cuda"
```
```python
model_name = 'lgris/wav2vec2-large-xlsr-open-brazilian-portuguese-v2'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
```
```python
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["predicted"] = [pred.lower() for pred in batch["predicted"]]
batch["target"] = batch["sentence"]
return batch
```
## Tests
### Test against Common Voice (In-domain)
```python
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
for pred, target in zip(result["predicted"][:10], result["target"][:10]):
print(pred, "|", target)
```
**Result**: 10.69%
### Test against [TEDx](http://www.openslr.org/100/) (Out-of-domain)
```python
!gdown --id 1HJEnvthaGYwcV_whHEywgH2daIN4bQna
!tar -xf tedx.tar.gz
```
```python
dataset = load_dataset('csv', data_files={'test': 'test.csv'})['test']
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
for pred, target in zip(result["predicted"][:10], result["target"][:10]):
print(pred, "|", target)
```
**Result**: 34.53% |
Matej/bert-base-buddhist-sanskrit | 0cc5dbdae27fec90217e4c5f82b1af86450b4b52 | 2022-04-15T08:54:45.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Matej | null | Matej/bert-base-buddhist-sanskrit | 148 | null | transformers | 4,030 | ---
tags:
- Buddhist Sanskrit
- BERT
- name: bert-base-buddhist-sanskrit
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-buddhist-sanskrit
The best performing model of the research described in the paper 'Embeddings models for Buddhist Sanskrit' published at LREC 2022 (Link to the paper will be added after
the publication of conference proceedings).
## Model description
The model has the bert-base architecture and configuration and was pretrained from scratch as a masked language model
on the Sanskrit reference corpus, and fine-tuned on the smaller corpus of Buddhist Sanskrit.
## How to use it
```
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("Matej/bert-base-buddhist-sanskrit")
tokenizer = AutoTokenizer.from_pretrained("Matej/bert-base-buddhist-sanskrit", use_fast=True)
```
## Intended uses & limitations
MIT license, no limitations
## Training and evaluation data
See the paper 'Embeddings models for Buddhist Sanskrit' for details on the corpora and the evaluation procedure.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 28
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300.0
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
has-abi/distilBERT-finetuned-resumes-sections | 1e3cc903614b8398834ec8d9da059b92015ff3fc | 2022-07-22T09:47:33.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | has-abi | null | has-abi/distilBERT-finetuned-resumes-sections | 148 | null | transformers | 4,031 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilBERT-finetuned-resumes-sections
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-finetuned-resumes-sections
This model is a fine-tuned version of [Geotrend/distilbert-base-en-fr-cased](https://huggingface.co/Geotrend/distilbert-base-en-fr-cased) on a private resume sections dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0487
- F1: 0.9512
- Roc Auc: 0.9729
- Accuracy: 0.9482
## Model description
This model classifies a resume section into 12 classes.
### Possible classes for a resume section
**awards**, **certificates**, **contact/name/title**, **education**, **interests**, **languages**, **para**, **professional_experiences**, **projects**, **skills**, **soft_skills**, **summary**.
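A minimal usage sketch with the `transformers` text-classification pipeline (the 0.5 threshold and the multi-label reading of the outputs are assumptions based on the F1/ROC-AUC metrics below; the resume snippet is a made-up example):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="has-abi/distilBERT-finetuned-resumes-sections",
    return_all_scores=True,
)
scores = classifier("2015-2019: Software engineer at ACME, built data pipelines in Python.")[0]
print([s for s in scores if s["score"] > 0.5])
```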
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.058 | 1.0 | 1083 | 0.0457 | 0.9186 | 0.9494 | 0.9020 |
| 0.0277 | 2.0 | 2166 | 0.0393 | 0.9327 | 0.9614 | 0.9251 |
| 0.0154 | 3.0 | 3249 | 0.0333 | 0.9425 | 0.9671 | 0.9367 |
| 0.0104 | 4.0 | 4332 | 0.0408 | 0.9357 | 0.9645 | 0.9293 |
| 0.0084 | 5.0 | 5415 | 0.0405 | 0.9376 | 0.9643 | 0.9298 |
| 0.0065 | 6.0 | 6498 | 0.0419 | 0.9439 | 0.9699 | 0.9385 |
| 0.0051 | 7.0 | 7581 | 0.0450 | 0.9412 | 0.9674 | 0.9376 |
| 0.0034 | 8.0 | 8664 | 0.0406 | 0.9433 | 0.9684 | 0.9372 |
| 0.0035 | 9.0 | 9747 | 0.0441 | 0.9403 | 0.9664 | 0.9358 |
| 0.0024 | 10.0 | 10830 | 0.0492 | 0.9419 | 0.9678 | 0.9367 |
| 0.0026 | 11.0 | 11913 | 0.0470 | 0.9468 | 0.9708 | 0.9436 |
| 0.0022 | 12.0 | 12996 | 0.0514 | 0.9424 | 0.9679 | 0.9395 |
| 0.0013 | 13.0 | 14079 | 0.0458 | 0.9478 | 0.9715 | 0.9441 |
| 0.0019 | 14.0 | 15162 | 0.0494 | 0.9477 | 0.9711 | 0.9450 |
| 0.0007 | 15.0 | 16245 | 0.0492 | 0.9496 | 0.9719 | 0.9464 |
| 0.0009 | 16.0 | 17328 | 0.0487 | 0.9512 | 0.9729 | 0.9482 |
| 0.001 | 17.0 | 18411 | 0.0510 | 0.9480 | 0.9711 | 0.9441 |
| 0.0006 | 18.0 | 19494 | 0.0532 | 0.9477 | 0.9709 | 0.9441 |
| 0.0007 | 19.0 | 20577 | 0.0511 | 0.9487 | 0.9720 | 0.9445 |
| 0.0005 | 20.0 | 21660 | 0.0522 | 0.9471 | 0.9710 | 0.9436 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
VictorSanh/roberta-base-finetuned-yelp-polarity | b16b941c51bcec0298e9ee5eab2c1e2602e1a141 | 2021-05-20T12:30:20.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:yelp_polarity",
"transformers"
] | text-classification | false | VictorSanh | null | VictorSanh/roberta-base-finetuned-yelp-polarity | 147 | 1 | transformers | 4,032 | ---
language: en
datasets:
- yelp_polarity
---
# RoBERTa-base-finetuned-yelp-polarity
This is a [RoBERTa-base](https://huggingface.co/roberta-base) checkpoint fine-tuned on binary sentiment classification from [Yelp polarity](https://huggingface.co/nlp/viewer/?dataset=yelp_polarity).
It gets **98.08%** accuracy on the test set.
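A minimal usage sketch with the `transformers` sentiment-analysis pipeline (the review text is a made-up example, and the returned label names depend on the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="VictorSanh/roberta-base-finetuned-yelp-polarity",
)
print(classifier("The food was delicious and the staff were lovely."))
```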
## Hyper-parameters
We used the following hyper-parameters to train the model on one GPU:
```python
num_train_epochs = 2.0
learning_rate = 1e-05
weight_decay = 0.0
adam_epsilon = 1e-08
max_grad_norm = 1.0
per_device_train_batch_size = 32
gradient_accumulation_steps = 1
warmup_steps = 3500
seed = 42
```
|
akhooli/personachat-arabic | b5f47486bdda2d38da9f27c9e10df2c716268ce8 | 2021-05-21T12:39:24.000Z | [
"pytorch",
"gpt2",
"ar",
"transformers",
"conversational",
"license:mit"
] | conversational | false | akhooli | null | akhooli/personachat-arabic | 147 | null | transformers | 4,033 | ---
tags:
- conversational
language:
- ar
license: mit
---
## personachat-arabic (conversational AI)
This is personachat-arabic, using a subset from the persona-chat validation dataset, machine translated to Arabic (from English)
and fine-tuned from [akhooli/gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic) which is a limited text generation model.
Usage: see the last section of this [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: model has limited training set which was machine translated (do not use for production).
|
google/bert_uncased_L-10_H-512_A-8 | 386dd3983f52f94116d5add2abc7bb5f1ed167ba | 2021-05-19T17:24:16.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-10_H-512_A-8 | 147 | null | transformers | 4,034 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
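A minimal loading sketch for this particular checkpoint (assuming the `transformers` library; the sentence is just a sample input):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-10_H-512_A-8")
model = AutoModel.from_pretrained("google/bert_uncased_L-10_H-512_A-8")

inputs = tokenizer("Compact BERT models are handy for distillation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 512)
```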
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
kleinay/nominalization-candidate-classifier | 45e4bddfa88473e390d89cab5de5868e7a8c2050 | 2022-01-11T04:12:39.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:kleinay/qanom",
"transformers",
"nominalizations",
"autotrain_compatible"
] | token-classification | false | kleinay | null | kleinay/nominalization-candidate-classifier | 147 | null | transformers | 4,035 | ---
language:
- en
tags:
- pytorch
- token-classification
- nominalizations
datasets:
- kleinay/qanom
---
# Nominalization Detector
This model identifies "predicative nominalizations", that is, nominalizations that carry an eventive (or "verbal") meaning in context. It is a `bert-base-cased` pretrained model, fine-tuned for token classification on top of the "nominalization detection" task as defined and annotated by the QANom project [(Klein et. al., COLING 2020)](https://www.aclweb.org/anthology/2020.coling-main.274/).
## Task Description
The model is trained as a binary classifier, classifying candidate nominalizations.
The candidates are extracted using a POS tagger (filtering common nouns) and, additionally, lexical resources (e.g. WordNet and CatVar), filtering nouns that have (at least one) derivationally-related verb. In the QANom annotation project, these candidates are given to annotators to decide whether they carry a "verbal" meaning in the context of the sentence. The current model reproduces this binary classification.
## Demo
Check out our cool [demo](https://huggingface.co/spaces/kleinay/nominalization-detection-demo)!
## Usage
The candidate extraction algorithm is implemented inside the `qanom` package - see the README in the [QANom github repo](https://github.com/kleinay/QANom) for full documentation. The `qanom` package is also available via `pip install qanom`.
For ease of use, we encapsulated the full nominalization detection pipeline (i.e. candidate extraction + predicate classification) in the `qanom.nominalization_detector.NominalizationDetector` class, which internally utilize this `nominalization-candidate-classifier`:
```python
from qanom.nominalization_detector import NominalizationDetector
detector = NominalizationDetector()
raw_sentences = ["The construction of the officer 's building finished right after the beginning of the destruction of the previous construction ."]
print(detector(raw_sentences, return_all_candidates=True))
print(detector(raw_sentences, threshold=0.75, return_probability=False))
```
Outputs:
```json
[[{'predicate_idx': 1,
'predicate': 'construction',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.7626778483390808,
'verb_form': 'construct'},
{'predicate_idx': 4,
'predicate': 'officer',
'predicate_detector_prediction': False,
'predicate_detector_probability': 0.19832570850849152,
'verb_form': 'officer'},
{'predicate_idx': 6,
'predicate': 'building',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.5794129371643066,
'verb_form': 'build'},
{'predicate_idx': 11,
'predicate': 'beginning',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.8937646150588989,
'verb_form': 'begin'},
{'predicate_idx': 14,
'predicate': 'destruction',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.8501205444335938,
'verb_form': 'destruct'},
{'predicate_idx': 18,
'predicate': 'construction',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.7022264003753662,
'verb_form': 'construct'}]]
```
```json
[[{'predicate_idx': 1, 'predicate': 'construction', 'verb_form': 'construct'},
{'predicate_idx': 11, 'predicate': 'beginning', 'verb_form': 'begin'},
{'predicate_idx': 14, 'predicate': 'destruction', 'verb_form': 'destruct'}]]
```
## Cite
```latex
@inproceedings{klein2020qanom,
title={QANom: Question-Answer driven SRL for Nominalizations},
author={Klein, Ayal and Mamou, Jonathan and Pyatkin, Valentina and Stepanov, Daniela and He, Hangfeng and Roth, Dan and Zettlemoyer, Luke and Dagan, Ido},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={3069--3083},
year={2020}
}
```
|
mpoyraz/wav2vec2-xls-r-300m-cv6-turkish | 787df48a382597a0112afe7830a6c0258f4d8044 | 2022-03-23T18:26:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mpoyraz | null | mpoyraz/wav2vec2-xls-r-300m-cv6-turkish | 147 | 1 | transformers | 4,036 | ---
license: apache-2.0
language: tr
tags:
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- common_voice
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv6-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 8.83
- name: Test CER
type: cer
value: 2.37
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 32.81
- name: Test CER
type: cer
value: 11.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 34.86
---
# wav2vec2-xls-r-300m-cv6-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.
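A minimal transcription sketch with the `transformers` ASR pipeline (no external n-gram LM decoding here; `audio.wav` is a placeholder path, and the chunking values mirror the evaluation command further below):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mpoyraz/wav2vec2-xls-r-300m-cv6-turkish")
print(asr("audio.wav", chunk_length_s=5.0, stride_length_s=1.0))
```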
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 6.1 TR](https://huggingface.co/datasets/common_voice) All `validated` split except `test` split was used for training.
- [MediaSpeech](https://www.openslr.org/108/)
## Training procedure
To support both of the datasets above, custom pre-processing and loading steps was performed and [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo was used for that purpose.
### Training hyperparameters
The following hyperparameters were used for finetuning:
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.1
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.1
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.10.3
## Language Model
An n-gram language model was trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the arpa LM and convert it into binary format.
## Evaluation Commands
Please install [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation. It is used for Turkish text processing.
1. To evaluate on `common_voice` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset common_voice --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results:
| Dataset | WER | CER |
|---|---|---|
|Common Voice 6.1 TR test split| 8.83 | 2.37 |
|Speech Recognition Community dev data| 32.81 | 11.22 |
|
mrm8488/bert-base-german-finetuned-ler | 025ad559e4cefe898079e3cbaf89f6a54b00636d | 2021-05-20T00:20:06.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"de",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/bert-base-german-finetuned-ler | 147 | null | transformers | 4,037 | ---
language: de
---
# German BERT + LER (Legal Entity Recognition) ⚖️
German BERT ([BERT-base-german-cased](https://huggingface.co/bert-base-german-cased)) fine-tuned on [Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition) dataset for **LER** (NER) downstream task.
## Details of the downstream task (NER) - Dataset
[Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition): Fine-grained Named Entity Recognition in Legal Documents.
Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
| Split | # Samples |
| ---------------------- | ----- |
| Train | 1657048 |
| Eval | 500000 |
- Training script: [Fine-tuning script for NER provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py)
Colab: [How to fine-tune a model for NER using HF scripts](https://colab.research.google.com/drive/156Qrd7NsUHwA3nmQ6gXdZY0NzOvqk9AT?usp=sharing)
- Labels covered (and its distribution):
```
107 B-AN
918 B-EUN
2238 B-GRT
13282 B-GS
1113 B-INN
704 B-LD
151 B-LDS
2490 B-LIT
282 B-MRK
890 B-ORG
1374 B-PER
1480 B-RR
10046 B-RS
401 B-ST
68 B-STR
1011 B-UN
282 B-VO
391 B-VS
2648 B-VT
46 I-AN
6925 I-EUN
1957 I-GRT
70257 I-GS
2931 I-INN
153 I-LD
26 I-LDS
28881 I-LIT
383 I-MRK
1185 I-ORG
330 I-PER
106 I-RR
138938 I-RS
34 I-ST
55 I-STR
1259 I-UN
1572 I-VO
2488 I-VS
11121 I-VT
1348525 O
```
- [Annotation Guidelines (German)](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf)
## Metrics on evaluation set
| Metric | # score |
| :------------------------------------------------------------------------------------: | :-------: |
| F1 | **85.67**
| Precision | **84.35** |
| Recall | **87.04** |
| Accuracy | **98.46** |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
nlp_ler = pipeline(
"ner",
model="mrm8488/bert-base-german-finetuned-ler",
tokenizer="mrm8488/bert-base-german-finetuned-ler"
)
text = "Your German legal text here"
nlp_ler(text)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
patrickvonplaten/wav2vec2-2-bart-base | 109c6165bef1351c39403fa008c5e72c89530167 | 2021-12-29T15:53:10.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"asr_seq2esq",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-2-bart-base | 147 | 2 | transformers | 4,038 | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- asr_seq2esq
model-index:
- name: wav2vec2-2-bart-base
results: []
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
- example_title: Common Voice sample
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
---
To rerun this experiment, please clone this directory and run:
```bash
python create_model.py
```
followed by
```bash
./run_librispeech.sh
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2-bart-base
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) and [bart-base](https://huggingface.co/facebook/bart-base) on the librispeech_asr - clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.405
- Wer: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
tals/albert-base-vitaminc-fever | 15de029c9f4e929c8f6cc56bc39fcac22f6bdbc3 | 2022-06-22T23:57:17.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"transformers"
] | text-classification | false | tals | null | tals/albert-base-vitaminc-fever | 147 | null | transformers | 4,039 | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 21`).
For more details see: https://github.com/TalSchuster/VitaminC
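A heavily hedged usage sketch (the claim/evidence pairing shown here and the meaning of the output classes are assumptions; check the VitaminC repository above for the exact input convention and label mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tals/albert-base-vitaminc-fever")
model = AutoModelForSequenceClassification.from_pretrained("tals/albert-base-vitaminc-fever")

inputs = tokenizer("A sample claim.", "A sample evidence passage.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```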
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
vblagoje/dpr-ctx_encoder-single-lfqa-base | 57dcaa1817f316e6c284bf9db697914225c607b7 | 2022-01-17T15:34:10.000Z | [
"pytorch",
"dpr",
"en",
"dataset:vblagoje/lfqa",
"transformers",
"license:mit"
] | null | false | vblagoje | null | vblagoje/dpr-ctx_encoder-single-lfqa-base | 147 | null | transformers | 4,040 | ---
language: en
datasets:
- vblagoje/lfqa
license: mit
---
## Introduction
The context/passage encoder model is based on the [DPRContextEncoder](https://huggingface.co/docs/transformers/master/en/model_doc/dpr#transformers.DPRContextEncoder) architecture. It uses the transformer's pooler output as the context/passage representation.
## Training
We trained vblagoje/dpr-ctx_encoder-single-lfqa-base using FAIR's dpr-scale, starting from a PAQ-based pretrained checkpoint and fine-tuning the retriever on question-answer pairs from the LFQA dataset. As dpr-scale requires DPR-formatted training input with positive, negative, and hard-negative samples, we created a training file in which a question's own answer is the positive sample, answers to unrelated questions are the negatives, and hard negatives are answers to questions whose cosine similarity to the target question lies between 0.55 and 0.65.
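Illustratively, a single training record in that file might look like the sketch below; the field names follow the common DPR JSON convention and are an assumption here rather than being taken from the dpr-scale documentation:
```python
# A sketch of one DPR-style training record (field names are an assumption)
record = {
    "question": "Why do airplanes leave contrails in the sky?",
    "positive_ctxs": [{"title": "", "text": "The answer written for this question."}],
    "negative_ctxs": [{"title": "", "text": "An answer taken from an unrelated question."}],
    "hard_negative_ctxs": [{"title": "", "text": "An answer to a question with 0.55-0.65 cosine similarity to this one."}],
}
```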
## Performance
The LFQA DPR-based retriever (vblagoje/dpr-question_encoder-single-lfqa-base and vblagoje/dpr-ctx_encoder-single-lfqa-base) scored 6.69 R-precision and 14.5 Recall@5 on the KILT benchmark.
## Usage
```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DPRContextEncoder.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-base").to(device)
tokenizer = DPRContextEncoderTokenizer.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-base")
# Encode a context/passage into a dense vector via the pooler output
input_ids = tokenizer("Airplanes leave contrails because hot, humid engine exhaust condenses in the cold air at cruising altitude.", return_tensors="pt")["input_ids"].to(device)
embeddings = model(input_ids).pooler_output
```
## Author
- Vladimir Blagojevic: `dovlex [at] gmail.com` [Twitter](https://twitter.com/vladblagoje) | [LinkedIn](https://www.linkedin.com/in/blagojevicvladimir/) |
silencesys/paraphrase-xlm-r-multilingual-v1-fine-tuned-for-latin | 1fca272cc64dc4dda8b456dc8bbc3b82d489b397 | 2022-04-12T17:08:30.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | silencesys | null | silencesys/paraphrase-xlm-r-multilingual-v1-fine-tuned-for-latin | 147 | null | sentence-transformers | 4,041 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9455 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 9,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
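Putting the parameters above together, a training setup along these lines should reproduce the configuration; the base checkpoint name is inferred from this repository's name and the two-sentence corpus is a placeholder:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, datasets, losses

# Base checkpoint inferred from the repository name (assumption)
model = SentenceTransformer("sentence-transformers/paraphrase-xlm-r-multilingual-v1")

# Placeholder corpus; replace with your own Latin sentences
sentences = ["Gallia est omnis divisa in partes tres.", "Arma virumque cano."]
train_dataset = datasets.DenoisingAutoEncoderDataset(sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=9,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-05},
    warmup_steps=10000,
    weight_decay=0,
)
```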
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nbroad/bigbird-base-health-fact | 21b2deb28b7fd1bfa1b7df9e40a3dd86040dbcf4 | 2022-06-29T18:29:17.000Z | [
"pytorch",
"big_bird",
"text-classification",
"en",
"dataset:health_fact",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | nbroad | null | nbroad/bigbird-base-health-fact | 147 | null | transformers | 4,042 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- health_fact
model-index:
- name: bigbird-base-health-fact
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: health_fact
type: health_fact
split: test
metrics:
- name: F1
type: f1
value: 0.6694031411935434
- name: Accuracy
type: accuracy
value: 0.7948094079480941
- name: False Accuracy
type: accuracy
value: 0.8092783505154639
- name: Mixture Accuracy
type: accuracy
value: 0.4975124378109453
- name: True Accuracy
type: accuracy
value: 0.9148580968280468
- name: Unproven Accuracy
type: accuracy
value: 0.4
---
# bigbird-base-health-fact
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the health_fact dataset.
It achieves the following results on the VALIDATION set:
- Overall Accuracy: 0.8228995057660626
- Macro F1: 0.6979224830442152
- False Accuracy: 0.8289473684210527
- Mixture Accuracy: 0.47560975609756095
- True Accuracy: 0.9332273449920508
- Unproven Accuracy: 0.4634146341463415
It achieves the following results on the TEST set:
- Overall Accuracy: 0.7948094079480941
- Macro F1: 0.6694031411935434
- Mixture Accuracy: 0.4975124378109453
- False Accuracy: 0.8092783505154639
- True Accuracy: 0.9148580968280468
- Unproven Accuracy: 0.4
## Model description
Here is how you can use the model:
```python
import torch
from transformers import pipeline
claim = "A mother revealed to her child in a letter after her death that she had just one eye because she had donated the other to him."
text = "In April 2005, we spotted a tearjerker on the Internet about a mother who gave up one of her eyes to a son who had lost one of his at an early age. By February 2007 the item was circulating in e-mail in the following shortened version: My mom only had one eye. I hated her… She was such an embarrassment. She cooked for students and teachers to support the family. There was this one day during elementary school where my mom came to say hello to me. I was so embarrassed. How could she do this to me? I ignored her, threw her a hateful look and ran out. The next day at school one of my classmates said, “EEEE, your mom only has one eye!” I wanted to bury myself. I also wanted my mom to just disappear. I confronted her that day and said, “If you’re only gonna make me a laughing stock, why don’t you just die?” My mom did not respond… I didn’t even stop to think for a second about what I had said, because I was full of anger. I was oblivious to her feelings. I wanted out of that house, and have nothing to do with her. So I studied real hard, got a chance to go abroad to study. Then, I got married. I bought a house of my own. I had kids of my own. I was happy with my life, my kids and the comforts. Then one day, my Mother came to visit me. She hadn’t seen me in years and she didn’t even meet her grandchildren. When she stood by the door, my children laughed at her, and I yelled at her for coming over uninvited. I screamed at her, “How dare you come to my house and scare my children! GET OUT OF HERE! NOW!! !” And to this, my mother quietly answered, “Oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. One day, a letter regarding a school reunion came to my house. So I lied to my wife that I was going on a business trip. After the reunion, I went to the old shack just out of curiosity. My neighbors said that she died. I did not shed a single tear. They handed me a letter that she had wanted me to have. My dearest son, I think of you all the time. I’m sorry that I came to your house and scared your children. I was so glad when I heard you were coming for the reunion. But I may not be able to even get out of bed to see you. I’m sorry that I was a constant embarrassment to you when you were growing up. You see……..when you were very little, you got into an accident, and lost your eye. As a mother, I couldn’t stand watching you having to grow up with one eye. So I gave you mine. I was so proud of my son who was seeing a whole new world for me, in my place, with that eye. With all my love to you, Your mother. In its earlier incarnation, the story identified by implication its location as Korea through statements made by both the mother and the son (the son’s “I left my mother and came to Seoul” and the mother’s “I won’t visit Seoul anymore”). It also supplied a reason for the son’s behavior when his mother arrived unexpectedly to visit him (“My little girl ran away, scared of my mom’s eye” and “I screamed at her, ‘How dare you come to my house and scare my daughter!'”). A further twist was provided in the original: rather than gaining the news of his mother’s death from neighbors (who hand him her letter), the son instead discovered the woman who bore him lying dead on the floor of what used to be his childhood home, her missive to him clutched in her lifeless hand: Give your parents roses while they are alive, not deadMY mom only had one eye. I hated her … she was such an embarrassment. My mom ran a small shop at a flea market. 
She collected little weeds and such to sell … anything for the money we needed she was such an embarrassment. There was this one day during elementary school … It was field day, and my mom came. I was so embarrassed. How could she do this to me? I threw her a hateful look and ran out. The next day at school … “your mom only has one eye?!? !” … And they taunted me. I wished that my mom would just disappear from this world so I said to my mom, “mom … Why don’t you have the other eye?! If you’re only going to make me a laughingstock, why don’t you just die?!! !” my mom did not respond … I guess I felt a little bad, but at the same time, it felt good to think that I had said what I’d wanted to say all this time… maybe it was because my mom hadn’t punished me, but I didn’t think that I had hurt her feelings very badly. That night… I woke up, and went to the kitchen to get a glass of water. My mom was crying there, so quietly, as if she was afraid that she might wake me. I took a look at her, and then turned away. Because of the thing I had said to her earlier, there was something pinching at me in the corner of my heart. Even so, I hated my mother who was crying out of her one eye. So I told myself that I would grow up and become successful. Because I hated my one-eyed mom and our desperate poverty… then I studied real hard. I left my mother and came to Seoul and studied, and got accepted in the Seoul University with all the confidence I had. Then, I got married. I bought a house of my own. Then I had kids, too… now I’m living happily as a successful man. I like it here because it’s a place that doesn’t remind me of my mom. This happiness was getting bigger and bigger, when… what?! Who’s this…it was my mother… still with her one eye. It felt as if the whole sky was falling apart on me. My little girl ran away, scared of my mom’s eye. And I asked her, “who are you? !” “I don’t know you!! !” as if trying to make that real. I screamed at her, “How dare you come to my house and scare my daughter!” “GET OUT OF HERE! NOW!! !” and to this, my mother quietly answered, “oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. Thank goodness… she doesn’t recognize me… I was quite relieved. I told myself that I wasn’t going to care, or think about this for the rest of my life. Then a wave of relief came upon me… One day, a letter regarding a school reunion came to my house. So, lying to my wife that I was going on a business trip, I went. After the reunion, I went down to the old shack, that I used to call a house… just out of curiosity there, I found my mother fallen on the cold ground. But I did not shed a single tear. She had a piece of paper in her hand…. it was a letter to me. My son… I think my life has been long enough now… And… I won’t visit Seoul anymore… but would it be too much to ask if I wanted you to come visit me once in a while? I miss you so much… and I was so glad when I heard you were coming for the reunion. But I decided not to go to the school. …for you… and I’m sorry that I only have one eye, and I was an embarrassment for you. You see, when you were very little, you got into an accident, and lost your eye. as a mom, I couldn’t stand watching you having to grow up with only one eye… so I gave you mine… I was so proud of my son that was seeing a whole new world for me, in my place, with that eye. I was never upset at you for anything you did… the couple times that you were angry with me, I thought to myself, ‘it’s because he loves me…’ my son. 
Oh, my son… I don’t want you to cry for me, because of my death. My son, I love you my son, I love you so much. With all modern medical technology, transplantation of the eyeball is still impossible. The optic nerve isn’t an ordinary nerve, but instead an inset running from the brain. Modern medicine isn’t able to “connect” an eyeball back to brain after an optic nerve has been severed, let alone transplant the eye from a different person. (The only exception is the cornea, the transparent part in front of the eye: corneas are transplanted to replace injured and opaque ones.) We won’t try to comment on whether any surgeon would accept an eye from a living donor for transplant into another — we’ll leave that to others who are far more knowledgeable about medical ethics and transplant procedures. But we will note that the plot device of a mother’s dramatic sacrifice for the sake of her child’s being revealed in a written communication delivered after her demise appears in another legend about maternal love: the 2008 tale about a woman who left a touching message on her cell phone even as life ebbed from her as she used her body to shield the tot during an earthquake. Giving up one’s own life for a loved one is central to a 2005 urban legend about a boy on a motorcycle who has his girlfriend hug him one last time and put on his helmet just before the crash that kills him and spares her. Returning to the “notes from the dead” theme is the 1995 story about a son who discovers only through a posthumous letter from his mother what their occasional dinner “dates” had meant to her. Another legend we’re familiar with features a meme used in the one-eyed mother story (the coming to light of the enduring love of the person who died for the completely unworthy person she’d lavished it on), but that one involves a terminally ill woman and her cheating husband. In it, an about-to-be-spurned wife begs the adulterous hoon she’d married to stick around for another 30 days and to carry her over the threshold of their home once every day of that month as her way of keeping him around long enough for her to kick the bucket and thus spare their son the knowledge that his parents were on the verge of divorce."
label = "false"
device = 0 if torch.cuda.is_available() else -1
pl = pipeline("text-classification", model="nbroad/bigbird-base-health-fact", device=device)
input_text = claim+pl.tokenizer.sep_token+text
print(len(pl.tokenizer(input_text).input_ids))
# 2303 (which is why bigbird is useful)
pl(input_text)
# [{'label': 'false', 'score': 0.3866822123527527}]
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | False F1 | Mixture F1 | True F1 | Unproven F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:----------:|:-------:|:-----------:|
| 0.5563 | 1.0 | 1226 | 0.5020 | 0.7949 | 0.6062 | 0.7926 | 0.4591 | 0.8986 | 0.2745 |
| 0.5048 | 2.0 | 2452 | 0.4969 | 0.8180 | 0.6846 | 0.8202 | 0.4342 | 0.9126 | 0.5714 |
| 0.3454 | 3.0 | 3678 | 0.5864 | 0.8130 | 0.6874 | 0.8114 | 0.4557 | 0.9154 | 0.5672 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
vitouphy/wav2vec2-xls-r-300m-timit-phoneme | bef918530717a49a4c5cf20bc4e364e885cb6a5b | 2022-06-29T12:37:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vitouphy | null | vitouphy/wav2vec2-xls-r-300m-timit-phoneme | 147 | null | transformers | 4,043 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- pytorch
- transformers
- en
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-phoneme
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: DARPA TIMIT
type: timit
args: en
metrics:
- name: Test CER
type: cer
value: 7.996
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Timit dataset. Check [this notebook](https://www.kaggle.com/code/vitouphy/phoneme-recognition-with-wav2vec2) for training detail.
## Usage
**Approach 1:** Using HuggingFace's pipeline, this will cover everything end-to-end from raw audio input to text output.
```python
from transformers import pipeline
# Load the model
pipe = pipeline(model="vitouphy/wav2vec2-xls-r-300m-phoneme")
# Process raw audio
output = pipe("audio_file.wav", chunk_length_s=10, stride_length_s=(4, 2))
```
**Approach 2:** More custom way to predict phonemes.
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
import soundfile as sf
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-phoneme")
model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-phoneme")
# Read and process the input
audio_input, sample_rate = sf.read("audio_file.wav")
inputs = processor(audio_input, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
# Decode id into string
predicted_ids = torch.argmax(logits, axis=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
print(predicted_sentences)
```
## Training and evaluation data
We use [DARPA TIMIT dataset](https://www.kaggle.com/datasets/mfekadu/darpa-timit-acousticphonetic-continuous-speech) for this model.
- We split into **80/10/10** for training, validation, and testing respectively.
- That roughly corresponds to about **137/17/17** minutes.
- The model obtained a CER of **7.996%** on this test set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
juridics/jurisbert-base-portuguese-uncased | 29c1d9c8a4444029c2583a23ff8d9395f3733112 | 2022-07-02T13:44:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | juridics | null | juridics/jurisbert-base-portuguese-uncased | 147 | null | transformers | 4,044 | ---
tags:
- generated_from_trainer
model-index:
- name: bertlawbr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertlawbr
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0495
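Since the card has no usage example, a quick fill-mask sketch might look like the following; the Portuguese example sentence is purely illustrative:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="juridics/jurisbert-base-portuguese-uncased")

# Illustrative Portuguese legal sentence; [MASK] is the BERT mask token
print(fill_mask("O juiz determinou o [MASK] do processo."))
```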
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 6.1291 | 0.22 | 2500 | 5.9888 |
| 4.8604 | 0.44 | 5000 | 4.4841 |
| 3.3321 | 0.66 | 7500 | 3.1190 |
| 2.7579 | 0.87 | 10000 | 2.6089 |
| 2.4135 | 1.09 | 12500 | 2.3029 |
| 2.2136 | 1.31 | 15000 | 2.1244 |
| 2.0735 | 1.53 | 17500 | 1.9931 |
| 1.9684 | 1.75 | 20000 | 1.8878 |
| 1.891 | 1.97 | 22500 | 1.8077 |
| 1.8215 | 2.18 | 25000 | 1.7487 |
| 1.7577 | 2.4 | 27500 | 1.6875 |
| 1.7113 | 2.62 | 30000 | 1.6444 |
| 1.6776 | 2.84 | 32500 | 1.6036 |
| 1.6203 | 3.06 | 35000 | 1.5608 |
| 1.6018 | 3.28 | 37500 | 1.5293 |
| 1.5602 | 3.5 | 40000 | 1.5044 |
| 1.5429 | 3.71 | 42500 | 1.4753 |
| 1.5148 | 3.93 | 45000 | 1.4472 |
| 1.4786 | 4.15 | 47500 | 1.4302 |
| 1.4653 | 4.37 | 50000 | 1.4128 |
| 1.4496 | 4.59 | 52500 | 1.3991 |
| 1.4445 | 4.81 | 55000 | 1.3943 |
| 1.5114 | 5.02 | 57500 | 1.4551 |
| 1.5054 | 5.24 | 60000 | 1.4525 |
| 1.4817 | 5.46 | 62500 | 1.4259 |
| 1.48 | 5.68 | 65000 | 1.4077 |
| 1.4526 | 5.9 | 67500 | 1.3912 |
| 1.4272 | 6.12 | 70000 | 1.3726 |
| 1.4078 | 6.34 | 72500 | 1.3596 |
| 1.399 | 6.55 | 75000 | 1.3450 |
| 1.386 | 6.77 | 77500 | 1.3328 |
| 1.3704 | 6.99 | 80000 | 1.3192 |
| 1.3538 | 7.21 | 82500 | 1.3131 |
| 1.3468 | 7.43 | 85000 | 1.2916 |
| 1.323 | 7.65 | 87500 | 1.2871 |
| 1.322 | 7.86 | 90000 | 1.2622 |
| 1.2956 | 8.08 | 92500 | 1.2624 |
| 1.2869 | 8.3 | 95000 | 1.2547 |
| 1.2763 | 8.52 | 97500 | 1.2404 |
| 1.275 | 8.74 | 100000 | 1.2305 |
| 1.2709 | 8.96 | 102500 | 1.2301 |
| 1.2514 | 9.18 | 105000 | 1.2179 |
| 1.2563 | 9.39 | 107500 | 1.2134 |
| 1.2487 | 9.61 | 110000 | 1.2111 |
| 1.2337 | 9.83 | 112500 | 1.2041 |
| 1.3215 | 10.05 | 115000 | 1.2879 |
| 1.3364 | 10.27 | 117500 | 1.2850 |
| 1.3286 | 10.49 | 120000 | 1.2779 |
| 1.3202 | 10.7 | 122500 | 1.2730 |
| 1.3181 | 10.92 | 125000 | 1.2651 |
| 1.2952 | 11.14 | 127500 | 1.2544 |
| 1.2889 | 11.36 | 130000 | 1.2506 |
| 1.2747 | 11.58 | 132500 | 1.2339 |
| 1.2729 | 11.8 | 135000 | 1.2277 |
| 1.2699 | 12.02 | 137500 | 1.2201 |
| 1.2508 | 12.23 | 140000 | 1.2163 |
| 1.2438 | 12.45 | 142500 | 1.2091 |
| 1.2445 | 12.67 | 145000 | 1.2003 |
| 1.2314 | 12.89 | 147500 | 1.1957 |
| 1.2188 | 13.11 | 150000 | 1.1843 |
| 1.2071 | 13.33 | 152500 | 1.1805 |
| 1.2123 | 13.54 | 155000 | 1.1766 |
| 1.2016 | 13.76 | 157500 | 1.1661 |
| 1.2079 | 13.98 | 160000 | 1.1625 |
| 1.1884 | 14.2 | 162500 | 1.1525 |
| 1.177 | 14.42 | 165000 | 1.1419 |
| 1.1793 | 14.64 | 167500 | 1.1454 |
| 1.173 | 14.85 | 170000 | 1.1379 |
| 1.1502 | 15.07 | 172500 | 1.1371 |
| 1.1504 | 15.29 | 175000 | 1.1295 |
| 1.146 | 15.51 | 177500 | 1.1203 |
| 1.1487 | 15.73 | 180000 | 1.1137 |
| 1.1329 | 15.95 | 182500 | 1.1196 |
| 1.1259 | 16.17 | 185000 | 1.1075 |
| 1.1287 | 16.38 | 187500 | 1.1037 |
| 1.126 | 16.6 | 190000 | 1.1042 |
| 1.1199 | 16.82 | 192500 | 1.0953 |
| 1.1072 | 17.04 | 195000 | 1.0885 |
| 1.1043 | 17.26 | 197500 | 1.0877 |
| 1.1007 | 17.48 | 200000 | 1.0835 |
| 1.0879 | 17.69 | 202500 | 1.0819 |
| 1.1 | 17.91 | 205000 | 1.0744 |
| 1.0863 | 18.13 | 207500 | 1.0774 |
| 1.087 | 18.35 | 210000 | 1.0759 |
| 1.0755 | 18.57 | 212500 | 1.0618 |
| 1.0832 | 18.79 | 215000 | 1.0628 |
| 1.0771 | 19.01 | 217500 | 1.0611 |
| 1.0703 | 19.22 | 220000 | 1.0555 |
| 1.069 | 19.44 | 222500 | 1.0552 |
| 1.0706 | 19.66 | 225000 | 1.0509 |
| 1.0633 | 19.88 | 227500 | 1.0465 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
NbAiLab/nb-wav2vec2-300m-bokmaal | ce53dd19f2b45a32c6e150496db480b53c5aa7e6 | 2022-06-13T10:54:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"nb-NO",
"dataset:NbAiLab/NPSC",
"transformers",
"NbAiLab/NPSC",
"no",
"nb",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | NbAiLab | null | NbAiLab/nb-wav2vec2-300m-bokmaal | 146 | null | transformers | 4,045 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- no
- nb
- nb-NO
datasets:
- NbAiLab/NPSC
language:
- nb-NO
model-index:
- name: nb-wav2vec2-300m-bokmaal
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: NPSC
type: NbAiLab/NPSC
args: 16K_mp3_bokmaal
metrics:
- name: Test (Bokmål) WER
type: wer
value: 0.0703
- name: Test (Bokmål) CER
type: cer
value: 0.0269
---
# Norwegian Wav2Vec2 Model - 300M - VoxRex - Bokmål
This model is finetuned on top of feature extractor [VoxRex-model](https://huggingface.co/KBLab/wav2vec2-large-voxrex) from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
- **WER: 0.0703** (0.0979)
- **CER: 0.0269** (0.0311)
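For a quick test of the model, a transcription call through the ASR pipeline might look like the sketch below (the audio path is a placeholder; the language-model-boosted decoding may additionally require `pyctcdecode` and `kenlm`):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/nb-wav2vec2-300m-bokmaal")
print(asr("audio.wav", chunk_length_s=30))
```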
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER |
|:--------------|:------------:|
| [NbAiLab/nb-wav2vec2-1b-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-bokmaal) | 6.33 |
| NbAiLab/nb-wav2vec2-300m-bokmaal (this model) | 7.03 |
| [NbAiLab/nb-wav2vec2-300m-nynorsk](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-nynorsk) | 12.22 |
## Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
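As a rough sketch of the final wiring step — attaching an already-built KenLM model to this checkpoint's processor — something like the following should work (the `5gram.bin` path is a placeholder for a language model you built yourself):
```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2Processor.from_pretrained("NbAiLab/nb-wav2vec2-300m-bokmaal")
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

# "5gram.bin" is a placeholder for your own KenLM language model
decoder = build_ctcdecoder(labels=labels, kenlm_model_path="5gram.bin")

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("nb-wav2vec2-300m-bokmaal-with-lm")
```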
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex"
--dataset_config_name="16K_mp3_bokmaal"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="15"
--per_device_train_batch_size="16"
--per_device_eval_batch_size="16"
--gradient_accumulation_steps="2"
--learning_rate="1e-4"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="32"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
|
ThomasNLG/t5-weighter_cnndm-en | bf39bc60189bedbbf4980588a32b27ffbe4b85f1 | 2021-07-09T07:45:02.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"dataset:cnndm",
"arxiv:2103.12693",
"transformers",
"qa",
"classification",
"question",
"answering",
"SQuAD",
"metric",
"nlg",
"t5-small",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ThomasNLG | null | ThomasNLG/t5-weighter_cnndm-en | 146 | null | transformers | 4,046 | ---
language: en
tags:
- qa
- classification
- question
- answering
- SQuAD
- metric
- nlg
- t5-small
license: mit
datasets:
- squad
- cnndm
model-index:
- name: t5-weighter_cnndm-en
results:
- task:
name: Classification
type: Question Weighter
widget:
- text: "a Buckingham Palace guard </s> Who felt on a manhole? </s> This is the embarrassing moment a Buckingham Palace guard slipped and fell on a manhole cover in front of hundreds of shocked tourists as he took up position in his sentry box. [...] The Guard comprises two detachments, one each for Buckingham Palace and St James’s Palace, under the command of the Captain of The Queen’s Guard."
---
# t5-weighter_cnndm-en
## Model description
This is a *classifier* model based on T5-small that predicts whether an answer/question pair is an important fact or not (i.e., is this answer relevant enough to appear in a plausible summary?).
It is actually a component of [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can be used independently as it is.
## How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
```
You can play with the model using the Inference API; the text input format should follow this template (in accordance with how the model was trained):
`text_input = "{ANSWER} </s> {QUESTION} </s> {CONTEXT}"`
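As a concrete local-inference sketch, reusing the widget example from this card (the generated label string is whatever the model was trained to emit, so inspect the decoded output rather than assuming specific label names):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")

answer = "a Buckingham Palace guard"
question = "Who felt on a manhole?"
context = "This is the embarrassing moment a Buckingham Palace guard slipped and fell on a manhole cover in front of hundreds of shocked tourists as he took up position in his sentry box."

text_input = f"{answer} </s> {question} </s> {context}"
input_ids = tokenizer(text_input, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```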
## Training data
The model was trained on synthetic data as described in [Questeval: Summarization asks for fact-based evaluation](https://arxiv.org/abs/2103.12693).
### Citation info
```bibtex
@article{scialom2021questeval,
title={Questeval: Summarization asks for fact-based evaluation},
author={Scialom, Thomas and Dray, Paul-Alexis and Gallinari, Patrick and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo and Wang, Alex},
journal={arXiv preprint arXiv:2103.12693},
year={2021}
}
``` |
airKlizz/t5-base-multi-fr-wiki-news | 3cedcec1afc1e62ccc2d0f701b34805645deff1e | 2021-10-17T20:09:42.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fr",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | airKlizz | null | airKlizz/t5-base-multi-fr-wiki-news | 146 | null | transformers | 4,047 | ---
language: fr
license: mit
---
|
alienspaceman/rus_dreamgen_fulltext_medium | c4e352a97929f34f8a5d9db3801a1bc17fb6d9c0 | 2021-05-21T13:06:00.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | alienspaceman | null | alienspaceman/rus_dreamgen_fulltext_medium | 146 | null | transformers | 4,048 | Entry not found |
m3hrdadfi/icelandic-ner-roberta | b361605cd6f50451d831bd8c8ef12b0cec9c9255 | 2021-05-27T17:13:07.000Z | [
"pytorch",
"tf",
"roberta",
"token-classification",
"is",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | m3hrdadfi | null | m3hrdadfi/icelandic-ner-roberta | 146 | null | transformers | 4,049 | ---
language: is
license: apache-2.0
widget:
- text: "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
- text: "Til hvers að kjósa flokk , sem þykist vera Jafnaðarmannaflokkur rétt fyrir kosningar , þegar að það er hægt að kjósa sannnan jafnaðarmannaflokk , sjálfan Jafnaðarmannaflokk Íslands - Samfylkinguna ."
- text: "Það sannaðist svo eftirminnilega á plötunni Það þarf fólk eins og þig sem kom út fyrir þremur árum , en á henni hann Fálka úr Keflavík og Gáluna , son sinn , til að útsetja lög hans og spila inn ."
- text: "Lögin hafa áður komið út sem aukalög á smáskífum af Hail to the Thief , en á disknum er líka myndband og fleira efni fyrir tölvur ."
- text: "Britney gerði honum viðvart og hann ók henni á UCLA-sjúkrahúsið í Santa Monica en það er í nágrenni hljóðversins ."
---
# IcelandicNER RoBERTa
This model was fine-tuned on the MIM-GOLD-NER dataset for the Icelandic language.
The [MIM-GOLD-NER](http://hdl.handle.net/20.500.12537/42) corpus was developed at [Reykjavik University](https://en.ru.is/) in 2018–2020 and covers eight types of entities:
- Date
- Location
- Miscellaneous
- Money
- Organization
- Percent
- Person
- Time
## Dataset Information
| | Records | B-Date | B-Location | B-Miscellaneous | B-Money | B-Organization | B-Percent | B-Person | B-Time | I-Date | I-Location | I-Miscellaneous | I-Money | I-Organization | I-Percent | I-Person | I-Time |
|:------|----------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|---------:|-------------:|------------------:|----------:|-----------------:|------------:|-----------:|---------:|
| Train | 39988 | 3409 | 5980 | 4351 | 729 | 5754 | 502 | 11719 | 868 | 2112 | 516 | 3036 | 770 | 2382 | 50 | 5478 | 790 |
| Valid | 7063 | 570 | 1034 | 787 | 100 | 1078 | 103 | 2106 | 147 | 409 | 76 | 560 | 104 | 458 | 7 | 998 | 136 |
| Test | 8299 | 779 | 1319 | 935 | 153 | 1315 | 108 | 2247 | 172 | 483 | 104 | 660 | 167 | 617 | 10 | 1089 | 158 |
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
| entity | precision | recall | f1-score | support |
|:-------------:|:---------:|:--------:|:--------:|:-------:|
| Date | 0.961881 | 0.971759 | 0.966794 | 779.0 |
| Location | 0.963047 | 0.968158 | 0.965595 | 1319.0 |
| Miscellaneous | 0.884946 | 0.880214 | 0.882574 | 935.0 |
| Money | 0.980132 | 0.967320 | 0.973684 | 153.0 |
| Organization | 0.924300 | 0.928517 | 0.926404 | 1315.0 |
| Percent | 1.000000 | 1.000000 | 1.000000 | 108.0 |
| Person | 0.978591 | 0.976413 | 0.977501 | 2247.0 |
| Time | 0.965116 | 0.965116 | 0.965116 | 172.0 |
| micro avg | 0.951258 | 0.952476 | 0.951866 | 7028.0 |
| macro avg | 0.957252 | 0.957187 | 0.957209 | 7028.0 |
| weighted avg | 0.951237 | 0.952476 | 0.951849 | 7028.0 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "m3hrdadfi/icelandic-ner-roberta"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kristin manneskja getur ekki lagt frásagnir af Jesú Kristi á hilluna vegna þess að hún sé búin að lesa þær ."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [IcelandicNER Issues](https://github.com/m3hrdadfi/icelandic-ner/issues) repo.
|
razent/cotext-1-cc | 2def0f774019e02e1f26570b3f1c6689459dc397 | 2022-03-15T03:02:50.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"feature-extraction",
"code",
"dataset:code_search_net",
"transformers"
] | feature-extraction | false | razent | null | razent/cotext-1-cc | 146 | null | transformers | 4,050 | ---
language: code
datasets:
- code_search_net
---
# CoText (1-CC)
## Introduction
Paper: [CoTexT: Multi-task Learning with Code-Text Transformer](https://aclanthology.org/2021.nlp4prog-1.5.pdf)
Authors: _Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Anibal, Alec Peltekian, Yanfang Ye_
## How to use
Supported languages:
```shell
"go"
"java"
"javascript"
"php"
"python"
"ruby"
```
For more details, do check out [our Github repo](https://github.com/justinphan3110/CoTexT).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("razent/cotext-1-cc")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/cotext-1-cc").to(device)

sentence = "def add(a, b): return a + b"
text = "python: " + sentence + " </s>"

encoding = tokenizer.encode_plus(text, padding="max_length", truncation=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=256,
early_stopping=True
)
for output in outputs:
line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(line)
```
## Citation
```
@inproceedings{phan-etal-2021-cotext,
title = "{C}o{T}ex{T}: Multi-task Learning with Code-Text Transformer",
author = "Phan, Long and
Tran, Hieu and
Le, Daniel and
Nguyen, Hieu and
Annibal, James and
Peltekian, Alec and
Ye, Yanfang",
booktitle = "Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nlp4prog-1.5",
doi = "10.18653/v1/2021.nlp4prog-1.5",
pages = "40--47"
}
``` |
ml6team/keyphrase-generation-keybart-inspec | efdb4ed322cf3d95cfaeb8d59486e80263346ac8 | 2022-06-16T18:02:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:midas/inspec",
"arxiv:2112.08547",
"transformers",
"keyphrase-generation",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ml6team | null | ml6team/keyphrase-generation-keybart-inspec | 146 | null | transformers | 4,051 | ---
language: en
license: mit
tags:
- keyphrase-generation
datasets:
- midas/inspec
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks."
example_title: "Example 2"
model-index:
- name: DeDeckerThomas/keyphrase-generation-keybart-inspec
results:
- task:
type: keyphrase-generation
name: Keyphrase Generation
dataset:
type: midas/inspec
name: inspec
metrics:
- type: F1@M (Present)
value: 0.361
name: F1@M (Present)
- type: F1@O (Present)
value: 0.329
name: F1@O (Present)
- type: F1@M (Absent)
value: 0.083
name: F1@M (Absent)
- type: F1@O (Absent)
value: 0.080
name: F1@O (Absent)
---
# 🔑 Keyphrase Generation Model: KeyBART-inspec
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [KeyBART](https://huggingface.co/bloomberg/KeyBART) as its base model and fine-tunes it on the [Inspec dataset](https://huggingface.co/datasets/midas/inspec). KeyBART focuses on learning a better representation of keyphrases in a generative setting. It produces the keyphrases associated with the input document from a corrupted input. The input is changed by token masking, keyphrase masking and keyphrase replacement. This model can already be used without any fine-tuning, but can be fine-tuned if needed.
You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547).
Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021).
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* This keyphrase generation model is very domain-specific and will perform very well on abstracts of scientific papers. It's not recommended to use this model for other domains, but you are free to test it out.
* Only works for English documents.
* For a custom model, please consult the [training notebook]() for more information.
### ❓ How To Use
```python
# Model parameters
from transformers import (
Text2TextGenerationPipeline,
AutoModelForSeq2SeqLM,
AutoTokenizer,
)
class KeyphraseGenerationPipeline(Text2TextGenerationPipeline):
def __init__(self, model, keyphrase_sep_token=";", *args, **kwargs):
super().__init__(
model=AutoModelForSeq2SeqLM.from_pretrained(model),
tokenizer=AutoTokenizer.from_pretrained(model),
*args,
**kwargs
)
self.keyphrase_sep_token = keyphrase_sep_token
def postprocess(self, model_outputs):
results = super().postprocess(
model_outputs=model_outputs
)
return [[keyphrase.strip() for keyphrase in result.get("generated_text").split(self.keyphrase_sep_token) if keyphrase != ""] for result in results]
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-generation-keybart-inspec"
generator = KeyphraseGenerationPipeline(model=model_name)
```

```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = generator(text)
print(keyphrases)
```
```
# Output
[['keyphrase extraction', 'text analysis', 'keyphrases', 'human annotators', 'artificial']]
```
## 📚 Training Dataset
[Inspec](https://huggingface.co/datasets/midas/inspec) is a keyphrase extraction/generation dataset consisting of 2000 English scientific papers from the scientific domains of Computers and Control and Information Technology published between 1998 and 2002. The keyphrases are annotated by professional indexers or editors.
You can find more information in the [paper](https://dl.acm.org/doi/10.3115/1119355.1119383).
## 👷♂️ Training Procedure
For more in detail information, you can take a look at the [training notebook]().
### Training Parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 5e-5 |
| Epochs | 15 |
| Early Stopping Patience | 1 |
### Preprocessing
The documents in the dataset are already preprocessed into lists of words with the corresponding keyphrases. The only thing that must be done is tokenization and joining all keyphrases into one string with a separator of choice (```;```).
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART", add_prefix_space=True)
# Dataset parameters
dataset_full_name = "midas/inspec"
dataset_subset = "raw"
dataset_document_column = "document"
keyphrase_sep_token = ";"
def preprocess_keyphrases(text_ids, kp_list):
kp_order_list = []
kp_set = set(kp_list)
text = tokenizer.decode(
text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
text = text.lower()
for kp in kp_set:
kp = kp.strip()
kp_index = text.find(kp.lower())
kp_order_list.append((kp_index, kp))
kp_order_list.sort()
present_kp, absent_kp = [], []
for kp_index, kp in kp_order_list:
if kp_index < 0:
absent_kp.append(kp)
else:
present_kp.append(kp)
return present_kp, absent_kp
def preprocess_fuction(samples):
processed_samples = {"input_ids": [], "attention_mask": [], "labels": []}
for i, sample in enumerate(samples[dataset_document_column]):
input_text = " ".join(sample)
inputs = tokenizer(
input_text,
padding="max_length",
truncation=True,
)
present_kp, absent_kp = preprocess_keyphrases(
text_ids=inputs["input_ids"],
kp_list=samples["extractive_keyphrases"][i]
+ samples["abstractive_keyphrases"][i],
)
keyphrases = present_kp
keyphrases += absent_kp
target_text = f" {keyphrase_sep_token} ".join(keyphrases)
with tokenizer.as_target_tokenizer():
targets = tokenizer(
target_text, max_length=40, padding="max_length", truncation=True
)
targets["input_ids"] = [
(t if t != tokenizer.pad_token_id else -100)
for t in targets["input_ids"]
]
for key in inputs.keys():
processed_samples[key].append(inputs[key])
processed_samples["labels"].append(targets["input_ids"])
return processed_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_fuction, batched=True)
```
### Postprocessing
For the post-processing, you will need to split the string based on the keyphrase separator.
```python
def extract_keyphrases(examples):
return [example.split(keyphrase_sep_token) for example in examples]
```
## 📝 Evaluation results
Traditional evaluation methods are the precision, recall and F1-score @k,m where k is the number that stands for the first k predicted keyphrases and m for the average amount of predicted keyphrases. In keyphrase generation you also look at F1@O where O stands for the number of ground truth keyphrases.
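As a small illustration of these metrics, a per-document F1@k can be computed like this (names and example keyphrases are purely illustrative):
```python
# Minimal per-document F1@k sketch; average over documents for a corpus-level score
def f1_at_k(predicted, gold, k=5):
    top_k = [kp.lower() for kp in predicted[:k]]
    gold_set = {kp.lower() for kp in gold}
    tp = sum(1 for kp in top_k if kp in gold_set)
    precision = tp / len(top_k) if top_k else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

print(f1_at_k(["keyphrase extraction", "text analysis"], ["keyphrase extraction", "deep learning"]))
```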
The model achieves the following results on the Inspec test set:
### Extractive Keyphrases
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | P@O | R@O | F1@O |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Inspec Test Set | 0.40 | 0.37 | 0.35 | 0.20 | 0.37 | 0.24 | 0.42 | 0.37 | 0.36 | 0.33 | 0.33 | 0.33 |
### Abstractive Keyphrases
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | P@O | R@O | F1@O |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Inspec Test Set | 0.07 | 0.12 | 0.08 | 0.03 | 0.12 | 0.05 | 0.08 | 0.12 | 0.08 | 0.08 | 0.12 | 0.08 |
For more information on the evaluation process, you can take a look at the keyphrase extraction [evaluation notebook]().
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |
oliverguhr/wav2vec2-large-xlsr-53-german-cv9 | 15d44405d5bda6bf702f4194727ac51067ba5942 | 2022-07-27T07:27:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_9_0",
"transformers",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | oliverguhr | null | oliverguhr/wav2vec2-large-xlsr-53-german-cv9 | 146 | null | transformers | 4,052 | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
model-index:
- name: wav2vec2-large-xlsr-53-german-cv9
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 9
type: mozilla-foundation/common_voice_9_0
args: de
metrics:
- name: Test WER
type: wer
value: 9.480663281840769
- name: Test CER
type: cer
value: 1.9167347943074394
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 9
type: mozilla-foundation/common_voice_9_0
args: de
metrics:
- name: Test WER (+LM)
type: wer
value: 7.49027762774117
- name: Test CER (+LM)
type: cer
value: 1.9167347943074394
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 8.122005951166668
- name: Test CER
type: cer
value: 1.
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: de
metrics:
- name: Test WER (+LM)
type: wer
value: 6.1453182045203544
- name: Test CER (+LM)
type: cer
value: 1.5247743373447677
---
# wav2vec2-large-xlsr-53-german-cv9
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - DE dataset.
It achieves the following results on the test set:
- CER: 2.273015898213336
- Wer: 9.480663281840769
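A minimal transcription sketch (not part of the original card; the audio file name is a placeholder, and no language model is used, i.e. the setting reported as "Test WER" without +LM):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for German speech recognition.
asr = pipeline("automatic-speech-recognition", model="oliverguhr/wav2vec2-large-xlsr-53-german-cv9")

# "sample_de.wav" is a placeholder for a German recording (ideally 16 kHz mono).
print(asr("sample_de.wav")["text"])
```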
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Eval Wer|
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 0.4129 | 1.0 | 3557 | 0.3015 | 0.2499 |
| 0.2121 | 2.0 | 7114 | 0.1596 | 0.1567 |
| 0.1455 | 3.0 | 10671 | 0.1377 | 0.1354 |
| 0.1436 | 4.0 | 14228 | 0.1301 | 0.1282 |
| 0.1144 | 5.0 | 17785 | 0.1225 | 0.1245 |
| 0.1219 | 6.0 | 21342 | 0.1254 | 0.1208 |
| 0.104 | 7.0 | 24899 | 0.1198 | 0.1232 |
| 0.1016 | 8.0 | 28456 | 0.1149 | 0.1174 |
| 0.1093 | 9.0 | 32013 | 0.1186 | 0.1186 |
| 0.0858 | 10.0 | 35570 | 0.1182 | 0.1164 |
| 0.102 | 11.0 | 39127 | 0.1191 | 0.1186 |
| 0.0834 | 12.0 | 42684 | 0.1161 | 0.1096 |
| 0.0916 | 13.0 | 46241 | 0.1147 | 0.1107 |
| 0.0811 | 14.0 | 49798 | 0.1174 | 0.1136 |
| 0.0814 | 15.0 | 53355 | 0.1132 | 0.1114 |
| 0.0865 | 16.0 | 56912 | 0.1134 | 0.1097 |
| 0.0701 | 17.0 | 60469 | 0.1096 | 0.1054 |
| 0.0891 | 18.0 | 64026 | 0.1110 | 0.1076 |
| 0.071 | 19.0 | 67583 | 0.1141 | 0.1074 |
| 0.0726 | 20.0 | 71140 | 0.1094 | 0.1093 |
| 0.0647 | 21.0 | 74697 | 0.1088 | 0.1095 |
| 0.0643 | 22.0 | 78254 | 0.1105 | 0.1044 |
| 0.0764 | 23.0 | 81811 | 0.1072 | 0.1042 |
| 0.0605 | 24.0 | 85368 | 0.1095 | 0.1026 |
| 0.0722 | 25.0 | 88925 | 0.1144 | 0.1066 |
| 0.0597 | 26.0 | 92482 | 0.1087 | 0.1022 |
| 0.062 | 27.0 | 96039 | 0.1073 | 0.1027 |
| 0.0536 | 28.0 | 99596 | 0.1068 | 0.1027 |
| 0.0616 | 29.0 | 103153 | 0.1097 | 0.1037 |
| 0.0642 | 30.0 | 106710 | 0.1117 | 0.1020 |
| 0.0555 | 31.0 | 110267 | 0.1109 | 0.0990 |
| 0.0632 | 32.0 | 113824 | 0.1104 | 0.0977 |
| 0.0482 | 33.0 | 117381 | 0.1108 | 0.0958 |
| 0.0601 | 34.0 | 120938 | 0.1095 | 0.0957 |
| 0.0508 | 35.0 | 124495 | 0.1079 | 0.0973 |
| 0.0526 | 36.0 | 128052 | 0.1068 | 0.0967 |
| 0.0487 | 37.0 | 131609 | 0.1081 | 0.0966 |
| 0.0495 | 38.0 | 135166 | 0.1099 | 0.0956 |
| 0.0528 | 39.0 | 138723 | 0.1091 | 0.0923 |
| 0.0439 | 40.0 | 142280 | 0.1111 | 0.0928 |
| 0.0467 | 41.0 | 145837 | 0.1131 | 0.0943 |
| 0.0407 | 42.0 | 149394 | 0.1115 | 0.0944 |
| 0.046 | 43.0 | 152951 | 0.1106 | 0.0935 |
| 0.0447 | 44.0 | 156508 | 0.1083 | 0.0919 |
| 0.0434 | 45.0 | 160065 | 0.1093 | 0.0909 |
| 0.0472 | 46.0 | 163622 | 0.1092 | 0.0921 |
| 0.0414 | 47.0 | 167179 | 0.1106 | 0.0922 |
| 0.0501 | 48.0 | 170736 | 0.1094 | 0.0918 |
| 0.0388 | 49.0 | 174293 | 0.1099 | 0.0918 |
| 0.0428 | 50.0 | 177850 | 0.1103 | 0.0915 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
NlpHUST/t5-vi-en-base | 67987bafd5a35ece746cd7528d3a64f323e49ce3 | 2021-06-23T03:40:44.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NlpHUST | null | NlpHUST/t5-vi-en-base | 145 | null | transformers | 4,053 | ---
language:
- vi
tags:
- t5
- seq2seq
---
# Machine translation for Vietnamese
## Model Description
T5-vi-en-base is a transformer model for Vietnamese-to-English machine translation, designed using the T5 architecture.
## Training data
T5-vi-en-base was trained on 4M sentence pairs (English, Vietnamese).
### How to use
```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-base")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-base")
model.to(device)
src = "Theo lãnh đạo Sở Y tế, 3 người này không có triệu chứng sốt, ho, khó thở, đã được lấy mẫu xét nghiệm và cách ly tập trung."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=256,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
# Output: According to the head of the Department of Health, the three people had no symptoms of fever, cough, shortness of breath, were taken samples for testing and concentrated quarantine.
``` |
cardiffnlp/bertweet-base-irony | 49963c2b5ddec67a8f0337345a6aaccea1d52505 | 2021-05-20T14:48:25.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-irony | 145 | null | transformers | 4,054 | |
google/pegasus-aeslc | 0bc6bad1a4385c7d99237faf84d66e035dfb57ed | 2020-08-25T18:50:01.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"arxiv:1912.08777",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | google | null | google/pegasus-aeslc | 145 | null | transformers | 4,055 | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
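A minimal usage sketch (not from the original card; the example email is invented):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-aeslc"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# AESLC is email subject-line generation, so the input is an email body.
email = ("Hi team, the quarterly report is due on Friday. "
         "Please send me your sections by Thursday noon so I have time to merge them.")
batch = tokenizer(email, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```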
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data:
- wikihow dataset contains newline characters which is useful for paragraph segmentation, while the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loses this information.
- we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
imvladikon/charbert-bert-wiki | efb4ab028d25eef2b0af50f327037c990229d831 | 2022-01-30T11:35:48.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"arxiv:2011.01513",
"transformers",
"language model"
] | null | false | imvladikon | null | imvladikon/charbert-bert-wiki | 145 | null | transformers | 4,056 | ---
language:
- en
tags:
- language model
datasets:
- wikipedia
---
pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT)
```
@misc{ma2020charbert,
title={CharBERT: Character-aware Pre-trained Language Model},
author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu},
year={2020},
eprint={2011.01513},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mrm8488/t5-small-finetuned-wikiSQL | 056271dcea0ef8323a836f4f836a1632b0579234 | 2021-06-23T13:09:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:wikisql",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-wikiSQL | 145 | 2 | transformers | 4,057 | ---
language: en
datasets:
- wikisql
---
# T5-small fine-tuned on WikiSQL
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on [WikiSQL](https://github.com/salesforce/WikiSQL) for **English** to **SQL** **translation**.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the Dataset 📚
Dataset ID: ```wikisql``` from [Huggingface/NLP](https://huggingface.co/nlp/viewer/?dataset=wikisql)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| wikisql | train | 56355 |
| wikisql | valid | 14436 |
How to load it from [nlp](https://github.com/huggingface/nlp)
```python
train_dataset = nlp.load_dataset('wikisql', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('wikisql', split=nlp.Split.VALIDATION)
```
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
## Model in Action 🚀
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-wikiSQL")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-wikiSQL")
def get_sql(query):
input_text = "translate English to SQL: %s </s>" % query
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'])
return tokenizer.decode(output[0])
query = "How many millions of params there are in HF-hub?"
get_sql(query)
# output: 'SELECT COUNT Params FROM table WHERE Location = HF-hub'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
raynardj/wenyanwen-chinese-translate-to-ancient | a5d4494b86f434a71fcde04d09e8e8dd73687663 | 2021-11-29T14:42:25.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"zh",
"transformers",
"translation",
"文言文",
"ancient",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | raynardj | null | raynardj/wenyanwen-chinese-translate-to-ancient | 145 | 2 | transformers | 4,058 | ---
language:
- zh
tags:
- translation
- 文言文
- ancient
license: apache-2.0
widget:
- text: "轻轻的我走了,正如我轻轻的来。我轻轻的招手,作别西天的云彩。"
example_title: "再别康桥"
- text: "当恐惧逝去,我会打开心眼,看清它的轨迹。"
example_title: "沙丘"
- text: "暴力是无能者的最后手段"
example_title: "基地"
---
# From modern Chinese to Ancient Chinese
> This model translates modern Chinese into Classical Chinese, so readers interested in the task can presumably read at least modern Chinese.
* A translator from modern Chinese to Classical Chinese. You are welcome to visit the [GitHub page of the Classical Chinese poetry project "Yuan" to discuss and add a ⭐️](https://github.com/raynardj/yuan)
* There is also a matching [🤗 Classical-to-modern Chinese model](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern); its input can be **punctuated** or **unpunctuated**.
* The training corpus consists of more than 900,000 sentence pairs; [dataset link 📚](https://github.com/BangBOOM/Classical-Chinese).
## Recommended inference pipeline
**Note**: you must set the `eos_token_id` of the ```generate``` function to 102 to obtain a complete translation; otherwise leftover text remains after the sentence (caused by using the pad label = -100 when computing the loss).
The Compute button on the Hugging Face page currently suffers from this issue, so we recommend the following code to obtain translations 🎻
```python
import torch
from transformers import (
EncoderDecoderModel,
AutoTokenizer
)
PRETRAINED = "raynardj/wenyanwen-chinese-translate-to-ancient"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = EncoderDecoderModel.from_pretrained(PRETRAINED)
def inference(text):
tk_kwargs = dict(
truncation=True,
max_length=128,
padding="max_length",
return_tensors='pt')
inputs = tokenizer([text,],**tk_kwargs)
with torch.no_grad():
return tokenizer.batch_decode(
model.generate(
inputs.input_ids,
attention_mask=inputs.attention_mask,
num_beams=3,
bos_token_id=101,
eos_token_id=tokenizer.sep_token_id,
pad_token_id=tokenizer.pad_token_id,
), skip_special_tokens=True)
```
## Examples from the current version
> If you come across fun or quirky examples while playing with the model, feedback is welcome.
```python
>>> inference('你连一百块都不肯给我')
['不 肯 与 我 百 钱 。']
```
```python
>>> inference("他不能做长远的谋划")
['不 能 为 远 谋 。']
```
```python
>>> inference("我们要干一番大事业")
['吾 属 当 举 大 事 。']
```
```python
>>> inference("这感觉,已经不对,我努力,在挽回")
['此 之 谓 也 , 已 不 可 矣 , 我 勉 之 , 以 回 之 。']
```
```python
>>> inference("轻轻地我走了, 正如我轻轻地来, 我挥一挥衣袖,不带走一片云彩")
['轻 我 行 , 如 我 轻 来 , 挥 袂 不 携 一 片 云 。']
```
## Other Classical Chinese resources
* [Project source code 🌟 — stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-lingual search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern-to-Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical-to-modern Chinese translation model, which also accepts unpunctuated input 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Sentence segmentation and punctuation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry generation 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
|
rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment | efeb9edba760887149556619b6b81586027795f1 | 2021-05-19T00:35:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"hi",
"en",
"dataset:SAIL 2017",
"transformers",
"codemix",
"license:apache-2.0"
] | text-classification | false | rohanrajpal | null | rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment | 145 | null | transformers | 4,059 | ---
language:
- hi
- en
tags:
- hi
- en
- codemix
license: "apache-2.0"
datasets:
- SAIL 2017
metrics:
- fscore
- accuracy
---
# BERT codemixed base model for hinglish (cased)
## Model description
Input for the model: Any codemixed hinglish text
Output for the model: Sentiment. (0 - Negative, 1 - Neutral, 2 - Positive)
I took a bert-base-multilingual-cased model from Huggingface and finetuned it on [SAIL 2017](http://www.dasdipankar.com/SAILCodeMixed.html) dataset.
Performance of this model on the SAIL 2017 dataset
| metric | score |
|------------|----------|
| acc | 0.588889 |
| f1 | 0.582678 |
| acc_and_f1 | 0.585783 |
| precision | 0.586516 |
| recall | 0.588889 |
## Intended uses & limitations
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("rohanrajpal/bert-base-codemixed-uncased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("rohanrajpal/bert-base-codemixed-uncased-sentiment")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in *TensorFlow*:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('rohanrajpal/bert-base-codemixed-uncased-sentiment')
model = TFBertModel.from_pretrained("rohanrajpal/bert-base-codemixed-uncased-sentiment")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
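To map the output onto the three sentiment labels listed above, a minimal sketch (not part of the original card; it loads this card's own checkpoint id, and the Hinglish example sentence is invented):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

labels = ["Negative", "Neutral", "Positive"]  # 0, 1, 2 as described above
inputs = tokenizer("Yeh movie bahut achhi thi yaar!", return_tensors="pt")
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(dim=-1).item()
print(labels[predicted])
```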
#### Limitations and bias
Coming soon!
## Training data
I trained on the SAIL 2017 dataset [link](http://amitavadas.com/SAIL/Data/SAIL_2017.zip) on this [pretrained model](https://huggingface.co/bert-base-multilingual-cased).
## Training procedure
No preprocessing.
## Eval results
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
zhufy/squad-en-bert-base | 413398c40a09b607b5bcc5dff592f456112dda89 | 2022-04-23T05:09:27.000Z | [
"pytorch",
"bert",
"question-answering",
"English",
"dataset:SQuAD 2.0",
"transformers",
"bert-base",
"autotrain_compatible"
] | question-answering | false | zhufy | null | zhufy/squad-en-bert-base | 145 | null | transformers | 4,060 | ---
language: English
task: extractive question answering
datasets: SQuAD 2.0
tags:
- bert-base
---
# Model Description
This model is for English extractive question answering. It is based on the [bert-base-cased](https://huggingface.co/bert-base-cased) model, and it is case-sensitive: it makes a difference between english and English.
# Training data
[English SQuAD v2.0](https://rajpurkar.github.io/SQuAD-explorer/)
# How to use
You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline:
``` python
>>> from transformers.pipelines import pipeline
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> context = "A problem is regarded as inherently difficult if its
solution requires significant resources, whatever the
algorithm used. The theory formalizes this intuition,
by introducing mathematical models of computation to
study these problems and quantifying the amount of
resources needed to solve them, such as time and storage.
Other complexity measures are also used, such as the
amount of communication (used in communication complexity),
the number of gates in a circuit (used in circuit
complexity) and the number of processors (used in parallel
computing). One of the roles of computational complexity
theory is to determine the practical limits on what
computers can and cannot do."
>>> question = "What are two basic primary resources used to
guage complexity?"
>>> inputs = {"question": question,
"context":context }
>>> nlp(inputs)
{'score': 0.8589141368865967,
'start': 305,
'end': 321,
'answer': 'time and storage'}
``` |
kakife3586/Null | b7de9ff384a3a30d568109cb59c392b3aba5261d | 2022-07-10T03:27:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kakife3586 | null | kakife3586/Null | 145 | null | transformers | 4,061 | Entry not found |
autoevaluate/roberta-base-squad2 | 55f272a1e6e3c73e8797e7d38630dff8c37de8d9 | 2022-07-20T13:11:11.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | autoevaluate | null | autoevaluate/roberta-base-squad2 | 145 | null | transformers | 4,062 | ---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---
# roberta-base for QA
> Note: this is a clone of [`roberta-base-squad2`](https://huggingface.co/deepset/roberta-base-squad2) for internal testing.
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
Using the official [question answering notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) from `transformers` yields:
```
{'HasAns_exact': 77.93522267206478,
'HasAns_f1': 83.93715663402219,
'HasAns_total': 5928,
'NoAns_exact': 81.90075693860386,
'NoAns_f1': 81.90075693860386,
'NoAns_total': 5945,
'best_exact': 79.92082877116145,
'best_exact_thresh': 0.0,
'best_f1': 82.91749890730902,
'best_f1_thresh': 0.0,
'exact': 79.92082877116145,
'f1': 82.91749890730917,
'total': 11873}
```
which is consistent with the officially reported results. Using the question answering `Evaluator` from `evaluate` gives:
```
{'HasAns_exact': 77.91835357624831,
'HasAns_f1': 84.07820736158186,
'HasAns_total': 5928,
'NoAns_exact': 81.91757779646763,
'NoAns_f1': 81.91757779646763,
'NoAns_total': 5945,
'best_exact': 79.92082877116145,
'best_exact_thresh': 0.996823787689209,
'best_f1': 82.99634576260925,
'best_f1_thresh': 0.996823787689209,
'exact': 79.92082877116145,
'f1': 82.9963457626089,
'latency_in_seconds': 0.016523243643392558,
'samples_per_second': 60.52080460605492,
'total': 11873,
'total_time_in_seconds': 196.18047177799986}
```
which is also consistent with the officially reported results.
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
allenai/t5-small-next-word-generator-qoogle | f97b573aaec7ba6a86bfce5777d0711ccb829628 | 2021-06-23T11:14:19.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/t5-small-next-word-generator-qoogle | 144 | 1 | transformers | 4,063 | Next word generator trained on questions. Receives partial questions and tries to predict the next word.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-next-word-generator-qoogle"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Which")
run_model("Which two")
run_model("Which two counties")
run_model("Which two counties are")
run_model("Which two counties are the")
run_model("Which two counties are the biggest")
run_model("Which two counties are the biggest economic")
run_model("Which two counties are the biggest economic powers")
```
which should result in the following:
```
['one']
['statements']
['are']
['in']
['most']
['in']
['zones']
['of']
```
|
allenai/unifiedqa-v2-t5-small-1251000 | 46255a0defae614c9e7057e461ae5276432c3212 | 2022-02-21T23:11:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/unifiedqa-v2-t5-small-1251000 | 144 | null | transformers | 4,064 | # Further details: https://github.com/allenai/unifiedqa |
textattack/roberta-base-rotten-tomatoes | 3fd77299e29adc4fe8c5f181810a6a1e7498dbd4 | 2021-05-20T22:17:29.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/roberta-base-rotten-tomatoes | 144 | null | transformers | 4,065 | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9033771106941839, as measured by the
eval set accuracy, found after 2 epochs.
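A minimal inference sketch (not part of the original card; the label order is assumed to follow the rotten_tomatoes convention of 0 = negative, 1 = positive — check the model config to confirm):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/roberta-base-rotten-tomatoes"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("A moving, beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print({"negative": probs[0].item(), "positive": probs[1].item()})
```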
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
moussaKam/AraBART | cb67d617b84f52755c0cbf34684a41fe0f01cdb1 | 2022-05-05T13:17:29.000Z | [
"pytorch",
"mbart",
"feature-extraction",
"ar",
"transformers",
"summarization",
"bart",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | moussaKam | null | moussaKam/AraBART | 144 | 3 | transformers | 4,066 | ---
tags:
- summarization
- bart
language:
- ar
widget:
- text: بيروت هي عاصمة <mask>.
license: apache-2.0
pipeline_tag: "fill-mask"
---
AraBART is the first Arabic model in which the encoder and the decoder are pretrained end-to-end, based on BART. AraBART follows the architecture of BART-Base
which has 6 encoder and 6 decoder layers and 768 hidden dimensions. In total AraBART has 139M parameters.
AraBART achieves the best performance on multiple abstractive summarization datasets, outperforming strong baselines including pretrained Arabic BERT-based models and the multilingual mBART and mT5 models.
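A minimal fill-mask sketch using the widget example above (this assumes the checkpoint is supported by the `fill-mask` pipeline, which is the task declared in this card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="moussaKam/AraBART")
for prediction in fill_mask("بيروت هي عاصمة <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```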
|
Helsinki-NLP/opus-mt-tc-big-he-en | d92e718946e9085ef6607711ccdc90ec1abd77c8 | 2022-06-01T12:59:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"he",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-he-en | 144 | null | transformers | 4,067 | ---
language:
- en
- he
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-he-en
results:
- task:
name: Translation heb-eng
type: translation
args: heb-eng
dataset:
name: flores101-devtest
type: flores_101
args: heb eng devtest
metrics:
- name: BLEU
type: bleu
value: 44.1
- task:
name: Translation heb-eng
type: translation
args: heb-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: heb-eng
metrics:
- name: BLEU
type: bleu
value: 53.8
---
# opus-mt-tc-big-he-en
Neural machine translation model for translating from Hebrew (he) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): heb
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT heb-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"היא שכחה לכתוב לו.",
"אני רוצה לדעת מיד כשמשהו יקרה."
]
model_name = "pytorch-models/opus-mt-tc-big-he-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# She forgot to write to him.
# I want to know as soon as something happens.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-he-en")
print(pipe("היא שכחה לכתוב לו."))
# expected output: She forgot to write to him.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| heb-eng | tatoeba-test-v2021-08-07 | 0.68565 | 53.8 | 10519 | 77427 |
| heb-eng | flores101-devtest | 0.68116 | 44.1 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:27:12 EEST 2022
* port machine: LM0-400-22516.local
|
mriggs/wikisource_lemmatized_epoch1 | 7e024a1b9d63ba7a72dcaeb30d0ff5de4cc7729c | 2022-05-16T08:30:44.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/wikisource_lemmatized_epoch1 | 144 | null | transformers | 4,068 | Entry not found |
ahmeddbahaa/xlmroberta2xlmroberta-finetune-summarization-ur | 25c716a9b2682c8b3fd5d1d27854b44f375d3d24 | 2022-06-16T10:27:20.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"ur",
"xlm-roberta",
"Abstractive Summarization",
"roberta",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/xlmroberta2xlmroberta-finetune-summarization-ur | 144 | null | transformers | 4,069 | ---
tags:
- summarization
- ur
- encoder-decoder
- xlm-roberta
- Abstractive Summarization
- roberta
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: xlmroberta2xlmroberta-finetune-summarization-ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta2xlmroberta-finetune-summarization-ur
This model is a fine-tuned version of [](https://huggingface.co/) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4576
- Rouge-1: 26.51
- Rouge-2: 9.4
- Rouge-l: 23.21
- Gen Len: 19.99
- Bertscore: 68.15
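A minimal inference sketch (not part of the original card): the checkpoint is an XLM-R encoder-decoder, so it can be loaded with `EncoderDecoderModel`; the Urdu input text and the generation settings are placeholders.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "ahmeddbahaa/xlmroberta2xlmroberta-finetune-summarization-ur"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

text = "یہاں خلاصہ کرنے کے لیے اردو مضمون کا متن رکھیں۔"  # placeholder Urdu article text
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                             max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```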
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
samroni/puisi_gpt2 | 97c5062e5b99efbcf1d542e7e8b0546e55770e7c | 2022-07-11T20:37:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | samroni | null | samroni/puisi_gpt2 | 144 | null | transformers | 4,070 | Entry not found |
codeparrot/codeparrot-small-text-to-code | 5d1a6204d944095ea923facac39081b8ba092b72 | 2022-07-19T15:46:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"code",
"dataset:codeparrot/codeparrot-clean",
"dataset:codeparrot/github-jupyter-text-to-code",
"transformers",
"generation",
"license:apache-2.0"
] | text-generation | false | codeparrot | null | codeparrot/codeparrot-small-text-to-code | 144 | null | transformers | 4,071 | ---
language:
- code
license: apache-2.0
tags:
- code
- gpt2
- generation
datasets:
- "codeparrot/codeparrot-clean"
- "codeparrot/github-jupyter-text-to-code"
---
# CodeParrot 🦜 small for text-to-code generation
This model is [CodeParrot-small](https://huggingface.co/codeparrot/codeparrot-small) (from the `megatron` branch) fine-tuned on [github-jupyter-text-to-code](https://huggingface.co/datasets/codeparrot/github-jupyter-text-to-code), a dataset where the samples are a succession of docstrings and their Python code, originally extracted from Jupyter notebooks parsed in this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed).
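A minimal usage sketch (not from the original card; the prompt and decoding settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="codeparrot/codeparrot-small-text-to-code")

# The fine-tuning data pairs docstring-style text with the code that follows it.
prompt = '"""Return the sum of squares of the numbers in a list."""\n'
print(generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"])
```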
|
Connorvr/BrightBot-small | c74836e9e613e57b8a934a6c240644db62d27972 | 2021-11-11T07:33:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Connorvr | null | Connorvr/BrightBot-small | 143 | null | transformers | 4,072 | ---
tags:
- conversational
---
# Enlightened GPT model |
Emanuel/bertweet-emotion-base | 7a6581aad00369a89a2eb9e2ca9aec2ee4792f4e | 2022-07-13T12:37:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Emanuel | null | Emanuel/bertweet-emotion-base | 143 | 1 | transformers | 4,073 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bertweet-emotion-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.945
---
# bertweet-emotion-base
This model is a fine-tuned version of [Bertweet](https://huggingface.co/vinai/bertweet-base). It achieves the following results on the evaluation set:
- Loss: 0.1172
- Accuracy: 0.945
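A minimal inference sketch (not part of the original card; the example tweet is invented):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Emanuel/bertweet-emotion-base")
print(classifier("I finally got the job, I can't stop smiling!"))
```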
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 |
cambridgeltl/trans-encoder-cross-simcse-roberta-large | 6cc9a073d62636bc9fac1730c13e6736c2b7c98e | 2021-11-26T18:30:02.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/trans-encoder-cross-simcse-roberta-large | 143 | 1 | transformers | 4,074 | Entry not found |
ceostroff/harry-potter-gpt2-fanfiction | 868cd2c6bbd57898211b69baab308f775d4e5c4c | 2021-05-21T14:51:47.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"harry-potter",
"license:mit"
] | text-generation | false | ceostroff | null | ceostroff/harry-potter-gpt2-fanfiction | 143 | null | transformers | 4,075 | ---
language:
- en
tags:
- harry-potter
license: mit
---
# Harry Potter Fanfiction Generator
This is a pre-trained GPT-2 generative text model that allows you to generate your own Harry Potter fanfiction, trained on the top 100 rated fanfiction stories. We intend for this to be used for individual fun and experimentation and not as a commercial product.
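A minimal generation sketch (not part of the original card; the prompt and sampling settings are illustrative):
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="ceostroff/harry-potter-gpt2-fanfiction")
set_seed(42)  # make the sample reproducible

prompt = "Harry stared at the Marauder's Map and saw"
print(generator(prompt, max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```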
|
deepset/bert-base-german-cased-sentiment-Germeval17 | ece355d4692fd2233094b5b491d0a585526ae5a7 | 2021-05-19T15:27:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | deepset | null | deepset/bert-base-german-cased-sentiment-Germeval17 | 143 | 3 | transformers | 4,076 | Entry not found |
gargam/roberta-base-crest | 1fabf3dd2c00917840c3dbf5222659f1c8cd3842 | 2021-05-20T16:31:49.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | gargam | null | gargam/roberta-base-crest | 143 | null | transformers | 4,077 | Entry not found |
sachin/vit2distilgpt2 | 51be2b2bcadf5f37a4b2da6466f64e764c85add7 | 2022-01-27T12:15:27.000Z | [
"pytorch",
"vision-encoder-decoder",
"en",
"dataset:coco2017",
"transformers",
"image-to-text",
"license:mit"
] | image-to-text | false | sachin | null | sachin/vit2distilgpt2 | 143 | 6 | transformers | 4,078 | ---
language:
- en
tags:
- image-to-text
license: mit
datasets:
- coco2017
---
# Vit2-DistilGPT2
This model takes in an image and outputs a caption. It was trained using the Coco dataset and the full training script can be found in [this kaggle kernel](https://www.kaggle.com/sachin/visionencoderdecoder-model-training)
## Usage
```python
from PIL import Image
from transformers import AutoModel, GPT2Tokenizer, ViTFeatureExtractor
model = AutoModel.from_pretrained("sachin/vit2distilgpt2")
vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
# make sure GPT2 appends EOS in begin and end
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
return outputs
GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token
image = vit_feature_extractor(Image.open(image_path).convert("RGB"), return_tensors="pt").pixel_values  # image_path points to your input image
encoder_outputs = model.generate(image)
generated_sentences = gpt2_tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
```
Note that the output sentence may be repeated, hence a post processing step may be required.
## Bias Warning
This model may be biased due to dataset, lack of long training and the model itself. The following gender bias is an example.

## Results
<iframe src="https://wandb.ai/sachinruk/Vit2GPT2/reports/Shared-panel-22-01-27-23-01-56--VmlldzoxNDkyMTM3?highlightShare" style="border:none;height:1024px;width:100%">
|
seyonec/BPE_SELFIES_PubChem_shard00_150k | 4f2cf5d53ee98bf947bc3874243ddc5a7fdedbc2 | 2021-05-20T20:44:59.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/BPE_SELFIES_PubChem_shard00_150k | 143 | null | transformers | 4,079 | Entry not found |
PoloHuggingface/French_grammar_error_corrector | b46c8948a54491fb6d1448c2cba868cb0d01a644 | 2022-05-03T13:32:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fr",
"transformers",
"text2text generation",
"autotrain_compatible"
] | text2text-generation | false | PoloHuggingface | null | PoloHuggingface/French_grammar_error_corrector | 143 | 2 | transformers | 4,080 | ---
language:
- fr
tags:
- text2text generation
widget:
- text: "improve grammar: Elle ne peux jamais aller au cinéma avec son amis"
example_title: "Grammar correction"
---
# Fine-tuned T5 on the French part of Lang-8 to automatically correct sentences.
Since the Lang-8 dataset contains very short sentences, the model does not generalize well to sentences longer than 10 words.
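A minimal usage sketch (not part of the original card), reusing the widget example above with its `improve grammar:` prefix:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "PoloHuggingface/French_grammar_error_corrector"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "improve grammar: Elle ne peux jamais aller au cinéma avec son amis"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```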
I'll soon upload the cleaned dataset that I used for training. |
oliverguhr/fullstop-punctuation-multilingual-sonar-base | a4b84ab2a2de103ddc13f5ccf8b60d11dd10368f | 2022-05-17T08:15:02.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"en",
"de",
"fr",
"it",
"nl",
"dataset:wmt/europarl",
"transformers",
"punctuation prediction",
"punctuation",
"license:mit",
"autotrain_compatible"
] | token-classification | false | oliverguhr | null | oliverguhr/fullstop-punctuation-multilingual-sonar-base | 143 | null | transformers | 4,081 | ---
language:
- en
- de
- fr
- it
- nl
tags:
- punctuation prediction
- punctuation
datasets: wmt/europarl
license: mit
widget:
- text: "Ho sentito che ti sei laureata il che mi fa molto piacere"
example_title: "Italian"
- text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre"
example_title: "French"
- text: "Ist das eine Frage Frau Müller"
example_title: "German"
- text: "My name is Clara and I live in Berkeley California"
example_title: "English"
- text: "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
example_title: "Dutch"
metrics:
- f1
---
# Work in progress |
djagatiya/ner-bert-base-cased-ontonotesv5-englishv4 | eefd7c2a76d27e8ed21b549ccf6d1641b4c6d520 | 2022-07-03T11:34:55.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:djagatiya/ner-ontonotes-v5-eng-v4",
"transformers",
"autotrain_compatible"
] | token-classification | false | djagatiya | null | djagatiya/ner-bert-base-cased-ontonotesv5-englishv4 | 143 | null | transformers | 4,082 | ---
tags:
- token-classification
task_ids:
- named-entity-recognition
datasets:
- djagatiya/ner-ontonotes-v5-eng-v4
widget:
- text: "On September 1st George won 1 dollar while watching Game of Thrones."
---
# (NER) bert-base-cased : conll2012_ontonotesv5-english-v4
This `bert-base-cased` NER model was fine-tuned on the `conll2012_ontonotesv5` dataset, version `english-v4`. <br>
Check out [NER-System Repository](https://github.com/djagatiya/NER-System) for more information.
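A minimal inference sketch (not from the original card), using the widget sentence above; `aggregation_strategy="simple"` merges word pieces into entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="djagatiya/ner-bert-base-cased-ontonotesv5-englishv4",
    aggregation_strategy="simple",
)
print(ner("On September 1st George won 1 dollar while watching Game of Thrones."))
```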
## Evaluation
- Precision: 87.85
- Recall: 89.63
- F1-Score: 88.73
> check out this [eval.log](eval.log) file for evaluation metrics and classification report.
```
precision recall f1-score support
CARDINAL 0.86 0.87 0.86 935
DATE 0.84 0.88 0.86 1602
EVENT 0.65 0.67 0.66 63
FAC 0.69 0.71 0.70 135
GPE 0.97 0.93 0.95 2240
LANGUAGE 0.76 0.73 0.74 22
LAW 0.54 0.55 0.54 40
LOC 0.73 0.80 0.76 179
MONEY 0.87 0.90 0.88 314
NORP 0.93 0.96 0.94 841
ORDINAL 0.80 0.87 0.83 195
ORG 0.88 0.90 0.89 1795
PERCENT 0.88 0.90 0.89 349
PERSON 0.94 0.95 0.94 1988
PRODUCT 0.62 0.76 0.69 76
QUANTITY 0.74 0.81 0.77 105
TIME 0.61 0.67 0.64 212
WORK_OF_ART 0.56 0.66 0.61 166
micro avg 0.88 0.90 0.89 11257
macro avg 0.77 0.81 0.79 11257
weighted avg 0.88 0.90 0.89 11257
``` |
deepset/deberta-v3-large-squad2 | 54d10aab0793cc5eb1a3403bdd2282020cedbdb9 | 2022-07-25T13:29:54.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"deberta",
"deberta-v3",
"deberta-v3-large",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | deepset | null | deepset/deberta-v3-large-squad2 | 143 | 4 | transformers | 4,083 | ---
language: en
datasets:
- squad_v2
license: cc-by-4.0
tags:
- deberta
- deberta-v3
- deberta-v3-large
model-index:
- name: deepset/deberta-v3-large-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 88.0876
verified: true
- name: F1
type: f1
value: 91.1623
verified: true
---
# deberta-v3-large for QA
This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** deberta-v3-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 1x NVIDIA A10G
## Hyperparameters
```
batch_size = 2
grad_acc_steps = 32
n_epochs = 6
base_LM_model = "microsoft/deberta-v3-large"
max_seq_len = 512
learning_rate = 7e-6
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/deberta-v3-large-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/deberta-v3-large-squad2",tokenizer="deepset/deberta-v3-large-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/deberta-v3-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 87.6105449338836,
"f1": 90.75307008866517,
"total": 11873,
"HasAns_exact": 84.37921727395411,
"HasAns_f1": 90.6732795483674,
"HasAns_total": 5928,
"NoAns_exact": 90.83263246425568,
"NoAns_f1": 90.83263246425568,
"NoAns_total": 5945
```
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
IMSyPP/hate_speech_it | 3bf99526e30770a40cd4656fd87a0c95a8a050cb | 2022-05-16T06:13:29.000Z | [
"pytorch",
"bert",
"text-classification",
"it",
"transformers",
"license:mit"
] | text-classification | false | IMSyPP | null | IMSyPP/hate_speech_it | 142 | null | transformers | 4,084 | ---
widget:
- text: "Ciao, mi chiamo Marcantonio, sono di Roma. Studio informatica all'Università di Roma."
language:
- it
license: mit
---
# Hate Speech Classifier for Social Media Content in Italian Language
A monolingual model for hate speech classification of social media content in the Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on the Italian ALBERTO pre-trained language model.
## Tokenizer
During training, the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest using the same tokenizer for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent
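
For example, a minimal classification sketch with the `transformers` pipeline (the exact label names returned depend on the model configuration):

```python
# Minimal sketch: classify an Italian comment with this model;
# the predicted class index follows the list above (0 acceptable ... 3 violent).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="IMSyPP/hate_speech_it",
    tokenizer="IMSyPP/hate_speech_it",
)
print(classifier("Ciao, mi chiamo Marcantonio, sono di Roma."))
```
|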
Narrativaai/fake-news-detection-spanish | 0eab6d956dbfc035dfd4ef2d0d40700876401ad3 | 2021-10-28T11:03:28.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"es",
"dataset:fakedes",
"transformers",
"generated_from_trainer",
"fake",
"news",
"competition",
"model-index"
] | text-classification | false | Narrativaai | null | Narrativaai/fake-news-detection-spanish | 142 | 3 | transformers | 4,085 | ---
language: es
tags:
- generated_from_trainer
- fake
- news
- competition
datasets:
- fakedes
widget:
- text: 'La palabra "haiga", aceptada por la RAE [SEP] La palabra "haiga", aceptada por la RAE La Real Academia de la Lengua (RAE), ha aceptado el uso de "HAIGA", para su utilización en las tres personas del singular del presente del subjuntivo del verbo hacer, aunque asegura que la forma más recomendable en la lengua culta para este tiempo, sigue siendo "haya".
Así lo han confirmado fuentes de la RAE, que explican que este cambio ha sido propuesto y aprobado por el pleno de la Academia de la Lengua, tras la extendida utilización por todo el territorio nacional, sobre todo, empleado por personas carentes de estudios o con estudios básicos de graduado escolar. Ya no será objeto de burla ese compañero que a diario repite aquello de "Mientras que haiga faena, no podemos quejarnos" o esa abuela que repite aquello de "El que haiga sacao los juguetes, que los recoja".
Entre otras palabras novedosas que ha aceptado la RAE, contamos también con "Descambiar", significa deshacer un cambio, por ejemplo "devolver la compra". Visto lo visto, nadie apostaría que la palabra "follamigos" sea la siguiente de la lista.'
metrics:
- f1
- accuracy
model-index:
- name: roberta-large-fake-news-detection-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-large-fake-news-detection-spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on a [Spanish Fake News Dataset](https://sites.google.com/view/iberlef2020/#h.p_w0c31bn0r-SW).
It achieves the following results on the evaluation set:
- Loss: 1.7474
- F1: **0.7717**
- Accuracy: 0.7797
> So, based on the [leaderboard](https://sites.google.com/view/fakedes/results?authuser=0), our model **outperforms** the best reported model (F1 = 0.7666).
## Model description
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019.
## Intended uses & limitations
The objective of this task is to decide if a news item is fake or real by analyzing its textual representation.
## Training and evaluation data
**FakeDeS**: [Fake News Detection in Spanish Shared Task](https://sites.google.com/view/fakedes/home)
Fake news provides information that aims to manipulate people for different purposes: terrorism, political elections, advertisement, satire, among others. In social networks, misinformation spreads within seconds among thousands of people, so it is necessary to develop tools that help control the amount of false information on the web. Related tasks are the detection of popularity in social networks and the detection of subjectivity of messages in these media. A fake news detection system aims to help users detect and filter out potentially deceptive news. The prediction of intentionally misleading news is based on the analysis of previously reviewed truthful and fraudulent news, i.e., annotated corpora.
The Spanish Fake News Corpus is a collection of news compiled from several web sources: established newspaper websites, media company websites, special websites dedicated to validating fake news, and websites designated by different journalists as sites that regularly publish fake news. The news items were collected from January to July 2018, and all of them were written in Mexican Spanish.
The corpus has 971 news items collected from January to July 2018 from different sources:
- Established newspaper websites,
- Media company websites,
- Special websites dedicated to validating fake news,
- Websites designated by different journalists as sites that regularly publish fake news.
The corpus was tagged with only two classes (true or fake), following a manual labeling process:
- A news item is true if there is evidence that it has been published on reliable sites.
- A news item is fake if there is news from reliable sites or specialized fake-news detection websites that contradicts it, or if no other evidence was found about the news besides the source.
- We collected the true-fake news pair for each event, so there is a correlation between news items in the corpus.
In order to avoid topic bias, the corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. As can be seen in the table below, the number of fake and true news items is quite balanced. Approximately 70% is used as the training corpus (676 news items) and 30% as the testing corpus (295 news items).
The training corpus contains the following information:
- Category: Fake/ True
- Topic: Science/ Sport/ Economy/ Education/ Entertainment/ Politics, Health/ Security/ Society
- Headline: The title of the news.
- Text: The complete text of the news.
- Link: The URL where the news was published.
## Training procedure
TBA
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 243 | 0.6282 | 0.7513 | 0.75 |
| No log | 2.0 | 486 | 0.9600 | 0.7346 | 0.7587 |
| 0.5099 | 3.0 | 729 | 1.2128 | 0.7656 | 0.7570 |
| 0.5099 | 4.0 | 972 | 1.4001 | 0.7606 | 0.7622 |
| 0.1949 | 5.0 | 1215 | 1.9748 | 0.6475 | 0.7220 |
| 0.1949 | 6.0 | 1458 | 1.7386 | 0.7706 | 0.7710 |
| 0.0263 | 7.0 | 1701 | 1.7474 | 0.7717 | 0.7797 |
| 0.0263 | 8.0 | 1944 | 1.8114 | 0.7695 | 0.7780 |
| 0.0046 | 9.0 | 2187 | 1.8444 | 0.7709 | 0.7797 |
| 0.0046 | 10.0 | 2430 | 1.8552 | 0.7709 | 0.7797 |
### Fast usage with HF `pipelines`
```python
from transformers import pipeline
ckpt = "Narrativaai/fake-news-detection-spanish"
classifier = pipeline("text-classification", model=ckpt)
headline = "Your headline"
text = "Your article text here..."
classifier(headline + " [SEP] " + text)
```
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
Sahajtomar/NER_legal_de | 4db2b2c5b19c24ff4c0064b07bdf5f97359d5951 | 2021-05-18T22:27:00.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"de",
"dataset:legal entity recognition",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | Sahajtomar | null | Sahajtomar/NER_legal_de | 142 | null | transformers | 4,086 | ---
language: de
tags:
- pytorch
- tf
- bert
- NER
datasets:
- legal entity recognition
---
### NER model trained on BERT
The model used for fine-tuning is GBERT Large by deepset.ai.
## Test
Accuracy: 98 \
F1: 84.1 \
Precision: 82.7 \
Recall: 85.5
## Model inference
```python
!pip install -q transformers
from transformers import pipeline
ner = pipeline(
"ner",
model="Sahajtomar/NER_legal_de",
tokenizer="Sahajtomar/NER_legal_de")
ner("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO hat der \
Antragsteller keine Anhaltspunkte vorgetragen .")
```
|
cambridgeltl/trans-encoder-bi-simcse-bert-large | 81194d51c150bd001ec10bab083fdcd04d315aef | 2021-11-26T18:26:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2109.13059",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/trans-encoder-bi-simcse-bert-large | 142 | null | transformers | 4,087 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
---
### cambridgeltl/trans-encoder-bi-simcse-bert-large
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-bert-large-uncased](https://huggingface.co/princeton-nlp/unsup-simcse-bert-large-uncased) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
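
For example, a minimal sketch of obtaining such `[CLS]` (before pooler) representations with `transformers`:

```python
# Minimal sketch: encode sentences and use the [CLS] hidden state
# (before the pooler) as the sentence embedding.
import torch
from transformers import AutoTokenizer, AutoModel

name = "cambridgeltl/trans-encoder-bi-simcse-bert-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sentences = ["A cat sits on the mat.", "A kitten is resting on the rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state[:, 0] is the [CLS] token representation before pooling
embeddings = outputs.last_hidden_state[:, 0]
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```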
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
cointegrated/rubert-base-cased-nli-twoway | bb84213f0b1b56fedaf025f8cbc54583b0393774 | 2021-10-10T11:08:15.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"transformers",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification"
] | zero-shot-classification | false | cointegrated | null | cointegrated/rubert-base-cased-nli-twoway | 142 | null | transformers | 4,088 | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
hypothesis_template: "Тема текста - {}."
---
# RuBERT for NLI (natural language inference)
This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
For more details, see the card for a similar model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
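
For example, a minimal zero-shot classification sketch that mirrors the widget above:

```python
# Minimal sketch: zero-shot topic classification with this two-way NLI model,
# using the same candidate labels and hypothesis template as the widget above.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cointegrated/rubert-base-cased-nli-twoway",
)
result = classifier(
    "Я хочу поехать в Австралию",
    candidate_labels=["спорт", "путешествия", "музыка", "кино", "книги", "наука", "политика"],
    hypothesis_template="Тема текста - {}.",
)
print(result["labels"][0])  # the most likely topic
```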
|
junnyu/roformer_chinese_small | ff58affc8d7472c3b792fad18f68baa8490672b5 | 2022-01-03T15:44:37.000Z | [
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"zh",
"arxiv:2104.09864",
"transformers",
"tf2.0",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/roformer_chinese_small | 142 | 2 | transformers | 4,089 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TensorFlow 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_small")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_small")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天气||心情||感觉||环境||下午]很好,我[要||想||就||可以||去]去公园玩。
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_small")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_small")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0 今天[天气||心情||感觉||环境||下午]很好,我[要||想||就||可以||去]去公园玩。
```
## Citation
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
salesken/paraphrase_diversity_ranker | f1547a5d7f3c7fab96b346ec682f9049ff52999e | 2021-05-20T20:05:19.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers",
"salesken",
"license:apache-2.0"
] | text-classification | false | salesken | null | salesken/paraphrase_diversity_ranker | 142 | 1 | transformers | 4,090 | ---
tags: salesken
license: apache-2.0
inference: false
---
We have trained a model to evaluate whether a paraphrase is a semantic variation of the input query or just a surface-level variation. Data augmentation with surface-level variations does not add much value to NLP model training. If the approach to paraphrase generation is "over-generate and rank", it is important to have a robust model for scoring and ranking paraphrases. NLG metrics such as BLEU, BLEURT, GLEU, and METEOR have not proved very effective at scoring paraphrases.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
input_query = ["tough challenges make you stronger."]
paraphrases = [
"tough problems make you stronger",
"tough problems will make you stronger",
"tough challenges make you stronger",
"tough challenges will make you a stronger person",
"tough challenges will make you stronger",
"tough tasks make you stronger",
"the tough task makes you stronger",
"tough stuff makes you stronger",
"if tough times make you stronger",
"the tough part makes you stronger",
"tough issues strengthens you",
"tough shit makes you stronger",
"tough tasks force you to be stronger",
"tough challenge is making you stronger",
"tough problems make you have more strength"]
para_pairs=list(pd.MultiIndex.from_product([input_query, paraphrases]))
features = tokenizer(para_pairs, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
sorted_diverse_paraphrases= np.array(para_pairs)[scores[:,1].sort(descending=True).indices].tolist()
print(sorted_diverse_paraphrases)
# to identify the type of paraphrase (surface-level variation or semantic variation)
print("Paraphrase type detection=====", list(zip(para_pairs, labels)))
```
============================================================================
For more robust results, first filter out the paraphrases that are not semantically
similar using a model trained on NLI and STS tasks, and then apply the ranker.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sentence_transformers import SentenceTransformer, util
import torch
import pandas as pd
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
embedder = SentenceTransformer('stsb-bert-large')
input_query = ["tough challenges make you stronger."]
paraphrases = [
"tough problems make you stronger",
"tough problems will make you stronger",
"tough challenges make you stronger",
"tough challenges will make you a stronger person",
"tough challenges will make you stronger",
"tough tasks make you stronger",
"the tough task makes you stronger",
"tough stuff makes you stronger",
"tough people make you stronger",
"if tough times make you stronger",
"the tough part makes you stronger",
"tough issues strengthens you",
"tough shit makes you stronger",
"tough tasks force you to be stronger",
"tough challenge is making you stronger",
"tough problems make you have more strength"]
corpus_embeddings = embedder.encode(paraphrases, convert_to_tensor=True)
query_embedding = embedder.encode(input_query, convert_to_tensor=True)
cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
para_set=np.array(paraphrases)
a=cos_scores.sort(descending=True)
para= para_set[a.indices[a.values>=0.7].cpu()].tolist()
para_pairs=list(pd.MultiIndex.from_product([input_query, para]))
features = tokenizer(para_pairs, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
sorted_diverse_paraphrases= np.array(para)[scores[:,1].sort(descending=True).indices].tolist()
print("Paraphrases sorted by diversity:=======",sorted_diverse_paraphrases)
# to identify the type of paraphrase (surface-level variation or semantic variation)
print("Paraphrase type detection=====", list(zip(para_pairs, labels)))
``` |
snunlp/KR-Medium | 2816fd37f5338827a265243a9abb59b2ac815099 | 2021-11-22T06:19:42.000Z | [
"pytorch",
"jax",
"bert",
"ko",
"transformers"
] | null | false | snunlp | null | snunlp/KR-Medium | 142 | 3 | transformers | 4,091 | ---
language:
- ko
---
# KR-BERT-MEDIUM
A pretrained Korean-specific BERT model developed by Computational Linguistics Lab at Seoul National University.
It is based on our character-level [KR-BERT](https://github.com/snunlp/KR-BERT) model, which utilizes a WordPiece tokenizer.
Here, the model name carries the suffix 'MEDIUM' because its training data grew beyond KR-BERT's original dataset. We also have an additional model, KR-BERT-EXPANDED, whose training data is expanded further from that of KR-BERT-MEDIUM; hence the suffix 'MEDIUM'.
<br>
### Vocab, Parameters and Data
| | Multilingual BERT<br>(Google) | KorBERT<br>(ETRI) | KoBERT<br>(SKT) | KR-BERT character | KR-BERT-MEDIUM |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
| vocab size | 119,547 | 30,797 | 8,002 | 16,424 | 20,000 |
| parameter size | 167,356,416 | 109,973,391 | 92,186,880 | 99,265,066 | 102,015,010 |
| data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 12.37GB<br>91M sentences,<br>1.17B words |
<br>
The training data for this model expands on that of KR-BERT (texts from Korean Wikipedia and news articles) with the addition of legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). This expansion collects texts from a wider variety of domains than KR-BERT. The total data size is about 12.37GB, consisting of 91M sentences and 1.17B words.
The user-generated comment dataset is expected to have stylistic properties similar to those of the NSMC and HSD task datasets. Such text includes abbreviations, coinages, emoticons, spacing errors, and typos. We therefore added this dataset, with its online-specific properties, to our existing formal data (news articles and Wikipedia texts) to compose the training data for KR-BERT-MEDIUM. Accordingly, KR-BERT-MEDIUM achieved better results in sentiment analysis than the other models, and performance improved with the larger and more varied training data.
This model's vocabulary size is 20,000; its tokens were trained on the expanded training data using the WordPiece tokenizer.
KR-BERT-MEDIUM was trained for 2M steps with a maximum sequence length of 128, a training batch size of 64, and a learning rate of 1e-4, taking 22 hours on a Google Cloud TPU v3-8.
### Models
#### TensorFlow
* BERT tokenizer, character-based model ([download](https://drive.google.com/file/d/1OWXGqr2Z2PWD6ST3MsFmcjM8c2mr8PkE/view?usp=sharing))
#### PyTorch
* You can import it from Transformers!
```sh
# pytorch, transformers
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-Medium", do_lower_case=False)
model = AutoModel.from_pretrained("snunlp/KR-Medium")
```
### Requirements
- transformers == 4.0.0
- tensorflow < 2.0
## Downstream tasks
* Movie Review Classification on Naver Sentiment Movie Corpus [(NSMC)](https://github.com/e9t/nsmc)
* Hate Speech Detection [(Moon et al., 2020)](https://github.com/kocohub/korean-hate-speech)
#### tensorflow
* After downloading our pre-trained models, put them in a `models` directory.
* Set the output directory (for fine-tuning)
* Select task name: `NSMC` for Movie Review Classification, and `HATE` for Hate Speech Detection
```sh
# tensorflow
python3 run_classifier.py \
--task_name={NSMC, HATE} \
--do_train=true \
--do_eval=true \
--do_predict=true \
--do_lower_case=False\
--max_seq_length=128 \
--train_batch_size=128 \
--learning_rate=5e-05 \
--num_train_epochs=5.0 \
--output_dir={output_dir}
```
<br>
### Performances
TensorFlow, test set performances
| | multilingual BERT | KorBERT<br>character | KR-BERT<br>character<br>WordPiece | KR-BERT-MEDIUM |
|:-----:|-------------------:|----------------:|----------------------------:|-----------------------------------------:|
| NSMC (Acc) | 86.82 | 89.81 | 89.74 | 90.29 |
| Hate Speech (F1) | 52.03 | 54.33 | 54.53 | 57.91 |
<br>
## Contacts
[email protected]
|
thu-coai/CDial-GPT_LCCC-large | d0d1614dbd3982c715c672998f5f498246cb041f | 2020-12-23T05:56:25.000Z | [
"pytorch",
"transformers"
] | null | false | thu-coai | null | thu-coai/CDial-GPT_LCCC-large | 142 | 3 | transformers | 4,092 | # CDial-GPT_LCCC-large
https://github.com/thu-coai/CDial-GPT |
typeform/distilroberta-base-v2 | d3a8651147e18a6145d67d445c4b5229682dcae2 | 2021-05-20T22:46:35.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"en",
"dataset:openwebtext",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | typeform | null | typeform/distilroberta-base-v2 | 142 | null | transformers | 4,093 | ---
language: en
license: apache-2.0
datasets:
- openwebtext
---
# DistilRoBERTa base model
Forked from https://huggingface.co/distilroberta-base
|
ydshieh/tiny-random-gptj-for-question-answering | c804c9baa450871dcafe5ce20b54a066f3b889f1 | 2022-04-08T10:21:07.000Z | [
"pytorch",
"tf",
"gptj",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ydshieh | null | ydshieh/tiny-random-gptj-for-question-answering | 142 | 1 | transformers | 4,094 | Entry not found |
sarakolding/daT5-summariser | 190e1505acc55da54bdf32016bcb06387a526a2f | 2022-07-05T09:45:46.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"da",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | sarakolding | null | sarakolding/daT5-summariser | 142 | 6 | transformers | 4,095 | ---
language:
- da
tags:
- summarization
widget:
- text: "Det nye studie Cognitive Science på Aarhus Universitet, som i år havde Østjyllands højeste adgangskrav på 11,7 i karaktergennemsnit, udklækker det første hold bachelorer til sommer.
Men når de skal læse videre på kandidaten må de til udlandet, hvis ikke de vil skifte til et andet fag. Aarhus Universitet kan nemlig ikke nå at oprette en kandidat i Cognitive Science til næste sommer, hvor det første hold bachelorer er færdige.
Det rammer blandt andre Julie Sohn, der startede på uddannelsen i sommeren 2015, og derfor kun mangler et år, før hun er bachelor.
- Jeg synes, at det er ærgerligt, at vi som nye studerende på et populært studie ikke kan tage en kandidat i Danmark, siger hun.
Bacheloruddannelsen i Cognitive Science blev oprettet af Aarhus Universitet i 2015, og uddannelsen kombinerer viden om menneskelig adfærd med avanceret statistik. Da der endnu ikke er oprettet en kandidatuddannelse indenfor dette område, har Julie Sohn i stedet mulighed for at læse en kandidatgrad i for eksempel informationsvidenskab.
Hun vil dog hellere fortsætte på Cognitive Science, og derfor overvejer hun nu at læse videre i udlandet.
- Det ser ud til, at det er den eneste mulighed, hvis man gerne vil læse videre på noget, der faktisk passer ind til vores studie, siger hun.
Nye regler giver forsinkelse
På Aarhus Universitet havde man håbet på at have kandidatuddannelsen klar, når det første hold bachelorer bliver færdige til sommer. Arbejdet er dog blevet forsinket, fordi der er kommet nye regler for, hvornår man må oprette en uddannelse, fortæller Niels Lehmann, prodekan på fakultetet Arts, som Cognitive Science hører under.
Det er nogle meget dygtige studerende, der kommer ind på uddannelsen, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast.
NIELS LEHMANN, PRODEKAN, AARHUS UNIVERSITET
Tidligere skulle Danmarks Akkrediteringsinstitution se alle nye uddannelser efter i sømmene for at sikre, at kvaliteten var i orden. Nu skal uddannelsesinstitutionerne selv stå for det kvalitetstjek.
Men det tjek har Aarhus Universitet endnu ikke fået grønt lys til selv at udføre, fortæller prodekanen.
- Vi ville meget gerne have kunnet nå at få et udbud på kandidaten i gang i 2018, men så længe man er under institutionsakkreditering, så kan man ikke ansøge om nye uddannelser, siger han.
Det er endnu usikkert, hvornår Aarhus Universitet kan oprette kandidaten i Cognitive Science. Hvis de får alle de nødvendige godkendelser, kan den tidligst være klar i 2019.
Prodekan Niels Lehmann frygter, at Danmark kommer til at miste nogle af landets skarpeste studerende, hvis de er nødt til at rejse til udlandet for at gøre deres uddannelse færdig.
- Det er nogle meget, meget dygtige studerende, der kommer ind på denne uddannelse, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast, siger han.
Hos Danmarks Akkrediteringsinstitution forstår man godt, at universitets ansatte og studenrede ærgrer sig.
- Jeg kan godt forstå, at Aarhus Universitet ærgrer sig over, at det trækker ud, og at der går noget tid, før man får mulighed for at oprette nye uddannelser, og at man ikke har fået den genvej til at oprette nye uddannelser, som ville være fuldt med, hvis man havde opnået en positiv institutionsakkreditering, siger kommunikationsansvarlig Daniel Sebastian Larsen.
I år var Cognitive Science i Aarhus den uddannelse i Danmark, der havde det fjerde højeste karakterkrav - det højeste var 'AP Graduate in Marketing Management' på Erhvervsakademi Sjælland med et krav på 12,3."
example_title: "Summarization"
---
This repository contains a model for Danish abstractive summarisation of news articles. The summariser is based on a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English. The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2019).
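
For example, a minimal summarisation sketch with the `transformers` pipeline (generation parameters are illustrative):

```python
# Minimal sketch: summarise a Danish news article with this model.
from transformers import pipeline

summariser = pipeline("summarization", model="sarakolding/daT5-summariser")

article = "Det nye studie Cognitive Science på Aarhus Universitet ..."  # full article text
summary = summariser(article, max_length=128, truncation=True)
print(summary[0]["summary_text"])
```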
|
psyche/KoT5-paraphrase-generation | 929af1dcc025667315a3a56196b92feb3d24ccdf | 2022-06-19T15:39:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | psyche | null | psyche/KoT5-paraphrase-generation | 142 | null | transformers | 4,096 | ---
languages:
- ko
license: apache-2.0
---
More information about KoT5: https://github.com/wisenut-research/KoT5 |
lucataco/DialogGPT-med-Rick | f753e6506a9d52f3609307ffa77dbc7a15494ebd | 2022-06-28T23:17:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lucataco | null | lucataco/DialogGPT-med-Rick | 142 | null | transformers | 4,097 | ---
tags:
- conversational
---
# Rick Dialog GPT Model Medium 12
# Trained on:
# Kaggle Rick and Morty TV transcripts |
shatabdi/twisent_twisent | ab00a8dbe09f44216edba796318345482c52a91f | 2022-06-24T00:40:53.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | shatabdi | null | shatabdi/twisent_twisent | 142 | null | transformers | 4,098 | ---
tags:
- generated_from_trainer
model-index:
- name: twisent_twisent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twisent_twisent
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
trickstters/evbot2 | bb110a8e747a6db9b4a47ec82a26e94fa0057270 | 2022-07-28T08:04:46.000Z | [
"pytorch",
"conversational"
] | conversational | false | trickstters | null | trickstters/evbot2 | 142 | null | null | 4,099 | ---
tags:
- conversational
---
# a |