modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
junnyu/roformer_chinese_sim_char_small | 4cc0b8dbc73cda2fada77a8b69878ccdcb667d2d | 2022-04-15T03:52:19.000Z | [
"pytorch",
"roformer",
"text-generation",
"zh",
"transformers",
"tf2.0"
] | text-generation | false | junnyu | null | junnyu/roformer_chinese_sim_char_small | 36 | null | transformers | 6,700 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
inference: False
---
# Installation
- pip install roformer==0.4.3
# Usage
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
    '''Generate n paraphrases of the input sentence, then return the k most similar ones.
    Method: generate candidates with seq2seq, then score and rank them with the encoder.
    '''
    # generate candidate similar sentences
    r = []
    inputs1 = tokenizer(text, return_tensors="pt")
    for _ in range(n):
        inputs1.to(device)
        output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ", "").replace(text, "")  # strip spaces and the original input text
        r.append(output)
    # rank the candidate sentences by similarity to the input
    r = [i for i in set(r) if i != text and len(i) > 0]
    r = [text] + r
    inputs2 = tokenizer(r, padding=True, return_tensors="pt")
    with torch.no_grad():
        inputs2.to(device)
        outputs = model(**inputs2)
        Z = outputs.pooler_output.cpu().numpy()
    Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
    argsort = np.dot(Z[1:], -Z[0]).argsort()
    return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
``` |
lannelin/bert-imdb-1hidden | 7808be7790ceb59489081af1a72b7416a482c71a | 2022-07-13T15:17:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:imdb",
"transformers"
] | text-classification | false | lannelin | null | lannelin/bert-imdb-1hidden | 36 | null | transformers | 6,701 | ---
language:
- en
datasets:
- imdb
metrics:
- accuracy
---
# bert-imdb-1hidden
## Model description
A `bert-base-uncased` model was restricted to 1 hidden layer and
fine-tuned for sequence classification on the
imdb dataset loaded using the `datasets` library.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained = "lannelin/bert-imdb-1hidden"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
LABELS = ["negative", "positive"]
def get_sentiment(text: str):
    inputs = tokenizer.encode_plus(text, return_tensors='pt')
    output = model(**inputs)[0].squeeze()
    return LABELS[(output.argmax())]
print(get_sentiment("What a terrible film!"))
```
#### Limitations and bias
No special consideration given to limitations and bias.
Any bias held by the imdb dataset may be reflected in the model's output.
## Training data
Initialised with [bert-base-uncased](https://huggingface.co/bert-base-uncased)
Fine tuned on [imdb](https://huggingface.co/datasets/imdb)
## Training procedure
The model was fine-tuned for 1 epoch with a batch size of 64,
a learning rate of 5e-5, and a maximum sequence length of 512.
## Eval results
Accuracy on imdb test set: 0.87132 |
lighteternal/SSE-TUC-mt-en-el-cased | d7a1738e8f8aca831f87e0652f1796eeb5f46ce0 | 2021-03-31T17:27:05.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"en",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | lighteternal | null | lighteternal/SSE-TUC-mt-en-el-cased | 36 | null | transformers | 6,702 | ---
language:
- en
- el
tags:
- translation
widget:
- text: "'Katerina', is the best name for a girl."
license: apache-2.0
metrics:
- bleu
---
## English to Greek NMT
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: en
* target languages: el
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
### Model description
Trained using the Fairseq framework, transformer_iwslt_de_en architecture. BPE segmentation (20k codes). Mixed-case model.
### How to use
```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = "lighteternal/SSE-TUC-mt-en-el-cased"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = " 'Katerina', is the best name for a girl."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
    i += 1
    print(f"{i}: {output.tolist()}")
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 76.9 | 0.733 |
Results on XNLI parallel (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 65.4 | 0.624 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
|
lighteternal/SSE-TUC-mt-en-el-lowercase | 0fe36048867527fd75f08d0a8df723e8a60a1484 | 2021-03-31T17:27:32.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"en",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | lighteternal | null | lighteternal/SSE-TUC-mt-en-el-lowercase | 36 | null | transformers | 6,703 | ---
language:
- en
- el
tags:
- translation
widget:
- text: "Not all those who wander are lost."
license: apache-2.0
metrics:
- bleu
---
## English to Greek NMT (lower-case output)
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: en
* target languages: el
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + lower-casing + BPE segmentation
* metrics: bleu, chrf
* output: lowercase only, for mixed-cased model use this: https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased
### Model description
Trained using the Fairseq framework, transformer_iwslt_de_en architecture. BPE segmentation (10k codes). Lower-case model.
### How to use
```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = " <your_downloaded_model_folderpath_here> "
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = "Not all those who wander are lost."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs):
    i += 1
    print(f"{i}: {output.tolist()}")
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 77.3 | 0.739 |
Results on XNLI parallel (EN-EL):
| BLEU | chrF |
| ------ | ------ |
| 66.1 | 0.606 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
|
m3hrdadfi/albert-fa-base-v2-sentiment-digikala | 2104f211f5ba3012d5cfd1ef72fc2791cb52eaa9 | 2020-12-26T08:48:33.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2-sentiment-digikala | 36 | null | transformers | 6,704 | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "Little BERT" (برت_کوچولو).
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news), with more than 3.9M documents, 73M sentences, and 1.3B words, in the same way as we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
The task is to classify text, such as user comments, according to its emotional polarity. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, the latter in both binary and multi-class forms.
### Digikala
Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:
| Label | # |
|:---------------:|:------:|
| no_idea | 10394 |
| not_recommended | 15885 |
| recommended | 36042 |
**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)
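A minimal usage sketch with the `transformers` pipeline (illustrative only, not taken from the ALBERT-Persian repository; the example comment is hypothetical and the returned label names depend on the model's configuration):
```python
from transformers import pipeline

# load the fine-tuned ALBERT sentiment classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-sentiment-digikala",
)
# hypothetical Persian comment: "I recommend this product to everyone"
print(classifier("این محصول را به همه پیشنهاد می‌کنم"))
```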
## Results
The following table summarizes the F1 score obtained as compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.12 | 81.74 | 80.74 | - |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
microsoft/xprophetnet-large-wiki100-cased-xglue-qg | 635f3dae12109f196f068a416669d5f069dd1c34 | 2020-12-11T21:51:14.000Z | [
"pytorch",
"xlm-prophetnet",
"text2text-generation",
"arxiv:2001.04063",
"arxiv:2004.01401",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | microsoft | null | microsoft/xprophetnet-large-wiki100-cased-xglue-qg | 36 | null | transformers | 6,705 | ## xprophetnet-large-wiki100-cased-xglue-qg
Cross-lingual version of [ProphetNet](https://arxiv.org/abs/2001.04063), pretrained on the [wiki100 xGLUE dataset](https://arxiv.org/abs/2004.01401) and fine-tuned on the xGLUE cross-lingual Question Generation task.
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.
ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the Fairseq version at the [github repo](https://github.com/microsoft/ProphetNet).
xProphetNet also serves as the baseline model for the xGLUE cross-lingual natural language generation tasks.
For xGLUE cross-lingual NLG tasks, xProphetNet is fine-tuned with English data, but performs inference on both English and other, zero-shot, language data.
### Usage
A quick usage example:
```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-qg')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-qg')
EN_SENTENCE = "Google left China in 2010"
ZH_SENTENCE = "Google在2010年离开中国"
inputs = tokenizer([EN_SENTENCE, ZH_SENTENCE], padding=True, max_length=256, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True)
print([tokenizer.decode(g) for g in summary_ids])
```
### Citation
```bibtex
@article{yan2020prophetnet,
title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
journal={arXiv preprint arXiv:2001.04063},
year={2020}
}
```
|
mrm8488/bert-spanish-cased-finedtuned-ner | e39fef87da28af909803c1d65d3d6f36d080ee3f | 2021-05-20T00:34:37.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/bert-spanish-cased-finedtuned-ner | 36 | null | transformers | 6,706 | Entry not found |
mrm8488/t5-base-finetuned-Reddit-TIFU-TLDR | 774a3d8584d9307ca2c861c0464773bca6bad16e | 2020-08-03T14:57:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-Reddit-TIFU-TLDR | 36 | null | transformers | 6,707 | Entry not found |
projecte-aina/roberta-base-ca-cased-tc | 59f67a70f7716e152c22bf5faa85d6c285cc99a1 | 2022-02-24T08:34:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"ca",
"dataset:projecte-aina/tecla",
"arxiv:1907.11692",
"transformers",
"catalan",
"text classification",
"tecla",
"CaText",
"Catalan Textual Corpus",
"model-index"
] | text-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-cased-tc | 36 | 1 | transformers | 6,708 | ---
language:
- ca
tags:
- "catalan"
- "text classification"
- "tecla"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/tecla"
metrics:
- accuracy
model-index:
- name: roberta-base-ca-cased-tc
results:
- task:
type: text-classification
dataset:
name: tecla
type: projecte-aina/tecla
metrics:
- name: Accuracy
type: accuracy
value: 0.740388810634613
widget:
- text: "Els Pets presenten el seu nou treball al Palau Sant Jordi."
- text: "Els barcelonins incrementen un 23% l’ús del cotxe des de l’inici de la pandèmia."
- text: "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."
- text: "Majors de 60 anys i sanitaris començaran a rebre la tercera dosi de la vacuna covid els propers dies."
- text: "Els cinemes Verdi estrenen Verdi Classics, un nou canal de televisió."
---
# Catalan BERTa (RoBERTa-base) finetuned for Text Classification.
The **roberta-base-ca-cased-tc** is a Text Classification (TC) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
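A minimal usage sketch with the `transformers` pipeline (not taken from the official CLUB scripts), reusing one of the widget examples above:
```python
from transformers import pipeline

# loads the fine-tuned BERTa text classifier and its tokenizer from the Hub
classifier = pipeline("text-classification", model="projecte-aina/roberta-base-ca-cased-tc")
print(classifier("Els Pets presenten el seu nou treball al Palau Sant Jordi."))
```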
## Datasets
We used the TC dataset in Catalan called [TeCla](https://huggingface.co/datasets/projecte-aina/tecla) for training and evaluation.
## Evaluation and results
We evaluated the _roberta-base-ca-cased-tc_ on the TeCla test set against standard multilingual and monolingual baselines:
| Model | TeCla (accuracy) |
| ------------|:-------------|
| roberta-base-ca-cased-tc | **74.04** |
| mBERT | 70.56 |
| XLM-RoBERTa | 71.68 |
| WikiBERT-ca | 73.22 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Citing
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
``` |
pszemraj/t5-base-askscience-lfqa | 45c775f403de3a92faec72ee3f6d28377d130097 | 2022-03-13T22:37:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:vblagoje/lfqa",
"transformers",
"qa",
"askscience",
"lfqa",
"information retrieval",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | pszemraj | null | pszemraj/t5-base-askscience-lfqa | 36 | 0 | transformers | 6,709 | ---
license: apache-2.0
language:
- en
tags:
- t5
- qa
- askscience
- lfqa
- information retrieval
datasets:
- vblagoje/lfqa
metrics:
- rouge
widget:
- text: "why hasn't humanity expanded to live on other planets in our solar system?"
example_title: "solar system"
- text: "question: what is a probability distribution? context: I am just learning about statistics."
example_title: "probability distribution"
- text: "question: What are the underlying physical processes by which exercise helps us lose weight? context: I started working out two weeks ago and already feel a lot better, and started to think about it and became deeply confused."
example_title: "pumpen"
- text: "what is a neural network?"
example_title: "deep learning"
- text: "What is the process that computers use to understand human language in deep learning models?"
example_title: "NLP"
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 4
repetition_penalty: 3.51
length_penalty: 0.8
num_beams: 4
early_stopping: True
---
# checkpoints
- This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the `vblagoje/lfqa` dataset, with training duration of 2 epochs, for a (_somewhat_) apples-to-apples comparison with [t5-base](https://huggingface.co/pszemraj/t5-base-askscience) on the standard eli5 dataset.
- This checkpoint does seem to be more coherent than t5-base on the original dataset.
- Compared to [bart on lfqa](https://huggingface.co/vblagoje/bart_lfqa), it seems to be able to respond to some questions independently of retrieval.
> NOTE: the inference API is limited to generating approx. 64 chars for runtime reasons, for longer outputs try using it in python as a transformers pipeline object.
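A minimal sketch of such a pipeline call (illustrative; it reuses the generation parameters from the `inference` section of this card and one of the widget questions):
```python
from transformers import pipeline

lfqa = pipeline("text2text-generation", model="pszemraj/t5-base-askscience-lfqa")
result = lfqa(
    "why hasn't humanity expanded to live on other planets in our solar system?",
    max_length=64,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.51,
    length_penalty=0.8,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["generated_text"])
```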
## Intended uses & limitations
- Q&A, information retrieval
- it is probably better to use it with a [retrieval pipeline](https://github.com/deepset-ai/haystack) than alone
## Training and evaluation data
- See the linked dataset. The dataset was filtered to include only the `askscience` subreddit in an attempt to focus on academic/technical queries.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
seduerr/paiintent | 5d5b048b064dc90fe29bb64b6e36479794b98e22 | 2021-03-20T05:20:17.000Z | [
"pytorch",
"squeezebert",
"en",
"dataset:mulit_nli",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | seduerr | null | seduerr/paiintent | 36 | 2 | transformers | 6,710 | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- squeezebert
datasets:
- mulit_nli
metrics:
- accuracy
---
# SqueezeBERT |
shashank2123/t5-finetuned-for-GEC | 86f329d570afca7636286d61318be160d23e8bf9 | 2021-08-05T06:16:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | shashank2123 | null | shashank2123/t5-finetuned-for-GEC | 36 | null | transformers | 6,711 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: t5-finetuned-for-GEC
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 0.3571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetuned-for-GEC
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3949
- Bleu: 0.3571
- Gen Len: 19.0
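A hypothetical usage sketch via the `text2text-generation` pipeline (the exact input format used during fine-tuning is not documented in this card, so the example sentence is an assumption):
```python
from transformers import pipeline

# load the T5 model fine-tuned for grammatical error correction (GEC)
gec = pipeline("text2text-generation", model="shashank2123/t5-finetuned-for-GEC")
# example ungrammatical sentence; the output should be a corrected version
print(gec("He go to school every day.")[0]["generated_text"])
```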
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.3958 | 1.0 | 4053 | 0.4236 | 0.3493 | 19.0 |
| 0.3488 | 2.0 | 8106 | 0.4076 | 0.3518 | 19.0 |
| 0.319 | 3.0 | 12159 | 0.3962 | 0.3523 | 19.0 |
| 0.3105 | 4.0 | 16212 | 0.3951 | 0.3567 | 19.0 |
| 0.3016 | 5.0 | 20265 | 0.3949 | 0.3571 | 19.0 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
sosuke/ease-bert-base-multilingual-cased | 71d445876280d99aa674a9f32bcca3542af80e8f | 2021-12-07T14:19:07.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sosuke | null | sosuke/ease-bert-base-multilingual-cased | 36 | null | transformers | 6,712 | Entry not found |
speechbrain/asr-crdnn-commonvoice-de | 54aa45ffca30cee245d29a499ba954a7eab30bb3 | 2021-11-30T00:36:12.000Z | [
"de",
"dataset:common_voice",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-crdnn-commonvoice-de | 36 | null | speechbrain | 6,713 | ---
language: "de"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- common_voice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# CRDNN with CTC/Attention trained on CommonVoice 7.0 German (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (German Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test CER | Test WER | GPUs |
|:-------------:|:--------------:|:--------------:| :--------:|
| 28.10.21 | 4.93 | 15.37 | 1xV100 16GB |
## Credits
The model is provided by [vitas.ai](https://www.vitas.ai/).
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (DE).
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalization and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```bash
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in German)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-de", savedir="pretrained_models/asr-crdnn-commonvoice-de")
asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-de/example-de.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (986a2175).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train.py hparams/train_de.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/13i7rdgVX7-qZ94Rtj6OdUgU-S6BbKKvw?usp=sharing)
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
surajp/albert-base-sanskrit | a397b2fad852d6ac908ebf16b32fc457e442d0d8 | 2020-12-11T22:02:34.000Z | [
"pytorch",
"albert",
"feature-extraction",
"sa",
"transformers"
] | feature-extraction | false | surajp | null | surajp/albert-base-sanskrit | 36 | 2 | transformers | 6,714 | ---
language: sa
---
# ALBERT-base-Sanskrit
Explanation notebook (Colab): [SanskritALBERT.ipynb](https://colab.research.google.com/github/parmarsuraj99/suraj-parmar/blob/master/_notebooks/2020-05-02-SanskritALBERT.ipynb)
Size of the model is **46MB**
Example of usage:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("surajp/albert-base-sanskrit")
model = AutoModel.from_pretrained("surajp/albert-base-sanskrit")
enc = tokenizer.encode("ॐ सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः । सर्वे भद्राणि पश्यन्तु मा कश्चिद्दुःखभाग्भवेत् । ॐ शान्तिः शान्तिः शान्तिः ॥")
print(tokenizer.decode(enc))
ps = model(torch.tensor(enc).unsqueeze(1))
print(ps[0].shape)
```
```
Output:
--------
[CLS] ॐ सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः । सर्वे भद्राणि पश्यन्तु मा कश्चिद्दुःखभाग्भवेत् । ॐ शान्तिः शान्तिः शान्तिः ॥[SEP]
torch.Size([28, 1, 768])
```
> Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99)
> Made with <span style="color: #e25555;">♥</span> in India
|
drAbreu/bioBERT-NER-NCBI_disease | 4e888d64cbd0f2fda938f8ad7b25094866e137c9 | 2022-03-15T14:42:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:ncbi_disease",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | drAbreu | null | drAbreu/bioBERT-NER-NCBI_disease | 36 | null | transformers | 6,715 | ---
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bioBERT-NER-NCBI_disease
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ncbi_disease
type: ncbi_disease
args: ncbi_disease
metrics:
- name: Precision
type: precision
value: 0.8136200716845878
- name: Recall
type: recall
value: 0.8653113087674714
- name: F1
type: f1
value: 0.8386699507389163
- name: Accuracy
type: accuracy
value: 0.9850187265917603
widget:
- text: "This model finds disease names such as Cholera, Cancer or COVID"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bioBERT-NER-NCBI_disease
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.8136
- Recall: 0.8653
- F1: 0.8387
- Accuracy: 0.9850
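A hypothetical usage sketch with the `transformers` token-classification pipeline (the example sentence mirrors the widget text above):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="drAbreu/bioBERT-NER-NCBI_disease",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("This model finds disease names such as Cholera, Cancer or COVID"))
```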
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0972 | 1.0 | 680 | 0.0688 | 0.7435 | 0.7624 | 0.7528 | 0.9794 |
| 0.0397 | 2.0 | 1360 | 0.0508 | 0.7952 | 0.8780 | 0.8345 | 0.9840 |
| 0.0118 | 3.0 | 2040 | 0.0598 | 0.8136 | 0.8653 | 0.8387 | 0.9850 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Aureliano/distilbert-base-uncased-if | 0e0f686b700a3871d7b64ade5f2ee282d3352e38 | 2022-03-25T00:06:03.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"transformers",
"license:apache-2.0"
] | text-classification | false | Aureliano | null | Aureliano/distilbert-base-uncased-if | 36 | null | transformers | 6,716 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased) for Interactive Fiction
[`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) finetuned on a dataset of Interactive
Fiction commands.
Details on the datasets can be found [here](https://github.com/aporporato/jericho-corpora).
The resulting model scored an accuracy of 0.976253 on the WordNet task test set.
## How to use the discriminator in `transformers`
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = discriminator.config.id2label[tf.math.argmax(prediction).numpy()]
print(text, ":", label) # take.v.04 -> "get into one's hands, take physically"
```
## How to use the discriminator in `transformers` on a custom dataset
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModel, TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used:
# you should fine-tune the model on your own, larger corpus of IF commands
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
    label2id[l] = i
    id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
label2id=label2id,
id2label=id2label)
discriminator.distilbert = TFAutoModel.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 20
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
    logits, labels = eval_predictions
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
## How to use in a Rasa pipeline
The model can integrated in a Rasa pipeline through
a [`LanguageModelFeaturizer`](https://rasa.com/docs/rasa/components#languagemodelfeaturizer)
```yaml
recipe: default.v1
language: en
pipeline:
# See https://rasa.com/docs/rasa/tuning-your-model for more information.
...
- name: "WhitespaceTokenizer"
...
- name: LanguageModelFeaturizer
model_name: "distilbert"
model_weights: "Aureliano/distilbert-base-uncased-if"
...
``` |
binay1999/bert-finetuned-ner-q | e0469a09097b1b7e906f1965f67399688e502747 | 2022-03-31T07:43:48.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | binay1999 | null | binay1999/bert-finetuned-ner-q | 36 | null | transformers | 6,717 | Entry not found |
Jackett/subject_classifier_extended | 75e5305761fb9a8b0e59ddb6d6bbc5601063e9ae | 2022-05-12T06:09:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Jackett | null | Jackett/subject_classifier_extended | 36 | null | transformers | 6,718 | Label mappings
{'LABEL_0':'Biology','LABEL_1':'Physics','LABEL_2':'Chemistry','LABEL_3':'Maths','LABEL_4':'Social Science','LABEL_5':'English'}
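A hypothetical usage sketch (assuming the pipeline returns the raw LABEL_x names, which the mapping above converts to subject names; the example sentence is illustrative):
```python
from transformers import pipeline

label_map = {'LABEL_0': 'Biology', 'LABEL_1': 'Physics', 'LABEL_2': 'Chemistry',
             'LABEL_3': 'Maths', 'LABEL_4': 'Social Science', 'LABEL_5': 'English'}

classifier = pipeline("text-classification", model="Jackett/subject_classifier_extended")
prediction = classifier("Newton's second law relates force, mass and acceleration.")[0]
# fall back to the raw label if the model config already carries readable names
print(label_map.get(prediction["label"], prediction["label"]), prediction["score"])
```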
Training data distribution
Physics - 7000
Maths - 7000
Biology - 7000
Chemistry - 7000
English - 5254
Social Science - 7000 |
Bryan0123/bert-hashtag-to-hashtag | b0465992773484c93603403975ebee2b272a1d6a | 2022-05-15T05:08:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Bryan0123 | null | Bryan0123/bert-hashtag-to-hashtag | 36 | null | transformers | 6,719 | Entry not found |
Salesforce/codegen-2B-nl | 25563dbce49290c7dceea25ebeb2f11b6fd0910b | 2022-06-28T17:45:54.000Z | [
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"transformers",
"license:bsd-3-clause"
] | text-generation | false | Salesforce | null | Salesforce/codegen-2B-nl | 36 | null | transformers | 6,720 | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-NL 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 2B** in the paper, where "NL" means it is pre-trained on the Pile and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 2B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-nl")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
allenai/aspire-contextualsentence-singlem-biomed | 924024ac9557f2e08acf9aea96e2a42749062119 | 2022-04-24T20:06:15.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | allenai | null | allenai/aspire-contextualsentence-singlem-biomed | 36 | null | transformers | 6,721 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `tsAspire` and represents the papers proposed multi-vector model for fine-grained scientific document similarity.
## Model Card
### Model description
This model is a BERT-based multi-vector model trained for fine-grained similarity of biomedical scientific papers. The model takes the title and abstract of a paper as input and represents the paper with contextual sentence vectors obtained by averaging the token representations of individual sentences - the whole title and abstract are encoded with cross-attention in the encoder block before obtaining sentence embeddings. The model is trained with a novel form of textual supervision that leverages co-citation contexts to align the sentences of positive examples. At test time, documents are ranked by the smallest L2 distance between sentences of the two documents, or by the smallest L2 distance between a set of query sentences and a candidate document.
### Training data
The model is trained on pairs of co-cited papers with their sentences aligned by the co-citation context in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers. For example - the papers in brackets below are all co-cited and each pair of papers would be used as a training pair with the abstracts sentence aligned using the co-citation context. Here the context notes why the cited papers are similar:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for fine-grained document similarity tasks in **biomedical** scientific text using multiple vectors per document. The model allows fine grained similarity by establishing sentence-to-sentence similarity between documents. The model is most well suited to an aspect conditional task formulation where a query might consist of sentence in a query document and candidates must be retrieved along this specified sentences. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as document or sentence level classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
This model can be used via the `transformers` library and some additional code to compute contextual sentence vectors.
View example usage in the model github repo: https://github.com/allenai/aspire#tsaspire
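For orientation, here is a minimal, unofficial sketch of the idea (encode the concatenated title and abstract once, then average the token states that fall inside each sentence's character span). The example title, sentences and the simple span bookkeeping below are illustrative assumptions, it assumes a fast tokenizer is available, and the repo code should be preferred for exact behavior:
```python
# Unofficial sketch: contextual sentence vectors by averaging token states per sentence span.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-contextualsentence-singlem-biomed")
model = AutoModel.from_pretrained("allenai/aspire-contextualsentence-singlem-biomed")
model.eval()

title = "Example title of a biomedical paper."
abstract_sentences = [
    "First sentence of the abstract.",
    "Second sentence of the abstract.",
]
sentences = [title] + abstract_sentences
text = " ".join(sentences)

# character span of each sentence inside the concatenated text
spans, cursor = [], 0
for sent in sentences:
    start = text.index(sent, cursor)
    spans.append((start, start + len(sent)))
    cursor = start + len(sent)

enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True, truncation=True)
offsets = enc.pop("offset_mapping")[0]
with torch.no_grad():
    token_states = model(**enc).last_hidden_state[0]  # (seq_len, hidden)

sentence_vectors = []
for s, e in spans:
    # keep tokens whose character offsets fall inside this sentence (skip special tokens)
    in_span = (offsets[:, 0] >= s) & (offsets[:, 1] <= e) & (offsets[:, 1] > offsets[:, 0])
    sentence_vectors.append(token_states[in_span].mean(dim=0))

sentence_vectors = torch.stack(sentence_vectors)  # one contextual vector per sentence
print(sentence_vectors.shape)
```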
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Here we report performance on RELISH (biomedical/English), and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent a abstract level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts. In using this sentence level model for abstract level retrieval we rank documents by the minimal L2 distance between the sentences in the query and candidate abstract.
### Evaluation results
The released model `aspire-contextualsentence-singlem-biomed` is compared against `allenai/specter`, a bi-encoder baseline and `all-mpnet-base-v2` a strong non-contextual sentence-bert baseline model trained on ~1 billion training examples. `aspire-contextualsentence-singlem-biomed`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released model `aspire-contextualsentence-singlem-biomed` is the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `all-mpnet-base-v2` | 17.35 | 43.87 | 52.92 | 69.69 |
| `specter` | 28.24 | 59.28 | 60.62 | 77.20 |
| `aspire-contextualsentence-singlem-biomed`<sup>*</sup> | 26.24 | 56.55 | 61.29 | 77.89 |
| `aspire-contextualsentence-singlem-biomed` | 26.68 | 57.21 | 61.06 | 77.70 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-contextualsentence-singlem-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-compsci): If you wanted to run on computer science papers and want to use a model trained to match a _single_ sentence between documents.
[`aspire-contextualsentence-multim-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-multim-biomed): If you wanted to run on biomedical papers and want to use a model trained to match _multiple_ sentences between documents.
[`aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci): If you wanted to run on computer science papers and want to use a model trained to match _multiple_ sentences between documents. |
cjvt/t5-sl-small | 7d95fb63be4e50c31e1c551b364338dc3b16ad7d | 2022-07-21T11:24:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"sl",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | cjvt | null | cjvt/t5-sl-small | 36 | null | transformers | 6,722 | ---
language:
- sl
license: cc-by-sa-4.0
---
# t5-sl-small
The t5-sl-small model is a Slovene T5 model. It has 8 encoder and 8 decoder layers, with about 60 million parameters in total.
It was trained for 5 epochs on the following corpora:
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
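A minimal loading sketch (assuming the standard `transformers` seq2seq classes; the card does not document a specific downstream prompt format, so the input sentence is only an illustration, and, as a pretrained rather than fine-tuned model, its raw generations are mainly a starting point for fine-tuning):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("cjvt/t5-sl-small")
model = AutoModelForSeq2SeqLM.from_pretrained("cjvt/t5-sl-small")

# "Ljubljana is the capital of Slovenia."
inputs = tokenizer("Ljubljana je glavno mesto Slovenije.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```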
## Changelog
2022-07-21: updated with v2 of the model; the old one is still accessible at [cjvt/legacy-t5-sl-small](https://huggingface.co/cjvt/legacy-t5-sl-small).
|
TehranNLP-org/bert-large-sst2 | 529f83169e5bcae3674837dfa38b8551307d4734 | 2022-05-03T17:01:28.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-large-sst2 | 36 | null | transformers | 6,723 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: SST2
type: ''
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.5091743119266054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.5092
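A hypothetical usage sketch with the text-classification pipeline (the example sentence is an assumption; given the accuracy reported above, predictions from this particular checkpoint may not be informative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="TehranNLP-org/bert-large-sst2")
print(classifier("A touching and well-acted film."))
```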
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: not_parallel
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2104 | 0.7985 | 0.5092 |
| 0.481 | 2.0 | 4208 | 0.7191 | 0.5092 |
| 0.7017 | 3.0 | 6312 | 0.6996 | 0.5092 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
agnihotri/cuad_contract_type | 1df1dd3a95cdc610dd4f4e1fa04d1c9ad2db8d42 | 2022-05-01T18:49:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:agnihotri/autotrain-data-contract_type",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | agnihotri | null | agnihotri/cuad_contract_type | 36 | null | transformers | 6,724 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- agnihotri/autotrain-data-contract_type
co2_eq_emissions: 0.07610944071640048
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 809725368
- CO2 Emissions (in grams): 0.07610944071640048
## Validation Metrics
- Loss: 0.05312908813357353
- Accuracy: 0.9911504424778761
- Macro F1: 0.9912087912087912
- Micro F1: 0.9911504424778761
- Weighted F1: 0.9908586988233007
- Macro Precision: 0.9942857142857143
- Micro Precision: 0.9911504424778761
- Weighted Precision: 0.9924146649810366
- Macro Recall: 0.99
- Micro Recall: 0.9911504424778761
- Weighted Recall: 0.9911504424778761
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/agnihotri/autotrain-contract_type-809725368
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("agnihotri/autotrain-contract_type-809725368", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("agnihotri/autotrain-contract_type-809725368", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
yelpfeast/byt5-base-english-ocr-correction | 19d5c2fd86b87f0a0febb7d2574878a0d68d5294 | 2022-07-09T16:37:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:wikitext",
"arxiv:2105.13626",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yelpfeast | null | yelpfeast/byt5-base-english-ocr-correction | 36 | 0 | transformers | 6,725 | ---
language: en
datasets:
- wikitext
---
# ByT5 base English fine tuned for OCR Correction
This model is a fine-tuned version of the [byt5-base](https://huggingface.co/google/byt5-base) for OCR Correction. ByT5 was
introduced in [this paper](https://arxiv.org/abs/2105.13626) and the idea and code for fine-tuning the model for OCR Correction was taken from [here](https://blog.ml6.eu/ocr-correction-with-byt5-5994d1217c07).
## Model description
byt5-base-english-ocr-correction is a model that takes the byt5-base model and fine-tunes it on an OCR correction dataset. The model has been fine-tuned to take an input sentence that has been incorrectly transcribed by an OCR model and output a sentence that corrects the errors.
The model was trained by taking the [wikitext dataset](https://huggingface.co/datasets/wikitext) and adding synthetic OCR errors using [nlpaug](https://github.com/makcedward/nlpaug).
## Intended uses & limitations
You can use the model for Text-to-Text Generation to remove errors caused by an OCR model.
### How to use
```python
from transformers import T5ForConditionalGeneration
import torch
import nlpaug.augmenter.char as nac
aug = nac.OcrAug(aug_char_p =0.4, aug_word_p = 0.6)
corrected_text = "Life is like a box of chocolates"
augmented_text = aug.augment(corrected_text)
model = T5ForConditionalGeneration.from_pretrained('yelpfeast/byt5-base-english-ocr-correction')
# nlpaug may return a list of augmented strings, so take the first element if needed
corrupted_text = augmented_text[0] if isinstance(augmented_text, list) else augmented_text
input_ids = torch.tensor([list(corrupted_text.encode("utf-8"))]) + 3  # add 3 for special tokens
labels = torch.tensor([list(corrected_text.encode("utf-8"))]) + 3  # add 3 for special tokens
loss = model(input_ids, labels=labels).loss  # forward pass
```
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
import nlpaug.augmenter.char as nac
aug = nac.OcrAug(aug_char_p =0.4, aug_word_p = 0.6)
corrected_text = "Life is like a box of chocolates"
augmented_text = aug.augment(corrected_text)
print(augmented_text)
model = T5ForConditionalGeneration.from_pretrained('yelpfeast/byt5-base-english-ocr-correction')
tokenizer = AutoTokenizer.from_pretrained("yelpfeast/byt5-base-english-ocr-correction")
inputs = tokenizer(augmented_text, return_tensors="pt", padding=True)
output_sequences = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
do_sample=False, # disable sampling to test if batching affects output
)
print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
```
### Limitations
The model has been trained on text that has been artificially corrupted to look like OCR errors. These errors may not be similar for all OCR models and hence the model may not do a good job at producing fully correct text. |
ArthurZ/opt-125m | bd9fb857e5b73be814e773d53baa9036e236af0e | 2022-06-21T20:29:12.000Z | [
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"transformers",
"generated_from_keras_callback",
"model-index"
] | text-generation | false | ArthurZ | null | ArthurZ/opt-125m | 36 | null | transformers | 6,726 | ---
tags:
- generated_from_keras_callback
model-index:
- name: opt-125m
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# opt-125m
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- TensorFlow 2.9.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
xhyi/CodeGen-350M-Multi | a49dc3e1e560fb96f0dd8e8ff4a2b4073a0ff231 | 2022-05-18T07:08:40.000Z | [
"pytorch",
"codegen",
"text-generation",
"en",
"transformers",
"text generation",
"causal-lm",
"license:bsd-3-clause"
] | text-generation | false | xhyi | null | xhyi/CodeGen-350M-Multi | 36 | null | transformers | 6,727 | ---
language:
- en
tags:
- codegen
- text generation
- pytorch
- causal-lm
license: bsd-3-clause
---
# Salesforce CodeGen
ported salesforce codegen models to work on huggingface transformers without any extra code (the model specific code is bundled)
## Overview
The CodeGen model was proposed by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong from Salesforce Research.
The abstract from the paper is the following: Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We plan to make the training library JaxFormer including checkpoints available as open source.
## Usage
`trust_remote_code=True` is needed because the [torch modules](https://github.com/salesforce/CodeGen/tree/main/jaxformer/hf/codegen) for the custom CodeGen model are bundled.
```python
from transformers import AutoModelForCausalLM, GPT2Tokenizer

model_folder = "xhyi/CodeGen-350M-Multi"  # hub id, or a path to a locally downloaded checkpoint
tokenizer = GPT2Tokenizer.from_pretrained(model_folder, local_files_only=True)  # local_files_only assumes the files are already cached
model = AutoModelForCausalLM.from_pretrained(model_folder, local_files_only=True, trust_remote_code=True)
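
# Hedged usage sketch (not from the original card): generate a short completion
inputs = tokenizer("def hello_world():", return_tensors="pt")
sample = model.generate(**inputs, max_length=64)
print(tokenizer.decode(sample[0], skip_special_tokens=True))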
``` |
PontifexMaximus/ArabicTranslator | 324ef839da8a24a2ca38181a3654960bd51e1a54 | 2022-05-26T01:25:24.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:opus_infopankki",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/ArabicTranslator | 36 | null | transformers | 6,728 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_infopankki
type: opus_infopankki
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 51.6508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7269
- Bleu: 51.6508
- Gen Len: 15.0812
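A minimal inference sketch (not part of the original card; it assumes the fine-tuned checkpoint keeps the Marian seq2seq architecture of its base model):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "PontifexMaximus/ArabicTranslator"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# "Hello world" in Arabic; any Arabic sentence works here
batch = tokenizer(["مرحبا بالعالم"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```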
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.4974 | 1.0 | 1587 | 1.3365 | 36.9061 | 15.3385 |
| 1.3768 | 2.0 | 3174 | 1.2139 | 39.5476 | 15.2079 |
| 1.2887 | 3.0 | 4761 | 1.1265 | 41.2771 | 15.2034 |
| 1.2076 | 4.0 | 6348 | 1.0556 | 42.6907 | 15.2687 |
| 1.1512 | 5.0 | 7935 | 0.9975 | 43.9498 | 15.2072 |
| 1.0797 | 6.0 | 9522 | 0.9491 | 45.224 | 15.2034 |
| 1.0499 | 7.0 | 11109 | 0.9101 | 46.1387 | 15.1651 |
| 1.0095 | 8.0 | 12696 | 0.8778 | 47.0586 | 15.1788 |
| 0.9833 | 9.0 | 14283 | 0.8501 | 47.8083 | 15.162 |
| 0.9601 | 10.0 | 15870 | 0.8267 | 48.5236 | 15.1784 |
| 0.9457 | 11.0 | 17457 | 0.8059 | 49.1717 | 15.095 |
| 0.9233 | 12.0 | 19044 | 0.7883 | 49.7742 | 15.1126 |
| 0.8964 | 13.0 | 20631 | 0.7736 | 50.2168 | 15.0917 |
| 0.8849 | 14.0 | 22218 | 0.7606 | 50.5583 | 15.0913 |
| 0.8751 | 15.0 | 23805 | 0.7504 | 50.8481 | 15.1108 |
| 0.858 | 16.0 | 25392 | 0.7417 | 51.1841 | 15.0989 |
| 0.8673 | 17.0 | 26979 | 0.7353 | 51.4271 | 15.0939 |
| 0.8548 | 18.0 | 28566 | 0.7306 | 51.535 | 15.0911 |
| 0.8483 | 19.0 | 30153 | 0.7279 | 51.6102 | 15.078 |
| 0.8614 | 20.0 | 31740 | 0.7269 | 51.6508 | 15.0812 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.7.1+cu110
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Sehong/kobart-QuestionGeneration | 09f94b7c18fb7e3ead56fb1c1314706ee4072425 | 2022-05-28T03:21:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ko",
"dataset:korquad",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | Sehong | null | Sehong/kobart-QuestionGeneration | 36 | 3 | transformers | 6,729 | ---
language: ko
tags:
- bart
datasets:
- korquad
license: mit
---
# Korean Question Generation Model
## Github
https://github.com/Seoneun/KoBART-Question-Generation
## Fine-tuning Dataset
KorQuAD 1.0
## Demo
https://huggingface.co/Sehong/kobart-QuestionGeneration
## How to use
```python
import torch
from transformers import PreTrainedTokenizerFast
from transformers import BartForConditionalGeneration
tokenizer = PreTrainedTokenizerFast.from_pretrained('Sehong/kobart-QuestionGeneration')
model = BartForConditionalGeneration.from_pretrained('Sehong/kobart-QuestionGeneration')
text = "1989년 2월 15일 여의도 농민 폭력 시위를 주도한 혐의(폭력행위등처벌에관한법률위반)으로 지명수배되었다. 1989년 3월 12일 서울지방검찰청 공안부는 임종석의 사전구속영장을 발부받았다. 같은 해 6월 30일 평양축전에 임수경을 대표로 파견하여 국가보안법위반 혐의가 추가되었다. 경찰은 12월 18일~20일 사이 서울 경희대학교에서 임종석이 성명 발표를 추진하고 있다는 첩보를 입수했고, 12월 18일 오전 7시 40분 경 가스총과 전자봉으로 무장한 특공조 및 대공과 직원 12명 등 22명의 사복 경찰을 승용차 8대에 나누어 경희대학교에 투입했다. 1989년 12월 18일 오전 8시 15분 경 서울청량리경찰서는 호위 학생 5명과 함께 경희대학교 학생회관 건물 계단을 내려오는 임종석을 발견, 검거해 구속을 집행했다. 임종석은 청량리경찰서에서 약 1시간 동안 조사를 받은 뒤 오전 9시 50분 경 서울 장안동의 서울지방경찰청 공안분실로 인계되었다. <unused0> 1989년 2월 15일"
raw_input_ids = tokenizer.encode(text)
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]
summary_ids = model.generate(torch.tensor([input_ids]))
print(tokenizer.decode(summary_ids.squeeze().tolist(), skip_special_tokens=True))
# <unused0> is the sep_token; it separates the content from the answer
```
|
miesnerjacob/distilbert-base-uncased-finetuned-squad-d5716d28 | a1935f2b809bfdb3f20190d1436036b9d25ab7c4 | 2022-05-30T17:27:30.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | miesnerjacob | null | miesnerjacob/distilbert-base-uncased-finetuned-squad-d5716d28 | 36 | null | transformers | 6,730 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
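A minimal question-answering sketch (not from the original card; it assumes this checkpoint exposes the standard extractive-QA head):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="miesnerjacob/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="Which dataset was the student fine-tuned on?",
    context="The DistilBERT student was fine-tuned on SQuAD v1.1 with a BERT teacher.",
)
print(result["answer"], result["score"])
```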
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
abhishek/autotrain-dog-vs-food | 195eb7819ea64a39de56c166e779a94038ce9a14 | 2022-06-22T14:51:28.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:abhishek/autotrain-data-vision_652fee16113a4f07a2452e021a22a934",
"dataset:sasha/dog-food",
"transformers",
"autotrain",
"model-index",
"co2_eq_emissions"
] | image-classification | false | abhishek | null | abhishek/autotrain-dog-vs-food | 36 | 1 | transformers | 6,731 | ---
tags: autotrain
datasets:
- abhishek/autotrain-data-vision_652fee16113a4f07a2452e021a22a934
- sasha/dog-food
co2_eq_emissions: 2.050948967287266
model-index:
- name: autotrain-dog-vs-food
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
metrics:
- name: Accuracy
type: accuracy
value: 0.9976190476190476
- task:
type: image-classification
name: Image Classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
config: sasha--dog-food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 1.0
verified: true
- name: Precision
type: precision
value: 1.0
verified: true
- name: Recall
type: recall
value: 1.0
verified: true
- name: AUC
type: auc
value: 1.0
verified: true
- name: F1
type: f1
value: 1.0
verified: true
- name: loss
type: loss
value: 0.001115015591494739
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 264300
- CO2 Emissions (in grams): 2.050948967287266
## Validation Metrics
- Loss: 0.009216072037816048
- Accuracy: 0.9976190476190476
- Macro F1: 0.9973261861865685
- Micro F1: 0.9976190476190476
- Weighted F1: 0.997621154535828
- Macro Precision: 0.9964539007092199
- Micro Precision: 0.9976190476190476
- Weighted Precision: 0.9976359338061465
- Macro Recall: 0.9982142857142857
- Micro Recall: 0.9976190476190476
- Weighted Recall: 0.9976190476190476 |
Dizzykong/charles-dickens | 2fd3ea9935cf68edf9813e233823f89397221e09 | 2022-06-27T21:13:14.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/charles-dickens | 36 | null | transformers | 6,732 | ---
tags:
- generated_from_trainer
model-index:
- name: charles-dickens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# charles-dickens
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
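A generic text-generation sketch (not part of the original card; it assumes the checkpoint loads with the standard GPT-2 classes):
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="Dizzykong/charles-dickens")
set_seed(42)
print(generator("It was the best of times,", max_length=40, num_return_sequences=1)[0]["generated_text"])
```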
|
ghadeermobasher/Original-BlueBERT-BioRED-Chem-512-5-30 | bc22321dfa27144ffd169ee601d37c5d0a26c4c6 | 2022-07-08T12:25:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-BioRED-Chem-512-5-30 | 36 | null | transformers | 6,733 | |
shengnan/v-shean-visualize-202207162206 | e1f80725195c07554513aedc0e60fa8228e5fe49 | 2022-07-16T14:24:36.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | shengnan | null | shengnan/v-shean-visualize-202207162206 | 36 | null | transformers | 6,734 | Entry not found |
Lancelot53/CV_bn_trained_on_Validation | 4f99077eab1d7618fe65195e6a4dcd233c1249d4 | 2022-07-19T17:02:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Lancelot53 | null | Lancelot53/CV_bn_trained_on_Validation | 36 | null | transformers | 6,735 | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: CV_bn_trained_on_Validation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CV_bn_trained_on_Validation
This model is a fine-tuned version of a local checkpoint (`./content/CV_bn_trained_on_Validation/checkpoint-6000`) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Wer: 0.3385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1104 | 1.25 | 2000 | 0.2535 | 0.3705 |
| 1.1069 | 2.49 | 4000 | 0.2369 | 0.3545 |
| 1.0344 | 3.74 | 6000 | 0.2181 | 0.3385 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mehdidn/finetuned_translation_fa_en | e81981288b703716902f8c16de6ef16f5057c3b2 | 2022-07-24T21:00:41.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mehdidn | null | mehdidn/finetuned_translation_fa_en | 36 | null | transformers | 6,736 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: finetuned_translation_fa_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_translation_fa_en
This model is a fine-tuned version of [persiannlp/mt5-small-parsinlu-opus-translation_fa_en](https://huggingface.co/persiannlp/mt5-small-parsinlu-opus-translation_fa_en) on the TEP (https://opus.nlpl.eu/TEP.php) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4370
- Bleu: 24.2331
- Gen Len: 11.6467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.6184 | 1.0 | 30987 | 1.4370 | 24.2331 | 11.6467 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
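A minimal translation sketch (not part of the original card; it assumes the raw Persian sentence is passed with no task prefix, as in the base model's examples):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mehdidn/finetuned_translation_fa_en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "Hello world" in Persian; any Persian sentence works here
inputs = tokenizer("سلام دنیا", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```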
|
fujiki/t5-efficient-xl-en2ja_train5 | b2e3fb1156d4be4b195c29d9fe88f10b72a5f687 | 2022-07-30T09:49:28.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-efficient-xl-en2ja_train5 | 36 | null | transformers | 6,737 | Entry not found |
AriakimTaiyo/gpt2-chat | e5389d7c7347eeb36dcf43d408e46022294ede26 | 2022-07-27T19:36:22.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"transformers",
"conversational",
"license:mit"
] | conversational | false | AriakimTaiyo | null | AriakimTaiyo/gpt2-chat | 36 | null | transformers | 6,738 | ---
language: en
license: mit
tags:
- conversational
---
# GPT-2 Large
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
## Model Details
**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team. |
derwahnsinn/gpt2-mediumADS | 6acccda4096dc57430ba122548e3f0a7c610791b | 2022-07-28T03:25:52.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | derwahnsinn | null | derwahnsinn/gpt2-mediumADS | 36 | null | transformers | 6,739 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-mediumADS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-mediumADS
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2710
- eval_runtime: 23.7095
- eval_samples_per_second: 61.157
- eval_steps_per_second: 7.676
- epoch: 20.37
- step: 3707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 29
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
soop/DialoGPT-medium-BaymaxBot | 3bb9ce4a628f7f641fb805bf7eb0d47a4c65c882 | 2022-07-29T17:39:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | soop | null | soop/DialoGPT-medium-BaymaxBot | 36 | null | transformers | 6,740 | ---
tags:
- conversational
---
# DialoGPT BaymaxBot
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | 3e48534705c153737cbec1c5748bb02359b7b239 | 2021-09-14T14:30:54.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | 35 | 2 | transformers | 6,741 | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-quarter** (`bert-base-arabic-camelbert-msa-quarter`), a model pre-trained on a quarter of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
|✔|`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.17437894642353058,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.042852893471717834,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]',
'score': 0.030925093218684196,
'token': 9331,
'token_str': 'البقاء'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.02964409440755844,
'token': 3088,
'token_str': 'الحب'},
{'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]',
'score': 0.028030086308717728,
'token': 17188,
'token_str': 'الكمال'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
DataikuNLP/TinyBERT_General_4L_312D | 33ec5b27fcd40369ff402c779baffe219f5360fe | 2021-09-02T08:09:47.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1909.10351",
"transformers"
] | null | false | DataikuNLP | null | DataikuNLP/TinyBERT_General_4L_312D | 35 | null | transformers | 6,742 | TinyBERT: Distilling BERT for Natural Language Understanding
========
**This model is a copy of [this model repository](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) from Huawei Noah at the specific commit `34707a33cd59a94ecde241ac209bf35103691b43`.**
TinyBERT is 7.5x smaller and 9.4x faster on inference than BERT-base and achieves competitive performances in the tasks of natural language understanding. It performs a novel transformer distillation at both the pre-training and task-specific learning stages. In general distillation, we use the original BERT-base without fine-tuning as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on the text from general domain, we obtain a general TinyBERT which provides a good initialization for the task-specific distillation. We here provide the general TinyBERT for your tasks at hand.
For more details about the techniques of TinyBERT, refer to our paper:
[TinyBERT: Distilling BERT for Natural Language Understanding](https://arxiv.org/abs/1909.10351)
Citation
========
If you find TinyBERT useful in your research, please cite the following paper:
```
@article{jiao2019tinybert,
title={Tinybert: Distilling bert for natural language understanding},
author={Jiao, Xiaoqi and Yin, Yichun and Shang, Lifeng and Jiang, Xin and Chen, Xiao and Li, Linlin and Wang, Fang and Liu, Qun},
journal={arXiv preprint arXiv:1909.10351},
year={2019}
}
```
|
Gerwin/bert-for-pac | 0b2f6225499890eb49f58658db00706ea3d3e8d2 | 2022-07-21T08:59:38.000Z | [
"pytorch",
"bert",
"text-classification",
"nl",
"transformers",
"passive",
"active",
"license:apache-2.0"
] | text-classification | false | Gerwin | null | Gerwin/bert-for-pac | 35 | null | transformers | 6,743 | ---
language:
- nl
tags:
- bert
- passive
- active
license: apache-2.0
---
## Dutch Fine-Tuned BERT For Passive/Active Voice Classification.
### Classification of passive ("lijdende vorm") and active ("bedrijvende vorm") sentences
#### Examples
Try the following examples in the Hosted inference API:
1. Jan werd opgehaald door zijn moeder.
2. Wie niet weg is, is gezien
3. Ik ben van plan om morgen te gaan werken
4. De makelaar heeft het nieuwe huis verkocht aan de bewoners die iets verderop wonen.
5. De koekjes die mama had gemaakt waren door de jongens allemaal opgegeten.
LABEL_0 = Active / Bedrijvend. LABEL_1 = Passive / Lijdend
Answers (what they should be):
1. 1
2. 1
3. 0
4. 0
5. 1
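A minimal classification sketch (not part of the original card; it uses the LABEL_0/LABEL_1 mapping documented above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Gerwin/bert-for-pac")
print(classifier("Jan werd opgehaald door zijn moeder."))  # expected: LABEL_1 (passive / lijdend)
```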
#### Basic Information
This model is fine-tuned on [BERTje](https://huggingface.co/GroNLP/bert-base-dutch-cased) for recognizing passive and active voice in Dutch sentences.
Contact me at [email protected] for further questions.
Gerwin |
GroNLP/gpt2-medium-dutch-embeddings | 0ea2e72f4d0a68c02ca02e62c7f3deadc956c1e7 | 2021-05-21T09:49:13.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"nl",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-medium"
] | text-generation | false | GroNLP | null | GroNLP/gpt2-medium-dutch-embeddings | 35 | null | transformers | 6,744 | ---
language: nl
tags:
- adaption
- recycled
- gpt2-medium
pipeline_tag: text-generation
---
# GPT-2 recycled for Dutch (medium, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-dutch-embeddings")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Helsinki-NLP/opus-mt-de-bg | 346965fed40253783eaf06a00664d19f4810e46e | 2021-01-18T07:57:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-bg | 35 | null | transformers | 6,745 | ---
language:
- de
- bg
tags:
- translation
license: apache-2.0
---
### deu-bul
* source group: German
* target group: Bulgarian
* OPUS readme: [deu-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-bul/README.md)
* model: transformer
* source language(s): deu
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.bul | 50.7 | 0.683 |
### System Info:
- hf_name: deu-bul
- source_languages: deu
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'bg']
- src_constituents: {'deu'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-bul/opus-2020-07-03.test.txt
- src_alpha3: deu
- tgt_alpha3: bul
- short_pair: de-bg
- chrF2_score: 0.6829999999999999
- bleu: 50.7
- brevity_penalty: 0.98
- ref_len: 2032.0
- src_name: German
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: de
- tgt_alpha2: bg
- prefer_old: False
- long_pair: deu-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ig-en | d09fef787bb0f194ca6ba9bb2768c84171a5820d | 2021-09-09T22:11:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ig",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ig-en | 35 | null | transformers | 6,746 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ig-en
* source languages: ig
* target languages: en
* OPUS readme: [ig-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.en | 36.7 | 0.520 |
| Tatoeba.ig.en | 46.3 | 0.528 |
|
Jung/t5-large | 03ad92b724052c75982c0690d033aaa9f32d0287 | 2021-06-23T02:42:01.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jung | null | Jung/t5-large | 35 | null | transformers | 6,747 | Entry not found |
KETI-AIR/ke-t5-base-newslike | a6fb89dae4a8e42add5c16ca4978d365ffbc5563 | 2021-06-23T02:48:53.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-base-newslike | 35 | null | transformers | 6,748 | Entry not found |
Kirili4ik/mbart_ruDialogSum | 13b82f3b5531ba49ee8140c46c7a4501cd882dc6 | 2022-01-26T10:36:21.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"ru",
"ru-RU",
"dataset:IlyaGusev/gazeta",
"dataset:samsum",
"dataset:samsum (translated to RU)",
"transformers",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kirili4ik | null | Kirili4ik/mbart_ruDialogSum | 35 | 2 | transformers | 6,749 | ---
language:
- ru
- ru-RU
tags:
- mbart
inference:
parameters:
no_repeat_ngram_size: 4,
num_beams : 5
datasets:
- IlyaGusev/gazeta
- samsum
- samsum (translated to RU)
widget:
- text: |
Джефф: Могу ли я обучить модель 🤗 Transformers на Amazon SageMaker?
Филипп: Конечно, вы можете использовать новый контейнер для глубокого обучения HuggingFace.
Джефф: Хорошо.
Джефф: и как я могу начать?
Джефф: где я могу найти документацию?
Филипп: ок, ок, здесь можно найти все: https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
model-index:
- name: "mbart_ruDialogSum"
results:
- task:
name: Abstractive Dialogue Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus (translated to Russian)"
type: samsum
metrics:
- name: Validation ROGUE-1
type: rogue-1
value: 34.5
- name: Validation ROGUE-L
type: rogue-l
value: 33
- name: Test ROGUE-1
type: rogue-1
value: 31
- name: Test ROGUE-L
type: rogue-l
value: 28
---
### 📝 Description
MBart for Russian summarization, fine-tuned for **dialogue** summarization.
This model was first fine-tuned by [Ilya Gusev](https://hf.co/IlyaGusev) on the [Gazeta dataset](https://huggingface.co/datasets/IlyaGusev/gazeta). We then **fine-tuned** that model on the [SAMSum dataset](https://huggingface.co/datasets/samsum) **translated to Russian** using the Google Translate API.
🤗 Moreover! We have implemented a **! telegram bot [@summarization_bot](https://t.me/summarization_bot) !** with the inference of this model. Add it to the chat and get summaries instead of dozens of spam messages! 🤗
### ❓ How to use with code
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration
# Download model and tokenizer
model_name = "Kirili4ik/mbart_ruDialogSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)
model.eval()
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
top_k=0,
num_beams=3,
no_repeat_ngram_size=3
)[0]
summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
|
MohamedZaitoon/bart-fine-tune | 87434c291e2aa58d368f638ea470a0387bd084dc | 2021-06-13T17:27:59.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"dataset:CNN/Daily-mail",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | MohamedZaitoon | null | MohamedZaitoon/bart-fine-tune | 35 | null | transformers | 6,750 | ---
tags:
- summarization
datasets:
- CNN/Daily-mail
metrics:
- ROUGE
---
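## Usage

A minimal summarization sketch (an assumption that the checkpoint follows the standard BART summarization interface; the input text is a placeholder):

```python
from transformers import pipeline

# Hypothetical usage: a BART checkpoint fine-tuned on CNN/Daily Mail-style data
# should work with the standard summarization pipeline.
summarizer = pipeline("summarization", model="MohamedZaitoon/bart-fine-tune")

article = "Your long news article goes here. " * 20  # placeholder text
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```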
|
NlpHUST/t5-en-vi-base | b55b88c9d7ed504032dd46c0c27117882f0d8fdf | 2021-06-23T03:30:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:1706.05565",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NlpHUST | null | NlpHUST/t5-en-vi-base | 35 | null | transformers | 6,751 | # T5-EN-VI-BASE:Pretraining Text-To-Text Transfer Transformer for English Vietnamese Translation
# Dataset
The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).
For all experiments the corpus was split into training, development and test set:
| Data set | Sentences | Download
| ----------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------
| Training | 133,317 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz`
| Development | 1,553 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz`
| Test | 1,268 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz`
## Results
The results on test set.
| Model | BLEU (Beam Search)
| ----------------------------------------------------------------------------------------------------- | ------------------
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30
| Sequence-to-sequence model with attention | 26.10
| Neural Phrase-based Machine Translation [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69
| Neural Phrase-based Machine Translation + LM [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased)
|t5-en-vi-small (fineturning with training data) | **32.38** (cased) / **33.19** (uncased)
| t5-en-vi-base (pretraining, without training data) | **29.66** (cased) / **30.37** (uncased)
#### Example Usage
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-base")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-base")
model.to(device)
src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=128,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```
#### Output
``` bash
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]). |
SEBIS/code_trans_t5_base_code_documentation_generation_javascript_transfer_learning_finetune | cb90c9642383f99d0f7e98746a36c45d92c103d9 | 2021-06-23T04:33:14.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_javascript_transfer_learning_finetune | 35 | null | transformers | 6,752 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the javascript function/method.
## Intended uses & limitations
The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/javascript/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V3-8 for 35,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
Shushant/biobert-v1.1-biomedicalQuestionAnswering | 194ea3796afe80d6cdc807bf3aad43f5f0827f83 | 2022-01-16T15:34:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Shushant | null | Shushant/biobert-v1.1-biomedicalQuestionAnswering | 35 | 2 | transformers | 6,753 | ---
tags:
- generated_from_trainer
model-index:
- name: biobert-v1.1-biomedicalQuestionAnswering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-biomedicalQuestionAnswering
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9009
## Model description
More information needed
## Intended uses & limitations
More information needed
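A minimal extractive-QA sketch (an assumption based on the model type; the question and context below are hypothetical placeholders):

```python
from transformers import pipeline

# Hypothetical example: standard extractive question answering with this checkpoint.
qa = pipeline("question-answering", model="Shushant/biobert-v1.1-biomedicalQuestionAnswering")

result = qa(
    question="Which enzyme does aspirin inhibit?",
    context="Aspirin irreversibly inhibits the enzyme cyclooxygenase (COX), reducing prostaglandin synthesis.",
)
print(result["answer"], result["score"])
```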
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.7409 |
| No log | 2.0 | 44 | 3.1852 |
| No log | 3.0 | 66 | 3.0342 |
| No log | 4.0 | 88 | 2.9416 |
| No log | 5.0 | 110 | 2.9009 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
SimonThormeyer/movie-plot-generator | 800dcf9814a57090027cbc2c20fc7c6dc1cc3f63 | 2021-07-25T10:26:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | SimonThormeyer | null | SimonThormeyer/movie-plot-generator | 35 | null | transformers | 6,754 | Entry not found |
Soonhwan-Kwon/xlm-roberta-xlarge | 9b58a97eb09aa1277777fba517a3c1390c96f52c | 2021-11-14T09:03:57.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Soonhwan-Kwon | null | Soonhwan-Kwon/xlm-roberta-xlarge | 35 | null | transformers | 6,755 | Entry not found |
addy88/wav2vec2-kannada-stt | 6b2779d4471c1d54ef31838d91b6756fcf883d03 | 2021-12-19T13:35:26.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-kannada-stt | 35 | null | transformers | 6,756 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-kannada-stt")
model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-kannada-stt")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
``` |
akahana/indonesia-emotion-roberta | 36ec7949f3d2fb7495139dc718b5f55d1557f35b | 2021-12-08T02:24:22.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"id",
"transformers"
] | text-classification | false | akahana | null | akahana/indonesia-emotion-roberta | 35 | null | transformers | 6,757 | ---
language: "id"
widget:
- text: "dia orang yang baik ya bunds."
---
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-emotion-roberta"
emotion = pipeline('text-classification',
model=path,device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
[{'label': 'BAHAGIA', 'score': 0.8790940046310425}]
``` |
allenai/unifiedqa-v2-t5-11b-1363200 | 93489bb8bb4f70af140b9258e9d421ee954a2866 | 2022-02-22T19:16:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/unifiedqa-v2-t5-11b-1363200 | 35 | 2 | transformers | 6,758 | # Further details: https://github.com/allenai/unifiedqa |
amtam0/timer-ner-fr | d4c6f9038f0d6a969f6b5cbb67ece4de30960252 | 2022-03-03T14:12:18.000Z | [
"pytorch",
"fr",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | amtam0 | null | amtam0/timer-ner-fr | 35 | null | flair | 6,759 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: fr
widget:
- text: 'génère 27 séries de 54 seconde '
- text: ' 9 cycles de 17 minute '
- text: 'initie 17 sets de 44 secondes 297 minutes entre séries'
- text: ' 13 sets de 88 secondes 225 minutes 49 entre chaque série'
- text: 'génère 39 séries de 19 minute 21 minute 45 entre séries'
- text: 'débute 47 sets de 6 heures '
- text: 'débute 1 cycle de 25 minutes 48 23 minute 32 entre chaque série'
- text: 'commence 23 séries de 18 heure et demi 25 minutes 41 entre séries'
- text: ' 13 cycles de 52 secondes '
- text: 'crée 31 série de 60 secondes '
- text: ' 7 set de 36 secondes 139 minutes 34 entre séries'
- text: 'commence 37 sets de 51 minute 25 295 minute entre chaque série'
- text: 'crée 11 cycles de 72 seconde 169 minute 15 entre chaque série'
- text: 'initie 5 série de 33 minutes 48 '
- text: 'crée 23 set de 1 minute 46 279 minutes 50 entre chaque série'
- text: 'génère 41 série de 35 minutes 55 '
- text: 'lance 11 cycles de 4 heures '
- text: 'crée 47 cycle de 28 heure moins quart 243 minutes 45 entre chaque série'
- text: 'initie 23 set de 36 secondes '
- text: 'commence 37 sets de 24 heures et quart '
---
#### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5)
7-class NER French model using [Flair TransformerWordEmbeddings - camembert-base](https://github.com/flairNLP/flair/).
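A minimal tagging sketch (an assumption that the checkpoint loads through Flair's `SequenceTagger`; the sentence is taken from the widget examples):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face Hub (assumes a standard Flair checkpoint).
tagger = SequenceTagger.load("amtam0/timer-ner-fr")

sentence = Sentence("génère 27 séries de 54 seconde")
tagger.predict(sentence)

# Print the sentence with the predicted timer tags attached.
print(sentence.to_tagged_string())
```

The tag set produced by the model is listed in the table below.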
| **tag** | **meaning** |
|---------------------------------|-----------|
| nb_rounds | Number of rounds |
| duration_br_sd | Duration btwn rounds in seconds |
| duration_br_min | Duration btwn rounds in minutes |
| duration_br_hr | Duration btwn rounds in hours |
| duration_wt_sd | workout duration in seconds |
| duration_wt_min | workout duration in minutes |
| duration_wt_hr | workout duration in hours |
---
The model was trained on a synthetic dataset (still perfectible); example sentences are shown in the widget. |
anantoj/wav2vec2-xls-r-1b-korean | 1ef4266ea08ffe9376cefdc0fe55e8bfaae69fcf | 2022-03-23T18:29:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ko",
"dataset:kresnik/zeroth_korean",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anantoj | null | anantoj/wav2vec2-xls-r-1b-korean | 35 | null | transformers | 6,760 | ---
language: ko
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- kresnik/zeroth_korean
model-index:
- name: Wav2Vec2 XLS-R 1B Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ko
metrics:
- name: Test WER
type: wer
value: 82.07
- name: Test CER
type: cer
value: 42.12
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ko
metrics:
- name: Test WER
type: wer
value: 82.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2 XLS-R 1B Korean
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the KRESNIK/ZEROTH_KOREAN - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Wer: 0.0449
## Model description
More information needed
## Intended uses & limitations
More information needed
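A minimal transcription sketch (an assumption based on the model type; the audio path is a placeholder and should point to 16 kHz Korean speech):

```python
from transformers import pipeline

# Hypothetical example: CTC decoding through the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="anantoj/wav2vec2-xls-r-1b-korean")

# Replace with a real path to a 16 kHz mono audio file.
print(asr("korean_sample.wav")["text"])
```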
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.603 | 0.72 | 500 | 4.6572 | 0.9985 |
| 2.6314 | 1.44 | 1000 | 2.0424 | 0.9256 |
| 2.2708 | 2.16 | 1500 | 0.9889 | 0.6989 |
| 2.1769 | 2.88 | 2000 | 0.8366 | 0.6312 |
| 2.1142 | 3.6 | 2500 | 0.7555 | 0.5998 |
| 2.0084 | 4.32 | 3000 | 0.7144 | 0.6003 |
| 1.9272 | 5.04 | 3500 | 0.6311 | 0.5461 |
| 1.8687 | 5.75 | 4000 | 0.6252 | 0.5430 |
| 1.8186 | 6.47 | 4500 | 0.5491 | 0.4988 |
| 1.7364 | 7.19 | 5000 | 0.5463 | 0.4959 |
| 1.6809 | 7.91 | 5500 | 0.4724 | 0.4484 |
| 1.641 | 8.63 | 6000 | 0.4679 | 0.4461 |
| 1.572 | 9.35 | 6500 | 0.4387 | 0.4236 |
| 1.5256 | 10.07 | 7000 | 0.3970 | 0.4003 |
| 1.5044 | 10.79 | 7500 | 0.3690 | 0.3893 |
| 1.4563 | 11.51 | 8000 | 0.3752 | 0.3875 |
| 1.394 | 12.23 | 8500 | 0.3386 | 0.3567 |
| 1.3641 | 12.95 | 9000 | 0.3290 | 0.3467 |
| 1.2878 | 13.67 | 9500 | 0.2893 | 0.3135 |
| 1.2602 | 14.39 | 10000 | 0.2723 | 0.3029 |
| 1.2302 | 15.11 | 10500 | 0.2603 | 0.2989 |
| 1.1865 | 15.83 | 11000 | 0.2440 | 0.2794 |
| 1.1491 | 16.55 | 11500 | 0.2500 | 0.2788 |
| 1.093 | 17.27 | 12000 | 0.2279 | 0.2629 |
| 1.0367 | 17.98 | 12500 | 0.2076 | 0.2443 |
| 0.9954 | 18.7 | 13000 | 0.1844 | 0.2259 |
| 0.99 | 19.42 | 13500 | 0.1794 | 0.2179 |
| 0.9385 | 20.14 | 14000 | 0.1765 | 0.2122 |
| 0.8952 | 20.86 | 14500 | 0.1706 | 0.1974 |
| 0.8841 | 21.58 | 15000 | 0.1791 | 0.1969 |
| 0.847 | 22.3 | 15500 | 0.1780 | 0.2060 |
| 0.8669 | 23.02 | 16000 | 0.1608 | 0.1862 |
| 0.8066 | 23.74 | 16500 | 0.1447 | 0.1626 |
| 0.7908 | 24.46 | 17000 | 0.1457 | 0.1655 |
| 0.7459 | 25.18 | 17500 | 0.1350 | 0.1445 |
| 0.7218 | 25.9 | 18000 | 0.1276 | 0.1421 |
| 0.703 | 26.62 | 18500 | 0.1177 | 0.1302 |
| 0.685 | 27.34 | 19000 | 0.1147 | 0.1305 |
| 0.6811 | 28.06 | 19500 | 0.1128 | 0.1244 |
| 0.6444 | 28.78 | 20000 | 0.1120 | 0.1213 |
| 0.6323 | 29.5 | 20500 | 0.1137 | 0.1166 |
| 0.5998 | 30.22 | 21000 | 0.1051 | 0.1107 |
| 0.5706 | 30.93 | 21500 | 0.1035 | 0.1037 |
| 0.5555 | 31.65 | 22000 | 0.1031 | 0.0927 |
| 0.5389 | 32.37 | 22500 | 0.0997 | 0.0900 |
| 0.5201 | 33.09 | 23000 | 0.0920 | 0.0912 |
| 0.5146 | 33.81 | 23500 | 0.0929 | 0.0947 |
| 0.515 | 34.53 | 24000 | 0.1000 | 0.0953 |
| 0.4743 | 35.25 | 24500 | 0.0922 | 0.0892 |
| 0.4707 | 35.97 | 25000 | 0.0852 | 0.0808 |
| 0.4456 | 36.69 | 25500 | 0.0855 | 0.0779 |
| 0.443 | 37.41 | 26000 | 0.0843 | 0.0738 |
| 0.4388 | 38.13 | 26500 | 0.0816 | 0.0699 |
| 0.4162 | 38.85 | 27000 | 0.0752 | 0.0645 |
| 0.3979 | 39.57 | 27500 | 0.0761 | 0.0621 |
| 0.3889 | 40.29 | 28000 | 0.0771 | 0.0625 |
| 0.3923 | 41.01 | 28500 | 0.0755 | 0.0598 |
| 0.3693 | 41.73 | 29000 | 0.0730 | 0.0578 |
| 0.3642 | 42.45 | 29500 | 0.0739 | 0.0598 |
| 0.3532 | 43.17 | 30000 | 0.0712 | 0.0553 |
| 0.3513 | 43.88 | 30500 | 0.0762 | 0.0516 |
| 0.3349 | 44.6 | 31000 | 0.0731 | 0.0504 |
| 0.3305 | 45.32 | 31500 | 0.0725 | 0.0507 |
| 0.3285 | 46.04 | 32000 | 0.0709 | 0.0489 |
| 0.3179 | 46.76 | 32500 | 0.0667 | 0.0467 |
| 0.3158 | 47.48 | 33000 | 0.0653 | 0.0494 |
| 0.3033 | 48.2 | 33500 | 0.0638 | 0.0456 |
| 0.3023 | 48.92 | 34000 | 0.0644 | 0.0464 |
| 0.2975 | 49.64 | 34500 | 0.0643 | 0.0455 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
aubmindlab/aragpt2-mega-detector-long | 685843487166af81b5cc47f33386f0f107d10d4c | 2021-03-11T21:46:39.000Z | [
"pytorch",
"electra",
"text-classification",
"ar",
"arxiv:2012.15520",
"transformers"
] | text-classification | false | aubmindlab | null | aubmindlab/aragpt2-mega-detector-long | 35 | null | transformers | 6,761 | ---
language: ar
widget:
- text: "وإذا كان هناك من لا يزال يعتقد أن لبنان هو سويسرا الشرق ، فهو مخطئ إلى حد بعيد . فلبنان ليس سويسرا ، ولا يمكن أن يكون كذلك . لقد عاش اللبنانيون في هذا البلد منذ ما يزيد عن ألف وخمسمئة عام ، أي منذ تأسيس الإمارة الشهابية التي أسسها الأمير فخر الدين المعني الثاني ( 1697 - 1742 )"
---
# AraGPT2 Detector
Machine generated detector model from the [AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper](https://arxiv.org/abs/2012.15520)
This model is trained on long text passages and achieves a 99.4% F1-score.
# How to use it:
```python
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor
processor = ArabertPreprocessor(model="aubmindlab/araelectra-base-discriminator")
pipe = pipeline("sentiment-analysis", model = "aubmindlab/aragpt2-mega-detector-long")
text = " "
text_prep = processor.preprocess(text)
result = pipe(text_prep)
# [{'label': 'machine-generated', 'score': 0.9977743625640869}]
```
# If you used this model please cite us as :
```
@misc{antoun2020aragpt2,
title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15520},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]> |
bigjoedata/rockbot | 3a0cacc4b115165b4943a87bc39901fc7a682478 | 2021-05-21T14:15:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bigjoedata | null | bigjoedata/rockbot | 35 | null | transformers | 6,762 |
# 🎸 🥁 Rockbot 🎤 🎧
A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock).
**Instructions:** Type in a fake song title, pick an artist, click "Generate".
Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable.
Oh, and generation is resource intense and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot.
Just have fun.
[Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed
[Github](https://github.com/bigjoedata/rockbot)
[GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot)
[DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic.
🎹 🪘 🎷 🎺 🪗 🪕 🎻
## Background
With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M token model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate)
### Full Tech Stack
- [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.)
- [Python](https://www.python.org/)
- [Streamlit](https://www.streamlit.io/)
- [GPT-2](https://openai.com/blog/better-language-models/)
- [AITextGen](https://github.com/minimaxir/aitextgen)
- [Pandas](https://pandas.pydata.org/)
- [LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/)
- [Google Colab](https://colab.research.google.com/) (GPU based training)
- [Knime](https://www.knime.com/) (data cleaning)
## How to Use The Model
Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation.
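As a rough alternative, the checkpoint can also be loaded with plain `transformers`. The sketch below is an assumption (it mirrors the prompt format described under "To Use" further down) rather than the project's reference code:

```python
from transformers import pipeline

# Hypothetical sketch: generate lyrics from a "Song Title\nBY\nArtist Name" prompt.
generator = pipeline("text-generation", model="bigjoedata/rockbot")

prompt = "Midnight Highway\nBY\nSome Band\n"  # made-up title and artist for illustration
print(generator(prompt, max_length=200, do_sample=True, top_p=0.95)[0]["generated_text"])
```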
### Training Parameters Used
ai.train("lyrics.txt",
line_by_line=False,
from_cache=False,
num_steps=10000,
generate_every=2000,
save_every=2000,
save_gdrive=False,
learning_rate=1e-3,
batch_size=3,
eos_token="<|endoftext|>",
#fp16=True
)
### To Use
Generate With Prompt (Use Title Case):
Song Name
BY
Artist Name
|
boronbrown48/wangchanberta-topic-classification | 50fb13fb42afc87958e535ac2c41f02298637676 | 2021-11-21T09:42:05.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | boronbrown48 | null | boronbrown48/wangchanberta-topic-classification | 35 | null | transformers | 6,763 | Entry not found |
dead69/GPT-small-yoda | 81005ed1731a2b6d0ea8e2fe168d5a9b89516e80 | 2022-01-09T11:24:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | dead69 | null | dead69/GPT-small-yoda | 35 | null | transformers | 6,764 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("dead69/GPT-small-yoda")
model = AutoModelWithLMHead.from_pretrained("dead69/GPT-small-yoda")

# Let's chat for 10 lines
for step in range(10):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("Master YODA: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
deepset/tinyroberta-6l-768d | d15e8976885cb67a2a661ebe54cae6366497a950 | 2022-03-15T17:31:30.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | deepset | null | deepset/tinyroberta-6l-768d | 35 | null | transformers | 6,765 | ---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---
# tinyroberta-6l-768d
## Overview
**Language model:** tinyroberta-6l-768d
**Language:** English
**Training data:** The PILE
**Code:**
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.2
teacher = "deepset/roberta-base"
```
## Distillation
This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack).
We have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d).
This model has not been distilled for any specific task. If you are interested in using distillation to improve its performance on a downstream task, you can take advantage of haystack's new [distillation functionality](https://haystack.deepset.ai/guides/model-distillation). You can also check out [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) for a model that is already distilled on an extractive QA downstream task.
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/tinyroberta-squad2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/tinyroberta-squad2"
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
educhav/Elijah-DialoGPT-small | eb2351d20be503f37b9486ec4f0f04e7957b1d0c | 2021-10-23T02:48:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | educhav | null | educhav/Elijah-DialoGPT-small | 35 | null | transformers | 6,766 | ---
tags:
- conversational
---
# Elijah Parker
- Made using DialoGPT (GPT2) algorithm in PyTorch |
fabriceyhc/bert-base-uncased-ag_news | c14e3b32fe1f757639b9751bdff3ea3c8c3b4a6b | 2021-09-21T00:54:07.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:ag_news",
"transformers",
"generated_from_trainer",
"sibyl",
"license:apache-2.0",
"model-index"
] | text-classification | false | fabriceyhc | null | fabriceyhc/bert-base-uncased-ag_news | 35 | null | transformers | 6,767 | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- ag_news
metrics:
- accuracy
model-index:
- name: bert-base-uncased-ag_news
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ag_news
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
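A minimal classification sketch (an assumption from the model type; depending on the saved config the labels may be returned as generic `LABEL_*` ids that map to the four AG News topics):

```python
from transformers import pipeline

# Hypothetical example: topic classification of a news sentence.
classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-ag_news")

print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```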
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 7425
- training_steps: 74250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5773 | 0.13 | 2000 | 0.3627 | 0.8875 |
| 0.3101 | 0.27 | 4000 | 0.2938 | 0.9208 |
| 0.3076 | 0.4 | 6000 | 0.3114 | 0.9092 |
| 0.3114 | 0.54 | 8000 | 0.4545 | 0.9008 |
| 0.3154 | 0.67 | 10000 | 0.3875 | 0.9083 |
| 0.3095 | 0.81 | 12000 | 0.3390 | 0.9142 |
| 0.2948 | 0.94 | 14000 | 0.3341 | 0.9133 |
| 0.2557 | 1.08 | 16000 | 0.4573 | 0.9092 |
| 0.258 | 1.21 | 18000 | 0.3356 | 0.9217 |
| 0.2455 | 1.35 | 20000 | 0.3348 | 0.9283 |
| 0.2361 | 1.48 | 22000 | 0.3218 | 0.93 |
| 0.254 | 1.62 | 24000 | 0.3814 | 0.9033 |
| 0.2528 | 1.75 | 26000 | 0.3628 | 0.9158 |
| 0.2282 | 1.89 | 28000 | 0.3302 | 0.9308 |
| 0.224 | 2.02 | 30000 | 0.3967 | 0.9225 |
| 0.174 | 2.15 | 32000 | 0.3669 | 0.9333 |
| 0.1848 | 2.29 | 34000 | 0.3435 | 0.9283 |
| 0.19 | 2.42 | 36000 | 0.3552 | 0.93 |
| 0.1865 | 2.56 | 38000 | 0.3996 | 0.9258 |
| 0.1877 | 2.69 | 40000 | 0.3749 | 0.9258 |
| 0.1951 | 2.83 | 42000 | 0.3963 | 0.9258 |
| 0.1702 | 2.96 | 44000 | 0.3655 | 0.9317 |
| 0.1488 | 3.1 | 46000 | 0.3942 | 0.9292 |
| 0.1231 | 3.23 | 48000 | 0.3998 | 0.9267 |
| 0.1319 | 3.37 | 50000 | 0.4292 | 0.9242 |
| 0.1334 | 3.5 | 52000 | 0.4904 | 0.9192 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
gagan3012/wav2vec2-xlsr-nepali | d1dc1c34a3f2387d00d4bfe351e940cb7c06fb80 | 2021-07-06T04:10:40.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ne",
"dataset:OpenSLR",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gagan3012 | null | gagan3012/wav2vec2-xlsr-nepali | 35 | 1 | transformers | 6,768 | ---
language: ne
datasets:
- OpenSLR
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-xlsr-nepali
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR ne
type: OpenSLR
args: ne
metrics:
- name: Test WER
type: wer
value: 05.97
---
# Wav2Vec2-Large-XLSR-53-Nepali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Nepali using the [Common Voice](https://huggingface.co/datasets/common_voice), and [OpenSLR ne](http://www.openslr.org/43/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
import pandas as pd
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
!wget https://www.openslr.org/resources/43/ne_np_female.zip
!unzip ne_np_female.zip
!ls ne_np_female
colnames=['path','sentence']
df = pd.read_csv('/content/ne_np_female/line_index.tsv',sep='\\t',header=None,names = colnames)
df['path'] = '/content/ne_np_female/wavs/'+df['path'] +'.wav'
train, test = train_test_split(df, test_size=0.1)
test.to_csv('/content/ne_np_female/line_index_test.csv')
test_dataset = load_dataset('csv', data_files='/content/ne_np_female/line_index_test.csv',split = 'train')
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-nepali")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-nepali")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
#### Result
Prediction: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ']
Reference: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ']
## Evaluation
The model can be evaluated as follows on the held-out Nepali test data.
```python
import torch
import torchaudio
import pandas as pd
from sklearn.model_selection import train_test_split
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
!wget https://www.openslr.org/resources/43/ne_np_female.zip
!unzip ne_np_female.zip
!ls ne_np_female
colnames=['path','sentence']
df = pd.read_csv('/content/ne_np_female/line_index.tsv',sep='\\t',header=None,names = colnames)
df['path'] = '/content/ne_np_female/wavs/'+df['path'] +'.wav'
train, test = train_test_split(df, test_size=0.1)
test.to_csv('/content/ne_np_female/line_index_test.csv')
test_dataset = load_dataset('csv', data_files='/content/ne_np_female/line_index_test.csv',split = 'train')
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-nepali")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-nepali")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 05.97 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1AHnYWXb5cwfMEa2o4O3TSdasAR3iVBFP?usp=sharing) |
huggingtweets/iwriteok | 62a6c71882641bd3538991f4767648fd3f9cc374 | 2021-05-22T08:46:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/iwriteok | 35 | null | transformers | 6,769 | ---
language: en
thumbnail: https://www.huggingtweets.com/iwriteok/1616696251667/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/598663964340301824/im3Wzn-o_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Robert Evans (The Only Robert Evans) 🤖 AI Bot </div>
<div style="font-size: 15px">@iwriteok bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@iwriteok's tweets](https://twitter.com/iwriteok).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 1153 |
| Short tweets | 194 |
| Tweets kept | 1857 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2di5nps9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iwriteok's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/127m2six) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/127m2six/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iwriteok')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
izumi-lab/electra-base-japanese-generator | c51d54041861147fc4729dcfb4127e10dd678ec0 | 2022-03-19T09:38:27.000Z | [
"pytorch",
"electra",
"fill-mask",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | izumi-lab | null | izumi-lab/electra-base-japanese-generator | 35 | null | transformers | 6,770 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# ELECTRA base Japanese generator
This is a [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as that of the ELECTRA base generator in the [original ELECTRA implementation](https://github.com/google-research/electra); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA base in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except size; 512 tokens per instance, 256 instances per batch, and 766k training steps.
The size of the generator is 1/3 of the size of the discriminator.
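A minimal fill-mask sketch (assuming the generator is used directly for masked-token prediction; the Japanese tokenizer additionally requires `fugashi` and an IPA dictionary package):

```python
from transformers import pipeline

# Hypothetical example: masked language modeling with the generator checkpoint.
fill_mask = pipeline("fill-mask", model="izumi-lab/electra-base-japanese-generator")

for prediction in fill_mask("東京大学で[MASK]の研究をしています。"):
    print(prediction["token_str"], prediction["score"])
```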
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
jpwahle/t5-word-sense-disambiguation | 4eeae42c057b80c49f4a19a6004a9ac7c7416007 | 2022-06-14T08:57:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ISO 639-1 code for your language, or `multilingual`",
"dataset:array of dataset identifiers",
"arxiv:1910.10683",
"transformers",
"array",
"of",
"tags",
"autotrain_compatible"
] | text2text-generation | false | jpwahle | null | jpwahle/t5-word-sense-disambiguation | 35 | 5 | transformers | 6,771 | ---
language: en
widget:
- text: "question: which description describes the word \" java \" best in the following\
\ context? descriptions: [ \" A drink consisting of an infusion of ground coffee\
\ beans \" , \" a platform-independent programming lanugage \" , or \" an island\
\ in Indonesia to the south of Borneo \" ] context: I like to drink ' java '\
\ in the morning ."
---
# T5-large for Word Sense Disambiguation
If you are using this model in your research work, please cite
```bib
@article{wahle2021incorporating,
title={Incorporating Word Sense Disambiguation in Neural Language Models},
author={Wahle, Jan Philip and Ruas, Terry and Meuschke, Norman and Gipp, Bela},
journal={arXiv preprint arXiv:2106.07967},
year={2021}
}
```
This is the checkpoint for T5-large after being trained on the [SemCor 3.0 dataset](http://lcl.uniroma1.it/wsdeval/).
Additional information about this model:
* [The t5-large model page](https://huggingface.co/t5-large)
* [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
* [Official implementation by Google](https://github.com/google-research/text-to-text-transfer-transformer)
The model can be loaded to perform a few-shot classification like so:
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("jpelhaw/t5-word-sense-disambiguation")
tokenizer = AutoTokenizer.from_pretrained("jpelhaw/t5-word-sense-disambiguation")
input = '''question: which description describes the word " java " best in the following context? \
descriptions:[ " A drink consisting of an infusion of ground coffee beans " ,
" a platform-independent programming lanugage "
, or " an island in Indonesia to the south of Borneo " ]
context: I like to drink " java " in the morning .'''
example = tokenizer(input, return_tensors="pt")

output_ids = model.generate(input_ids=example["input_ids"],
                            attention_mask=example["attention_mask"],
                            max_length=135)
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# "a drink consisting of an infusion of ground coffee beans"
```
|
mpariente/ConvTasNet_WHAM_sepclean | ba1593b6f7509fce313910deb6bb4781915a8b26 | 2021-11-04T15:29:29.000Z | [
"pytorch",
"dataset:wham",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | mpariente | null | mpariente/ConvTasNet_WHAM_sepclean | 35 | null | asteroid | 6,772 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
---
## Asteroid model `mpariente/ConvTasNet_WHAM_sepclean`
Imported from [Zenodo](https://zenodo.org/record/3862942)
### Description:
This model was trained by Manuel Pariente
using the wham/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the WHAM! dataset.
### Training config:
```yaml
data:
n_src: 2
mode: min
nondefault_nsrc: None
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/tr/
valid_dir: data/wav8k/min/cv/
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/wham
gpus: -1
help: None
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 2
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
### Results:
```yaml
si_sdr: 16.21326632846293
si_sdr_imp: 16.21441705664987
sdr: 16.615180021738933
sdr_imp: 16.464137807433435
sir: 26.860503975131923
sir_imp: 26.709461760826414
sar: 17.18312813480803
sar_imp: -131.99332048277296
stoi: 0.9619940905157323
stoi_imp: 0.2239480672473015
```
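### Usage sketch:
A minimal separation example (an assumption based on Asteroid's Hub integration; the input path is a placeholder 8 kHz two-speaker mixture):

```python
from asteroid.models import BaseModel

# Hypothetical example: load the checkpoint from the Hub and separate a mixture.
# Asteroid writes the estimated sources next to the input file.
model = BaseModel.from_pretrained("mpariente/ConvTasNet_WHAM_sepclean")

model.separate("mixture_8k.wav")  # replace with a real 8 kHz mono mixture
```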
### License notice:
This work "ConvTasNet_WHAM!_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"ConvTasNet_WHAM!_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente. |
mrm8488/bert-base-german-dbmdz-cased-finetuned-pawsx-de | 1121c3e02dacf619f9c7045e8cad012c3c9a5316 | 2021-05-20T00:19:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"de",
"dataset:xtreme",
"transformers",
"nli"
] | text-classification | false | mrm8488 | null | mrm8488/bert-base-german-dbmdz-cased-finetuned-pawsx-de | 35 | null | transformers | 6,773 | ---
language: de
datasets:
- xtreme
tags:
- nli
widget:
- text: "Winarsky ist Mitglied des IEEE, Phi Beta Kappa, des ACM und des Sigma Xi. Winarsky ist Mitglied des ACM, des IEEE, der Phi Beta Kappa und der Sigma Xi."
---
# bert-base-german-dbmdz-cased fine-tuned on PAWS-X-de for Paraphrase Identification (NLI)
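A minimal sketch of sentence-pair inference (an assumption from the model type; the pair comes from the widget example and index 1 is assumed to be the paraphrase class):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical example: score whether two German sentences are paraphrases.
model_id = "mrm8488/bert-base-german-dbmdz-cased-finetuned-pawsx-de"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "Winarsky ist Mitglied des IEEE, Phi Beta Kappa, des ACM und des Sigma Xi."
s2 = "Winarsky ist Mitglied des ACM, des IEEE, der Phi Beta Kappa und der Sigma Xi."

inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```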
|
mrm8488/deberta-v3-small-finetuned-mnli | 211d1d22137f618f14d8fbb6c50d2772084323a9 | 2021-12-07T17:45:59.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"en",
"dataset:glue",
"arxiv:2006.03654",
"arxiv:2111.09543",
"transformers",
"generated_from_trainer",
"deberta-v3",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/deberta-v3-small-finetuned-mnli | 35 | 3 | transformers | 6,774 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
- deberta-v3
datasets:
- glue
metrics:
- accuracy
model-index:
- name: ds_results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.874593165174939
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 (small) fine-tuned on MNLI
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4985
- Accuracy: 0.8746
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD(Replaced Token Detection) objective introduced by ELECTRA for pre-training, as well as some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves the model performance in downstream tasks. You can find a simple introduction about the model from the appendix A11 in our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up.
The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter number is 143M since we use a vocabulary containing 128K tokens which introduce 98M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2.
## Intended uses & limitations
More information needed
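A minimal NLI sketch (an assumption; the premise/hypothesis pair is a placeholder and the label names are read from the saved config rather than assumed):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical example: classify the relation between a premise and a hypothesis.
model_id = "mrm8488/deberta-v3-small-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred_id = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```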
## Training and evaluation data
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7773 | 0.04 | 1000 | 0.5241 | 0.7984 |
| 0.546 | 0.08 | 2000 | 0.4629 | 0.8194 |
| 0.5032 | 0.12 | 3000 | 0.4704 | 0.8274 |
| 0.4711 | 0.16 | 4000 | 0.4383 | 0.8355 |
| 0.473 | 0.2 | 5000 | 0.4652 | 0.8305 |
| 0.4619 | 0.24 | 6000 | 0.4234 | 0.8386 |
| 0.4542 | 0.29 | 7000 | 0.4825 | 0.8349 |
| 0.4468 | 0.33 | 8000 | 0.3985 | 0.8513 |
| 0.4288 | 0.37 | 9000 | 0.4084 | 0.8493 |
| 0.4354 | 0.41 | 10000 | 0.3850 | 0.8533 |
| 0.423 | 0.45 | 11000 | 0.3855 | 0.8509 |
| 0.4167 | 0.49 | 12000 | 0.4122 | 0.8513 |
| 0.4129 | 0.53 | 13000 | 0.4009 | 0.8550 |
| 0.4135 | 0.57 | 14000 | 0.4136 | 0.8544 |
| 0.4074 | 0.61 | 15000 | 0.3869 | 0.8595 |
| 0.415 | 0.65 | 16000 | 0.3911 | 0.8517 |
| 0.4095 | 0.69 | 17000 | 0.3880 | 0.8593 |
| 0.4001 | 0.73 | 18000 | 0.3907 | 0.8587 |
| 0.4069 | 0.77 | 19000 | 0.3686 | 0.8630 |
| 0.3927 | 0.81 | 20000 | 0.4008 | 0.8593 |
| 0.3958 | 0.86 | 21000 | 0.3716 | 0.8639 |
| 0.4016 | 0.9 | 22000 | 0.3594 | 0.8679 |
| 0.3945 | 0.94 | 23000 | 0.3595 | 0.8679 |
| 0.3932 | 0.98 | 24000 | 0.3577 | 0.8645 |
| 0.345 | 1.02 | 25000 | 0.4080 | 0.8699 |
| 0.2885 | 1.06 | 26000 | 0.3919 | 0.8674 |
| 0.2858 | 1.1 | 27000 | 0.4346 | 0.8651 |
| 0.2872 | 1.14 | 28000 | 0.4105 | 0.8674 |
| 0.3002 | 1.18 | 29000 | 0.4133 | 0.8708 |
| 0.2954 | 1.22 | 30000 | 0.4062 | 0.8667 |
| 0.2912 | 1.26 | 31000 | 0.3972 | 0.8708 |
| 0.2958 | 1.3 | 32000 | 0.3713 | 0.8732 |
| 0.293 | 1.34 | 33000 | 0.3717 | 0.8715 |
| 0.3001 | 1.39 | 34000 | 0.3826 | 0.8716 |
| 0.2864 | 1.43 | 35000 | 0.4155 | 0.8694 |
| 0.2827 | 1.47 | 36000 | 0.4224 | 0.8666 |
| 0.2836 | 1.51 | 37000 | 0.3832 | 0.8744 |
| 0.2844 | 1.55 | 38000 | 0.4179 | 0.8699 |
| 0.2866 | 1.59 | 39000 | 0.3969 | 0.8681 |
| 0.2883 | 1.63 | 40000 | 0.4000 | 0.8683 |
| 0.2832 | 1.67 | 41000 | 0.3853 | 0.8688 |
| 0.2876 | 1.71 | 42000 | 0.3924 | 0.8677 |
| 0.2855 | 1.75 | 43000 | 0.4177 | 0.8719 |
| 0.2845 | 1.79 | 44000 | 0.3877 | 0.8724 |
| 0.2882 | 1.83 | 45000 | 0.3961 | 0.8713 |
| 0.2773 | 1.87 | 46000 | 0.3791 | 0.8740 |
| 0.2767 | 1.91 | 47000 | 0.3877 | 0.8779 |
| 0.2772 | 1.96 | 48000 | 0.4022 | 0.8690 |
| 0.2816 | 2.0 | 49000 | 0.3837 | 0.8732 |
| 0.2068 | 2.04 | 50000 | 0.4644 | 0.8720 |
| 0.1914 | 2.08 | 51000 | 0.4919 | 0.8744 |
| 0.2 | 2.12 | 52000 | 0.4870 | 0.8702 |
| 0.1904 | 2.16 | 53000 | 0.5038 | 0.8737 |
| 0.1915 | 2.2 | 54000 | 0.5232 | 0.8711 |
| 0.1956 | 2.24 | 55000 | 0.5192 | 0.8747 |
| 0.1911 | 2.28 | 56000 | 0.5215 | 0.8761 |
| 0.2053 | 2.32 | 57000 | 0.4604 | 0.8738 |
| 0.2008 | 2.36 | 58000 | 0.5162 | 0.8715 |
| 0.1971 | 2.4 | 59000 | 0.4886 | 0.8754 |
| 0.192 | 2.44 | 60000 | 0.4921 | 0.8725 |
| 0.1937 | 2.49 | 61000 | 0.4917 | 0.8763 |
| 0.1931 | 2.53 | 62000 | 0.4789 | 0.8778 |
| 0.1964 | 2.57 | 63000 | 0.4997 | 0.8721 |
| 0.2008 | 2.61 | 64000 | 0.4748 | 0.8756 |
| 0.1962 | 2.65 | 65000 | 0.4840 | 0.8764 |
| 0.2029 | 2.69 | 66000 | 0.4889 | 0.8767 |
| 0.1927 | 2.73 | 67000 | 0.4820 | 0.8758 |
| 0.1926 | 2.77 | 68000 | 0.4857 | 0.8762 |
| 0.1919 | 2.81 | 69000 | 0.4836 | 0.8749 |
| 0.1911 | 2.85 | 70000 | 0.4859 | 0.8742 |
| 0.1897 | 2.89 | 71000 | 0.4853 | 0.8766 |
| 0.186 | 2.93 | 72000 | 0.4946 | 0.8768 |
| 0.2011 | 2.97 | 73000 | 0.4851 | 0.8767 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
mrm8488/deberta-v3-small-finetuned-mrpc | 8754f1d3df9be49faa39ed38235c8f0e349a667b | 2021-11-21T18:52:09.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"deberta-v3",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/deberta-v3-small-finetuned-mrpc | 35 | 1 | transformers | 6,775 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
- deberta-v3
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-small
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8921568627450981
- name: F1
type: f1
value: 0.9233449477351917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 (small) fine-tuned on MRPC
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- Accuracy: 0.8922
- F1: 0.9233
- Combined Score: 0.9078
## Model description
More information needed
## Intended uses & limitations
More information needed
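A minimal usage sketch for paraphrase classification on sentence pairs (the example sentences are illustrative only):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "mrm8488/deberta-v3-small-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "The company said its revenue rose 5 percent last quarter."
s2 = "Revenue at the company increased by 5% in the last quarter."

# MRPC is a binary task: the pair is either a paraphrase or not
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```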
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| No log | 1.0 | 230 | 0.2787 | 0.8922 | 0.9233 | 0.9078 |
| No log | 2.0 | 460 | 0.3651 | 0.875 | 0.9137 | 0.8944 |
| No log | 3.0 | 690 | 0.5238 | 0.8799 | 0.9179 | 0.8989 |
| No log | 4.0 | 920 | 0.4712 | 0.8946 | 0.9222 | 0.9084 |
| 0.2147 | 5.0 | 1150 | 0.5704 | 0.8946 | 0.9262 | 0.9104 |
| 0.2147 | 6.0 | 1380 | 0.5697 | 0.8995 | 0.9284 | 0.9140 |
| 0.2147 | 7.0 | 1610 | 0.6651 | 0.8922 | 0.9214 | 0.9068 |
| 0.2147 | 8.0 | 1840 | 0.6726 | 0.8946 | 0.9239 | 0.9093 |
| 0.0183 | 9.0 | 2070 | 0.7250 | 0.8848 | 0.9177 | 0.9012 |
| 0.0183 | 10.0 | 2300 | 0.7093 | 0.8922 | 0.9223 | 0.9072 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
mrm8488/longformer-base-4096-spanish-finetuned-squad | f803616035603fc04dbcb1c7783217cb4932ae3f | 2022-01-11T20:39:06.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"transformers",
"Long documents",
"LongFormer",
"QA",
"Q&A",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/longformer-base-4096-spanish-finetuned-squad | 35 | 1 | transformers | 6,776 | ---
language: es
tags:
- Long documents
- LongFormer
- QA
- Q&A
datasets:
- BSC-TeMU/SQAC
---
# Spanish Longformer fine-tuned on **SQAC** for Spanish **QA** 📖❓
[longformer-base-4096-spanish](https://huggingface.co/mrm8488/longformer-base-4096-spanish) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) for **Q&A** downstream task.
## Details of the model 🧠
[longformer-base-4096-spanish](https://huggingface.co/mrm8488/longformer-base-4096-spanish) is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to **4,096**!
## Details of the dataset 📚
This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.
The sources of the contexts are:
* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix of different newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
This dataset can be used to build extractive QA systems.
## Evaluation Metrics 📈
TBA
## Fast Usage with HF `pipeline` 🧪
```py
from transformers import pipeline
qa_pipe = pipeline("question-answering", model='mrm8488/longformer-base-4096-spanish-finetuned-squad')
context = '''
Hace aproximadamente un año, Hugging Face, una startup de procesamiento de lenguaje natural con sede en Brooklyn, Nueva York, lanzó BigScience, un proyecto internacional con más de 900 investigadores que está diseñado para comprender mejor y mejorar la calidad de los grandes modelos de lenguaje natural. Los modelos de lenguaje grande (LLM), algoritmos que pueden reconocer, predecir y generar lenguaje sobre la base de conjuntos de datos basados en texto, han captado la atención de empresarios y entusiastas de la tecnología por igual. Pero el costoso hardware requerido para desarrollar LLM los ha mantenido en gran medida fuera del alcance de los investigadores sin los recursos de compañías como OpenAI y DeepMind detrás de ellos.
Inspirándose en organizaciones como la Organización Europea para la Investigación Nuclear (también conocida como CERN) y el Gran Colisionador de Hadrones, el objetivo de BigScience es crear LLM y grandes conjuntos de datos de texto que eventualmente serán de código abierto para la IA más amplia. comunidad. Los modelos serán entrenados en la supercomputadora Jean Zay ubicada cerca de París, Francia, que se encuentra entre las máquinas más poderosas del mundo.
'''
question = "¿Cuál es el objetivo de BigScience?"
qa_pipe({'context':context, 'question': question})
# It outputs:
```
```js
{'answer': 'comprender mejor y mejorar la calidad de los grandes modelos de lenguaje natural.',
'end': 305,
'score': 0.9999799728393555,
'start': 224}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
pertschuk/albert-large-intent-v2 | ddec85a16827395bdfea93cda1a29cfd2305f47f | 2020-04-24T16:05:07.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | pertschuk | null | pertschuk/albert-large-intent-v2 | 35 | null | transformers | 6,777 | Entry not found |
pertschuk/albert-large-intent-v3 | 56738705852fd3579c035cc5587559e80fe1c371 | 2020-04-24T16:05:09.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | pertschuk | null | pertschuk/albert-large-intent-v3 | 35 | null | transformers | 6,778 | Entry not found |
pucpr/clinicalnerpt-sign | ca6c0b0416c190de7933754c1c1faf8a689798c8 | 2021-10-13T09:31:19.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-sign | 35 | 4 | transformers | 6,779 | ---
language: "pt"
widget:
- text: "Há 15 anos relata dor lombar com irradiação para coxa direita."
- text: "Paciente segue internado, sem presença de edema."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Sign
The Sign NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model.
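A minimal usage sketch, assuming the standard `transformers` token-classification pipeline; the example sentence is taken from the widget above:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="pucpr/clinicalnerpt-sign",
    aggregation_strategy="simple",  # groups IOB2 tags into entity spans
)
print(ner("Paciente segue internado, sem presença de edema."))
```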
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
razent/spbert-mlm-zero | d8dd96929c2f433abeb0561960bff5643231810d | 2022-03-15T03:24:45.000Z | [
"pytorch",
"tf",
"jax",
"code",
"arxiv:2106.09997",
"transformers",
"question-answering",
"knowledge-graph"
] | question-answering | false | razent | null | razent/spbert-mlm-zero | 35 | null | transformers | 6,780 | ---
language:
- code
tags:
- question-answering
- knowledge-graph
---
# SPBERT MLM (Scratch)
## Introduction
Paper: [SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs](https://arxiv.org/abs/2106.09997)
Authors: _Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen_
## How to use
For more details, do check out [our Github repo](https://github.com/heraclex12/NLP2SPARQL).
Here is an example in Pytorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-zero')
model = AutoModel.from_pretrained("razent/spbert-mlm-zero")
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
or Tensorflow
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-zero')
model = TFAutoModel.from_pretrained("razent/spbert-mlm-zero")
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Citation
```
@misc{tran2021spbert,
title={SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs},
author={Hieu Tran and Long Phan and James Anibal and Binh T. Nguyen and Truong-Son Nguyen},
year={2021},
eprint={2106.09997},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier | 7e514c1d56f117617dbdbc37a0b26ca20a84878a | 2021-09-10T05:35:44.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"buy-intent",
"sell-intent",
"consumer-intent"
] | text-classification | false | recobo | null | recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier | 35 | null | transformers | 6,781 | ---
language: "en"
tags:
- buy-intent
- sell-intent
- consumer-intent
widget:
- text: "Flutoprazepam (Restas) is a drug which is a benzodiazepine. It was patented in Japan by Sumitomo."
---
# Chemical vs Pharmaceutical Domain Document Classifier
Chemical domain language model finetuned on 13K Chemical, and 14K Pharma Wikipedia articles broken into paragraphs.
| Train Loss | Validation Acc. | Test Acc.|
| ------------- |:-------------: | -----: |
| 0.17 | 0.928 | 0.927 |
# Dataset
Dataset with splits can be found @ [https://www.kaggle.com/shahrukhkhan/pharma-vs-chemicals-domain-classification](https://www.kaggle.com/shahrukhkhan/pharma-vs-chemicals-domain-classification)
# Label Mappings
LABEL_0 => **"PHARMACEUTICAL"** <br/>
LABEL_1 => **"CHEMICAL"**
## Usage in Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier")
model = AutoModelForSequenceClassification.from_pretrained("recobo/chemical-bert-uncased-pharmaceutical-chemical-classifier")
``` |
wicharnkeisei/thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad | d1c2523ab1cf1888a5a11b375355c1d23dd5b265 | 2021-11-07T08:31:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"th",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | wicharnkeisei | null | wicharnkeisei/thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad | 35 | null | transformers | 6,782 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
language: th
model-index:
- name: thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad
results: []
widget:
- text: "สราวุธ มาตรทอง เข้าสู่วงการบันเทิงเมื่อปีอะไร"
context: "สราวุธ มาตรทอง (ชื่อเล่น: อ้น เกิดเมื่อวันที่ 2 ตุลาคม พ.ศ. 2519) เป็นนักแสดงชาวไทย จบการศึกษาจากมหาวิทยาลัยราชภัฏพระนค เข้าสู่วงการบันเทิงเมื่อปี พ.ศ. 2538 จากการ ชักชวนของ กมล ภู่วัฒนวนิชย์ แห่งบริษัทบรอดคาซท์ ไทยเทเลวิชั่น มีผลงานแสดงชิ้นแรกจาก ใส่ไข่ อะไรเอ่ย, 6/16 ร้ายบริสุทธิ์ และมีผลงานสร้างชื่อคือละครเรื่อง ฉลุย และ น้ำใสใจจริง นอกจากนี้ยังได้ทำอัลบั้มประกอบละคร ฉลุย คู่กับ ทีน สราวุฒิ พุ่มทอง มีผลงานภาพยนตร์เรื่อง ความรักครั้งสุดท้าย (2546) เคยได้รับการเสนอชื่อเข้าชิงรางวัลภาพยนตร์ไทย ชมรมวิจารณ์บันเทิง ครั้งที่ 12 สาขานักแสดงสมทบยอดเยี่ยมจากภาพยนตร์เรื่องนี้ และยังมีละครซิตคอมเรื่อง เทวดาสาธุ นอกจากนี้ยังเคยเป็นดีเจให้กับ สถานีวิทยุ เรดิโอโหวต แซตเทิลไลท์ 93.5 MHz และยังเป็นพิธกร รายการเวเอฟเวอร์ ออกอากาศทางช่อง 3 ในวันเสาร์ เวลา 07.55-08.20 น. ในเดือนตุลาคม พ.ศ. 2551 เจ้าตัวได้ยอมรับว่าคลิปหลุดทางอินเทอร์เน็ต ที่มีเพศสัมพันธ์กับหญิงสาวเป็นเจ้าตัวจริง คนที่เอาไปลงน่าจะเป็นคนที่พบโทรศัพท์ของตนเอง"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad
This model is a fine-tuned version of [mrm8488/bert-multi-cased-finetuned-xquadv1](https://huggingface.co/mrm8488/bert-multi-cased-finetuned-xquadv1) on a Thai dataset from [iApp Technology Co., Ltd.](https://github.com/iapp-technology/iapp-wiki-qa-dataset).
## Intended uses & limitations
This model is intended for Thai question answering tasks.
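A minimal usage sketch, assuming the standard `transformers` question-answering pipeline; the question comes from the widget above, and the context is a placeholder for the full Thai passage:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="wicharnkeisei/thai-bert-multi-cased-finetuned-xquadv1-finetuned-squad",
)
question = "สราวุธ มาตรทอง เข้าสู่วงการบันเทิงเมื่อปีอะไร"  # question from the widget above
context = "..."  # placeholder: pass the full Thai passage that contains the answer
print(qa(question=question, context=context))
```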
## Training and evaluation data
Trained and evaluated on the [iApp Technology Co., Ltd.](https://github.com/iapp-technology/iapp-wiki-qa-dataset) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
## Performance
Evaluated on the SQuAD 1.0 test dataset
```
"exact": 57.39972337482711
"f1": 68.10794016188211
"total": 723
```
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
wrmurray/roberta-base-finetuned-imdb | 7aa8ca3fae56a1860d8b4c6bf727b91370821ad5 | 2022-02-10T23:09:54.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | wrmurray | null | wrmurray/roberta-base-finetuned-imdb | 35 | null | transformers | 6,783 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9552
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-imdb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1783
- Accuracy: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
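A minimal sentiment-classification sketch, assuming the standard `transformers` pipeline API (the review text is illustrative only):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wrmurray/roberta-base-finetuned-imdb",
)
print(classifier("This movie was an absolute delight from start to finish."))
```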
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1904 | 1.0 | 1563 | 0.1423 | 0.9517 |
| 0.1187 | 2.0 | 3126 | 0.1783 | 0.9552 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sjyhne/audio_emotion | 213c5a7a7a8bfb49641571ec5f921451313c51fc | 2022-03-02T06:50:37.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | sjyhne | null | sjyhne/audio_emotion | 35 | null | transformers | 6,784 | Entry not found |
aymanm419/araElectra-SQUAD-ARCD | 6b8d83c5a443ada88cab6c1ff82812c98eecd25d | 2022-03-02T21:57:26.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aymanm419 | null | aymanm419/araElectra-SQUAD-ARCD | 35 | null | transformers | 6,785 | Entry not found |
IIC/mt5-base-lfqa-es | c9be3e883d89c1321700b3136c1cde63e8309eca | 2022-04-04T02:55:11.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"es",
"dataset:IIC/lfqa_es",
"transformers",
"seq2seq",
"abstractive question answering",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | IIC | null | IIC/mt5-base-lfqa-es | 35 | 4 | transformers | 6,786 | ---
language:
- es
tags:
# - summarization # Example: audio
- seq2seq # Example: automatic-speech-recognition
- abstractive question answering
datasets:
- IIC/lfqa_es
metrics:
- rouge2
- rouge1
- rougel
- rougelsum
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: mt5-base-lfqa-es
results:
- task:
type: question answering # Required. Example: automatic-speech-recognition
name: abstractive question answering # Optional. Example: Speech Recognition
dataset:
type: IIC/lfqa_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: IIC/lfqa_es # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: rouge1 # Required. Example: wer
value: 10.2907 # Required. Example: 20.90
name: rouge1 # Optional. Example: Test WER
- type: rouge2
value: 1.7251
name: rouge2
- type: rougeL
value: 8.9193
name: rougeL
- type: rougeLsum
value: 7.9875
name: rougeLsum
---
This model is a fine-tuned version of [MT5-base](https://huggingface.co/google/mt5-base), a multilingual text-to-text encoder-decoder transformer. It is trained on [lfqa-spanish](https://huggingface.co/datasets/IIC/lfqa_spanish), an automatically translated dataset, originally created in English in [this repository](https://huggingface.co/vblagoje/bart_lfqa). For more details about the dataset, check its model card.
We used linear decay, and the full hyperparameters for this model were:
```json
{
"learning_rate": 2e-4,
"num_train_epochs": 3,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"adam_epsilon": 1e-8,
"total_train_batch_size": 64,
"warmup_ratio": 0.06,
}
```
This model is trained to provide long-form answers to open-domain questions, given context paragraphs that can be used to answer them; the main task it performs is therefore abstractive question answering (a usage sketch follows the results below).
The results it obtains on the validation set of this dataset (it does not have a test set), with num_beams = 8 and maximum target sequence length = 360, are:
```json
{"rouge1": 10.2907, "rouge2": 1.7251, "rougeL": 8.9193, "rougeLsum": 7.9875, "gen_len": 296.258}
```
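A hedged usage sketch: the exact input format is not documented here, so the "question: ... context: ..." concatenation below is an assumption borrowed from the English `bart_lfqa` setup, while the beam size and maximum length follow the evaluation settings above. The Spanish question is illustrative only.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "IIC/mt5-base-lfqa-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "¿Por qué el cielo es azul?"  # illustrative question
context = "..."  # placeholder: paste the supporting context paragraphs here

# Assumed input format — verify against the dataset card
prompt = f"question: {question} context: {context}"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, num_beams=8, max_length=360)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```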
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
princeton-nlp/CoFi-MNLI-s95 | 26de87b07d575d55f320278e26d715a121bb1c1f | 2022-05-01T01:20:45.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
] | text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-MNLI-s95 | 35 | null | transformers | 6,787 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
facebook/wav2vec2-conformer-rope-large | bdc607e878312a37df94057b527c3db65fe03445 | 2022-06-15T08:12:09.000Z | [
"pytorch",
"wav2vec2-conformer",
"pretraining",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"transformers",
"speech",
"license:apache-2.0"
] | null | false | facebook | null | facebook/wav2vec2-conformer-rope-large | 35 | 1 | transformers | 6,788 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Conformer-Large with Rotary Position Embeddings
Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of LibriSpeech 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
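Until it is fine-tuned, the pretrained checkpoint can still be used to extract contextual speech representations. A minimal sketch, assuming `torch`, `datasets`, and a 16 kHz mono waveform (the dummy LibriSpeech sample is only for illustration):

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel

model_id = "facebook/wav2vec2-conformer-rope-large"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ConformerModel.from_pretrained(model_id)

# Any 16 kHz mono waveform works; here we use a small dummy LibriSpeech split
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]

inputs = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```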
Intel/camembert-base-mrpc | 39ba8cbf0198b4bef5c69c6ae716f31ed8b9f600 | 2022-04-21T02:44:02.000Z | [
"pytorch",
"camembert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/camembert-base-mrpc | 35 | null | transformers | 6,789 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: camembert-base-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8504901960784313
- name: F1
type: f1
value: 0.8927943760984183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-mrpc
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4286
- Accuracy: 0.8505
- F1: 0.8928
- Combined Score: 0.8716
## Model description
More information needed
## Intended uses & limitations
More information needed
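A minimal usage sketch for MRPC-style sentence-pair classification (the example sentences are illustrative only):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "Intel/camembert-base-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "The stock rose sharply after the announcement."
s2 = "Shares jumped following the news."

# MRPC is a binary task: the pair is either a paraphrase or not
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```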
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ai4bharat/MultiIndicSentenceSummarizationSS | 9700f9cd1a4da32c7bcb3c3abadf3ce7d72aee3f | 2022-04-30T10:35:01.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicSentenceSummarization",
"arxiv:2203.05437",
"transformers",
"sentence-summarization",
"multilingual",
"nlp",
"indicnlp",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicSentenceSummarizationSS | 35 | null | transformers | 6,790 | ---
tags:
- sentence-summarization
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicSentenceSummarization
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
widget:
- text: "जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। <s> <2hi>"
---
# MultiIndicSentenceSummarizationSS
This repository contains the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint finetuned on the 11 languages of [IndicSentenceSummarization](https://huggingface.co/datasets/ai4bharat/IndicSentenceSummarization) dataset. For finetuning details,
see the [paper](https://arxiv.org/abs/2203.05437).
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for decoding. </li>
<li> Trained on large Indic language corpora (5.53 million sentences). </li>
<li> Unlike <a href="https://huggingface.co/ai4bharat/MultiIndicSentenceSummarization">MultiIndicSentenceSummarization</a> each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
</ul>
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3, num_beams=5, length_penalty=0.8, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # अनंतनाग में सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादी ढेर
```
## Benchmarks
Scores on the `IndicSentenceSummarization` test sets are as follows:
Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
as | 63.56 / 49.90 / 62.57
bn | 52.52 / 36.15 / 50.60
gu | 47.69 / 29.77 / 45.61
hi | 50.43 / 28.13 / 45.15
kn | 77.06 / 69.36 / 76.33
ml | 65.00 / 51.99 / 63.76
mr | 47.05 / 25.97 / 45.52
or | 50.96 / 30.32 / 49.23
pa | 54.95 / 36.26 / 51.26
ta | 58.52 / 38.36 / 56.49
te | 53.75 / 35.17 / 52.66
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
lightonai/RITA_m | 7ac0a0ec85dbcfa8b8bb7352a48fe638c0d191d8 | 2022-05-19T08:23:24.000Z | [
"pytorch",
"rita",
"text-generation",
"protein",
"dataset:uniref-100",
"arxiv:2205.05789",
"transformers"
] | text-generation | false | lightonai | null | lightonai/RITA_m | 35 | null | transformers | 6,791 | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-M
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | --- |
[Small](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[**Medium**](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_m", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_m")
```
for generation we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```
@article{hesslow2022rita,
  title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
  author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
  journal={arXiv preprint arXiv:2205.05789},
  year={2022}
}
``` |
tsdocode/phobert-finetune-hatespeech | ba30492166bac4a2048bc1225a9ab9fa1bb55291 | 2022-05-07T18:03:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"vi",
"transformers",
"classification"
] | text-classification | false | tsdocode | null | tsdocode/phobert-finetune-hatespeech | 35 | null | transformers | 6,792 | ---
language:
- vi
tags:
- classification
widget:
- text: "Xấu vcl"
example_title: "Công kích"
- text: "Đồ ngu"
example_title: "Thù ghét"
- text: "Xin chào chúc một ngày tốt lành"
example_title: "Normal"
---
## [PhoBert](https://huggingface.co/vinai/phobert-base/tree/main) finetuned version for hate speech detection
## Dataset
- [**VLSP2019**](https://vlsp.org.vn/vlsp2019/eval/hsd): Hate Speech Detection on Social Networks dataset
- [**ViHSD**](https://github.com/sonlam1102/vihsd): Vietnamese Hate Speech Detection dataset
## Class name
- LABEL_0 : **Normal**
- LABEL_1 : **OFFENSIVE**
- LABEL_2 : **HATE**
## Usage example with **TextClassificationPipeline**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
model = AutoModelForSequenceClassification.from_pretrained("tsdocode/phobert-finetune-hatespeech", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("tsdocode/phobert-finetune-hatespeech")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
# outputs a list of dicts with one score per class, e.g. [[{'label': 'LABEL_0', 'score': ...}, {'label': 'LABEL_1', 'score': ...}, {'label': 'LABEL_2', 'score': ...}]]
pipe("đồ ngu")
``` |
abid/indonesia-bioner | f1b0e4892a363e97d92f288c47fcb8ee8030f9c5 | 2022-07-11T06:41:12.000Z | [
"pytorch",
"id",
"en",
"flair",
"token-classification",
"sequence-tagger-model",
"license:bsd-3-clause"
] | token-classification | false | abid | null | abid/indonesia-bioner | 35 | 0 | flair | 6,793 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- id
- en
license: bsd-3-clause
widget:
- text: 'Dok saya mau tanya kenapa ya kulit saya kering bersisik gitu dok. Apalagi bagian tumit sampai nglupas terus gatal. Penyebabnya apa y dok terus cara mengobatinya gimana? Terima kasi'
- text: 'halo dok saya mau bertanya saya sering merasa cemas resah dan takut akan semua yg saya lakukan dan kejar , padahal aktifitas sehari hari berjalan lancar pdahal saya di kantor cukup terbilang sebagai karyawan terbaik tetapi saya merasa terbebani dengan cemas dan rasa takut itu sendiri'
- text: 'Does anyone else feel like their losing there mind with all the hormonal changes? One minute Im all happy and then Im crying. Tumor was seen in 2014 and I was never told. Lots of other surgeries, they have already told me surgery needs to done. This would be around my 20th surgery. Alot of different parts of body have been medically altered and this time its all my chose on what i want to do. Im opting to just let it all go and let god do what he needs to with me. Im not scared for myself but for my family and people I love.'
---
## Biomedical Entity Recognition in Bahasa Indonesia
Summary:
- Trained using manually annotated data from alodokter.com (an online health QA platform), following the UMLS guideline (see https://rdcu.be/cNxV3)
- Recognizes disorder (DISO) and anatomy (ANAT) entities
- Achieves a best macro F1 score of 0.81
- Based on XLM-RoBERTa, so cross-lingual recognition may also work (see the usage sketch below)
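A minimal usage sketch, assuming the checkpoint is published as a standard flair `SequenceTagger`; the example sentence is taken from the widget above:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("abid/indonesia-bioner")

sentence = Sentence("Dok saya mau tanya kenapa ya kulit saya kering bersisik gitu dok.")
tagger.predict(sentence)

# Print recognized DISO / ANAT spans (the label type may differ;
# sentence.get_spans() with no argument also returns all spans in recent flair versions)
for span in sentence.get_spans("ner"):
    print(span)
```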
## CITATION
This work is done with generous support from Safitri Juanita, Dr. Diana Purwitasari and Dr. Mauridhi Hery Purnomo from Institut Teknologi Sepuluh Nopember, Indonesia.
A citation for academic purposes will be provided later.
For a demo, please visit the HF Space: https://huggingface.co/spaces/abid/id-bioner-demo |
BM-K/KoMiniLM | 2ffe948f6b6f99a5a2c4658a6d4075008630be31 | 2022-06-23T11:57:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2002.10957",
"transformers"
] | feature-extraction | false | BM-K | null | BM-K/KoMiniLM | 35 | 2 | transformers | 6,794 | # KoMiniLM
🐣 Korean mini language model
## Overview
Current language models usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a lightweight Korean language model to address the aforementioned shortcomings of existing language models.
## Quick tour
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM") # 23M model
model = AutoModel.from_pretrained("BM-K/KoMiniLM")
inputs = tokenizer("안녕 세상아!", return_tensors="pt")
outputs = model(**inputs)
```
## Update history
** Updates on 2022.06.20 **
- Release KoMiniLM-bert-68M
** Updates on 2022.05.24 **
- Release KoMiniLM-bert-23M
## Pre-training
`Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE)
### Object
Self-Attention Distribution and Self-Attention Value-Relation [[Wang et al., 2020]](https://arxiv.org/abs/2002.10957) were distilled from each discrete layer of the teacher model to the student model. Wang et al. distilled only from the last transformer layer, whereas this project distills from every layer.
### Data sets
|Data|News comments|News article|
|:----:|:----:|:----:|
|size|10G|10G|
### Config
- **KoMiniLM-23M**
```json
{
"architectures": [
"BertForPreTraining"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"initializer_range": 0.02,
"intermediate_size": 1536,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"output_attentions": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"return_dict": false,
"torch_dtype": "float32",
"transformers_version": "4.13.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 32000
}
```
### Performance on subtasks
- The results of our fine-tuning experiments are an average of 3 runs for each task.
```
cd KoMiniLM-Finetune
bash scripts/run_all_kominilm.sh
```
|| #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 |
|KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 |
|KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 |
|DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 |
| | | | | | | | | |
|**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 |
|**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 |
- [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus)
- [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018)
- [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding)
- [Question Pair](https://github.com/songys/Question_pair) (Paired Question)
- [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset)
<img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br>
### User Contributed Examples
-
## Reference
- [KLUE BERT](https://github.com/KLUE-benchmark/KLUE)
- [KcBERT](https://github.com/Beomi/KcBERT)
- [SKT KoBERT](https://github.com/SKTBrain/KoBERT)
- [DistilKoBERT](https://github.com/monologg/DistilKoBERT)
- [lassl](https://github.com/lassl/lassl) |
speeqo/distilbert-base-uncased-finetuned-sst-2-english | c4de345862149fb32a334f861bcffce61bfd4447 | 2022-05-29T11:14:52.000Z | [
"pytorch",
"tf",
"rust",
"distilbert",
"text-classification",
"en",
"dataset:sst-2",
"transformers",
"license:apache-2.0"
] | text-classification | false | speeqo | null | speeqo/distilbert-base-uncased-finetuned-sst-2-english | 35 | null | transformers | 6,795 | ---
language: en
license: apache-2.0
datasets:
- sst-2
---
# DistilBERT base uncased finetuned SST-2
This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7).
For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
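A minimal usage sketch with the standard `transformers` sentiment-analysis pipeline (the input sentence is illustrative only):

```python
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="speeqo/distilbert-base-uncased-finetuned-sst-2-english",
)
print(sentiment("I really enjoyed this movie!"))
```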
# Fine-tuning hyper-parameters
- learning_rate = 1e-5
- batch_size = 32
- warmup = 600
- max_seq_length = 128
- num_train_epochs = 3.0
# Bias
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.
<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>
We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
|
bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-02 | 9f14188b8f0477a1e46b152d2fe4cf004c06cae1 | 2022-06-16T13:38:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bekirbakar | null | bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-02 | 35 | null | transformers | 6,796 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-tr-fine-tuning-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-tr-fine-tuning-02
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
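A minimal inference sketch, assuming the checkpoint ships a processor and that the audio file (a placeholder path here) contains 16 kHz Turkish speech:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bekirbakar/wav2vec2-large-xlsr-53-tr-fine-tuning-02",
)
print(asr("sample_turkish_speech.wav"))  # placeholder path to a 16 kHz audio file
```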
|
agne/jobGBERT | f1222a1b5f502881235e216c814d312fb7419593 | 2022-06-03T13:52:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"de",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | agne | null | agne/jobGBERT | 35 | null | transformers | 6,797 |
---
language: de
license: cc-by-nc-sa-4.0
---
## jobGBERT
This is a domain-adapted transformer-based language model for German-speaking job advertisements.
It is based on [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) and adapted to the domain of job advertisements through continued in-domain pretraining on 4 million German-speaking job ads from Switzerland, 1990-2020 (5.9 GB of data).
### Overview
**Architecture:** BERT base <br>
**Language:** German <br>
**Domain:** Job advertisements <br>
**See also:** [agne/jobBERT-de](https://huggingface.co/agne/jobBERT-de)
### License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (cc-by-nc-sa-4.0)
Please use the following citation when using our model:
```bibtex
@inproceedings{
title = "Evaluation of Transfer Learning and Domain Adaptation for Analyzing German-Speaking Job Advertisements",
author = "Gnehm, Ann-Sophie and
Bühlmann, Eva and
Clematide, Simon",
booktitle = "Proceedings of the 13th Language Resources and Evaluation Conference",
month = june,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
}
```
### Intended usage and limitations
You can use the model for masked language modeling, but it's intended to be fine-tuned on a downstream task.
The model is trained on German-speaking job ads from Switzerland. It inherits the potential biases of its base model and may contain biases and stereotypes common in job advertisements.
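A minimal masked-language-modeling sketch; the German example sentence is illustrative only:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="agne/jobGBERT")
# Illustrative job-ad style sentence; [MASK] is the BERT mask token
print(fill_mask("Wir suchen eine erfahrene [MASK] für unser Team in Zürich."))
```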
### About us
Ann-Sophie Gnehm: `gnehm [at] soziologie.uzh.ch` <br>
Eva Bühlmann: `bühlmann [at] soziologie.uzh.ch` <br>
Simon Clematide: `simon.clematide [at] cl.uzh.ch` <br>
The [Swiss Job Market Monitor](https://www.stellenmarktmonitor.uzh.ch/en.html) aims at systematically expanding scientific knowledge about the job market and improving labour market transparency by informing the general public about current developments on the job market.
**Get in touch:** [Mail](mailto:[email protected]) [Website](https://www.stellenmarktmonitor.uzh.ch/en.html) [Zenodo](https://doi.org/10.5281/zenodo.6497853) [SWISSUbase](https://www.swissubase.ch/de/catalogue/studies/11998/18157/overview)
|
ronak1998/layoutlmv3-finetuned-invoice | c64094e392340e608a15deae0f9d50015f4eeb28 | 2022-06-10T07:52:09.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"dataset:sroie",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | ronak1998 | null | ronak1998/layoutlmv3-finetuned-invoice | 35 | null | transformers | 6,798 | ---
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
args: sroie
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 0.9979716024340771
- name: F1
type: f1
value: 0.9989847715736041
- name: Accuracy
type: accuracy
value: 0.9997893406361913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0030
- Precision: 1.0
- Recall: 0.9980
- F1: 0.9990
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
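A hedged inference sketch: it assumes the repository ships processor files (otherwise load the processor from `microsoft/layoutlmv3-base`), that the processor's built-in OCR is used (which requires `pytesseract` and Tesseract installed), and that the invoice image path is a placeholder.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

model_id = "ronak1998/layoutlmv3-finetuned-invoice"
processor = AutoProcessor.from_pretrained(model_id)  # built-in OCR is enabled by default
model = AutoModelForTokenClassification.from_pretrained(model_id)

image = Image.open("invoice.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id])
```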
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.0715 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 4.0 | 200 | 0.0228 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 6.0 | 300 | 0.0174 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 8.0 | 400 | 0.0137 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1189 | 10.0 | 500 | 0.0122 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1189 | 12.0 | 600 | 0.0112 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1189 | 14.0 | 700 | 0.0080 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1189 | 16.0 | 800 | 0.0100 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1189 | 18.0 | 900 | 0.0040 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0097 | 20.0 | 1000 | 0.0030 | 1.0 | 0.9980 | 0.9990 | 0.9998 |
| 0.0097 | 22.0 | 1100 | 0.0028 | 0.9980 | 0.9959 | 0.9970 | 0.9996 |
| 0.0097 | 24.0 | 1200 | 0.0016 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0097 | 26.0 | 1300 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0097 | 28.0 | 1400 | 0.0015 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0029 | 30.0 | 1500 | 0.0017 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0029 | 32.0 | 1600 | 0.0026 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0029 | 34.0 | 1700 | 0.0026 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0029 | 36.0 | 1800 | 0.0026 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0029 | 38.0 | 1900 | 0.0025 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.002 | 40.0 | 2000 | 0.0026 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ahmeddbahaa/xlmroberta-finetuned-Spanish | 563004e67ff96a126bd623634fddd0c2eb7e3aaa | 2022-06-16T21:05:45.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"xlmroberta",
"es",
"abstractive summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/xlmroberta-finetuned-Spanish | 35 | null | transformers | 6,799 | ---
tags:
- summarization
- xlmroberta
- encoder-decoder
- es
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: xlmroberta-finetuned-Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta-finetuned-Spanish
This model is a fine-tuned version of [](https://huggingface.co/) on the wiki_lingua dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
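A hedged usage sketch, assuming the checkpoint is a standard `EncoderDecoderModel` with its generation settings stored in the config; the article text and generation parameters are placeholders.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "ahmeddbahaa/xlmroberta-finetuned-Spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "..."  # placeholder: Spanish article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,
    max_length=128,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```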
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|