modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sadakmed/distiluse-base-multilingual-cased-v1 | 2c2289a5bd6c89505c77d6b2ca7d1a7a56b2b106 | 2021-09-22T09:37:18.000Z | [
"pytorch",
"multilingual",
"sentence-transformers",
"DistilBert",
"Universal Sentence Encoder",
"sentence-embeddings",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | false | sadakmed | null | sadakmed/distiluse-base-multilingual-cased-v1 | 41 | null | sentence-transformers | 6,400 | ---
language: multilingual
tags:
- DistilBert
- Universal Sentence Encoder
- sentence-embeddings
- sentence-transformers
- sentence-similarity
license: apache-2.0
---
Knowledge-distilled version of the multilingual Universal Sentence Encoder. Supports 15 languages, including Arabic, Chinese, Dutch, English, French, German, Italian, Korean, Polish, Portuguese, Russian, Spanish, and Turkish.
This model is saved from 'distiluse-base-multilingual-cased-v1' in `sentence-transformers` so that it can be used directly from `transformers`.
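A minimal sketch of how the checkpoint might be used from `transformers` with mean pooling (the example sentences are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sadakmed/distiluse-base-multilingual-cased-v1")
model = AutoModel.from_pretrained("sadakmed/distiluse-base-multilingual-cased-v1")

sentences = ["Das ist ein Beispiel.", "This is an example."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Mean pooling over non-padding tokens; this approximates, but does not reproduce,
# the sentence-transformers pipeline (see the note below about the missing layers).
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)
```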
Note that `sentence-transformers` adds two additional layers (Pooling, Linear) that cannot be saved in any predefined Hugging Face model class, so embeddings obtained directly from `transformers` will differ from the original `sentence-transformers` output. |
snisioi/bert-legal-romanian-cased-v1 | 49fa6ad60b74ce898c75a81fb5f77d8f84e2a837 | 2022-01-17T20:32:58.000Z | [
"pytorch",
"bert",
"text-generation",
"transformers"
] | text-generation | false | snisioi | null | snisioi/bert-legal-romanian-cased-v1 | 41 | null | transformers | 6,401 | A Romanian BERT model, initialized from [bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) and pretrained for 24 hours on the [MARCELL v2.0 corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/) of legal documents, following the principles of the paper by Peter Izsak, Moshe Berchansky, and Omer Levy, [How to Train BERT with an Academic Budget](https://aclanthology.org/2021.emnlp-main.831.pdf)
|
tcaputi/guns-relevant | f97fd7332c0ff78a4f2334a8e850b64c7a0211dd | 2021-05-20T07:25:33.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tcaputi | null | tcaputi/guns-relevant | 41 | null | transformers | 6,402 | Entry not found |
vesteinn/XLMR-ENIS-finetuned-cola | df5e8c202b8c76982ec78efeed84f31a63ca467a | 2021-09-27T22:07:58.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"en",
"is",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index"
] | text-classification | false | vesteinn | null | vesteinn/XLMR-ENIS-finetuned-cola | 41 | null | transformers | 6,403 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- glue
language:
- en
- is
metrics:
- matthews_correlation
model-index:
- name: XLMR-ENIS-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6306425398187112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-cola
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7311
- Matthews Correlation: 0.6306
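A minimal usage sketch with the 🤗 `transformers` pipeline (the example sentence is illustrative, and predictions may appear as generic `LABEL_0`/`LABEL_1` identifiers):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vesteinn/XLMR-ENIS-finetuned-cola")
# CoLA-style acceptability judgement on a single sentence
print(classifier("The boy quickly ran across the finish line."))
```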
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5216 | 1.0 | 535 | 0.5836 | 0.4855 |
| 0.3518 | 2.0 | 1070 | 0.4426 | 0.5962 |
| 0.2538 | 3.0 | 1605 | 0.5091 | 0.6110 |
| 0.1895 | 4.0 | 2140 | 0.6955 | 0.6136 |
| 0.1653 | 5.0 | 2675 | 0.7311 | 0.6306 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
bookbot/distil-wav2vec2-xls-r-adult-child-cls-64m | 99a11467d83a54b5146556b9d71638c161dd17ba | 2022-02-26T14:40:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"en",
"arxiv:2111.09296",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | bookbot | null | bookbot/distil-wav2vec2-xls-r-adult-child-cls-64m | 41 | null | transformers | 6,404 | ---
language: en
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distil-wav2vec2-xls-r-adult-child-cls-64m
results: []
---
# DistilWav2Vec2 XLS-R Adult/Child Speech Classifier 64M
DistilWav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a distilled version of [wav2vec2-xls-r-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-xls-r-adult-child-cls) on a private adult/child speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
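A minimal usage sketch with the audio-classification pipeline; the file path is a hypothetical local recording (16 kHz audio works best for wav2vec 2.0 models):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="bookbot/distil-wav2vec2-xls-r-adult-child-cls-64m",
)
# Classify a local speech recording as adult or child speech
print(classifier("speech_sample.wav"))
```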
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------------------- | ------- | ----- | ----------------------------------------- |
| `distil-wav2vec2-xls-r-adult-child-cls-64m` | 64M | XLS-R | Adult/Child Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.2571 | 93.86% | 0.9425 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 16
- `eval_batch_size`: 16
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 64
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.5509 | 1.0 | 191 | 0.3685 | 0.9086 | 0.9131 |
| 0.4543 | 2.0 | 382 | 0.3113 | 0.9247 | 0.9285 |
| 0.409 | 3.0 | 573 | 0.2723 | 0.9372 | 0.9418 |
| 0.3024 | 4.0 | 764 | 0.2786 | 0.9381 | 0.9417 |
| 0.3103 | 5.0 | 955 | 0.2571 | 0.9386 | 0.9425 |
## Disclaimer
Do consider the biases inherited from the pre-training datasets, which may carry over into the results of this model.
## Authors
DistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 |
MLRS/mBERTu | cd6fcd6144de0b73477fb1e17ba3f51a09ba8152 | 2022-05-20T17:30:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"mt",
"dataset:MLRS/korpus_malti",
"arxiv:2205.10517",
"transformers",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | MLRS | null | MLRS/mBERTu | 41 | null | transformers | 6,405 | ---
language:
- mt
datasets:
- MLRS/korpus_malti
model-index:
- name: mBERTu
results:
- task:
type: dependency-parsing
name: Dependency Parsing
dataset:
type: universal_dependencies
args: mt_mudt
name: Maltese Universal Dependencies Treebank (MUDT)
metrics:
- type: uas
value: 92.10
name: Unlabelled Attachment Score
- type: las
value: 87.87
name: Labelled Attachment Score
- task:
type: part-of-speech-tagging
name: Part-of-Speech Tagging
dataset:
type: mlrs_pos
name: MLRS POS dataset
metrics:
- type: accuracy
value: 98.66
name: UPOS Accuracy
args: upos
- type: accuracy
value: 98.58
name: XPOS Accuracy
args: xpos
- task:
type: named-entity-recognition
name: Named Entity Recognition
dataset:
type: wikiann
name: WikiAnn (Maltese)
args: mt
metrics:
- type: f1
args: span
value: 86.60
name: Span-based F1
- task:
type: sentiment-analysis
name: Sentiment Analysis
dataset:
type: mt-sentiment-analysis
name: Maltese Sentiment Analysis Dataset
metrics:
- type: f1
args: macro
value: 76.79
name: Macro-averaged F1
license: cc-by-nc-sa-4.0
widget:
- text: "Malta huwa pajjiż fl-[MASK]."
---
# mBERTu
A Maltese multilingual model pre-trained on the Korpus Malti v4.0 using multilingual BERT as the initial checkpoint.
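A minimal usage sketch with the fill-mask pipeline, using the example prompt from the model card widget:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MLRS/mBERTu")
# Predict the masked token in a Maltese sentence
for prediction in fill_mask("Malta huwa pajjiż fl-[MASK]."):
    print(prediction["token_str"], prediction["score"])
```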
## License
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/).
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
## Citation
This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://arxiv.org/abs/2205.10517).
Cite it as follows:
```bibtex
@inproceedings{BERTu,
title = {Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese},
author = {Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia},
booktitle = {Proceedings of the 3rd Workshop on Deep Learning for Low-Resource NLP (DeepLo 2022)},
day = {14},
month = {07},
year = {2022},
address = {Seattle, Washington},
publisher = {Association for Computational Linguistics},
}
```
|
nbhimte/tiny-bert-mnli-distilled | 730f7a5e90fa201750e27cd5491e8100495845ad | 2022-05-04T07:14:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | nbhimte | null | nbhimte/tiny-bert-mnli-distilled | 41 | null | transformers | 6,406 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny-bert-mnli-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5818644931227712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-mnli-distilled
It achieves the following results on the evaluation set:
- Loss: 1.5018
- Accuracy: 0.5819
- F1 score: 0.5782
- Precision score: 0.6036
- Metric recall: 0.5819
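A minimal inference sketch for MNLI-style premise/hypothesis pairs (the sentences are illustrative; the label names and order depend on the model's `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nbhimte/tiny-bert-mnli-distilled")
model = AutoModelForSequenceClassification.from_pretrained("nbhimte/tiny-bert-mnli-distilled")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], float(p))
```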
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 32
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 score | Precision score | Metric recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:-------------:|
| 1.4475 | 1.0 | 614 | 1.4296 | 0.4521 | 0.4070 | 0.5621 | 0.4521 |
| 1.3354 | 2.0 | 1228 | 1.4320 | 0.4805 | 0.4579 | 0.5276 | 0.4805 |
| 1.2244 | 3.0 | 1842 | 1.4786 | 0.5699 | 0.5602 | 0.5865 | 0.5699 |
| 1.1416 | 4.0 | 2456 | 1.5018 | 0.5819 | 0.5782 | 0.6036 | 0.5819 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
anshr/distilgpt2_supervised_model_01 | 521275113c0e94685be02ffc8471b8b60f5bc990 | 2022-04-24T00:33:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | anshr | null | anshr/distilgpt2_supervised_model_01 | 41 | null | transformers | 6,407 | Entry not found |
Yanhao/simcse-bert-for-patent | e46b5526806e0a2cc868ea06ffff6e113fb62a20 | 2022-05-04T21:32:00.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Yanhao | null | Yanhao/simcse-bert-for-patent | 41 | 1 | transformers | 6,408 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_42 | 58c38e89c8ca126a5082a7caa9dd0f951258bcd6 | 2022-05-10T23:21:14.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.2-class.exclusive.seed_42 | 41 | null | transformers | 6,409 | Entry not found |
erickfm/t5-base-finetuned-bias | d18b5a444ca6a641921a78c34093e576aa966d38 | 2022-06-01T18:28:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WNC",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias | 41 | null | transformers | 6,410 | ---
language:
- en
license: apache-2.0
datasets:
- WNC
metrics:
- accuracy
---
This model is a fine-tune checkpoint of [T5-base](https://huggingface.co/t5-base), fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset composed of 180,000 biased and neutralized sentence pairs that are generated from Wikipedia edits tagged for “neutral point of view”. This model reaches an accuracy of 0.39 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-base).
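A minimal usage sketch with the text2text-generation pipeline; the input sentence is illustrative and the sketch assumes the model takes the raw biased sentence as input:
```python
from transformers import pipeline

neutralizer = pipeline("text2text-generation", model="erickfm/t5-base-finetuned-bias")
# Rewrite a potentially biased sentence in a more neutral tone
print(neutralizer("The radical group pushed its agenda on the unsuspecting public."))
```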
|
cambridgeltl/tweet-roberta-base-embeddings-v1 | de1dc3f55e55d54f784c669a30f6edf81ba08d49 | 2022-06-06T14:23:18.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers",
"license:afl-3.0"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/tweet-roberta-base-embeddings-v1 | 41 | null | transformers | 6,411 | ---
license: afl-3.0
---
|
waboucay/camembert-large-finetuned-xnli_fr_3_classes | 534cf93a172b82e7b948e9f48d6640aea1fb3bd3 | 2022-06-19T14:38:51.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-xnli_fr_3_classes | 41 | null | transformers | 6,412 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 85.8 | 85.9 |
| test | 84.2 | 84.3 | |
vaibhavagg303/Pegasus-Large | e041bcd9795d321ff092ffb9642034b83970071d | 2022-06-29T15:30:27.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vaibhavagg303 | null | vaibhavagg303/Pegasus-Large | 41 | null | transformers | 6,413 | Entry not found |
ychenNLP/arabic-relation-extraction | a66d4eebf771dcb7e0d477943d8f54ab561acfdc | 2022-07-10T18:47:45.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"text-classification",
"ar",
"en",
"dataset:ACE2005",
"transformers",
"BERT",
"Text Classification",
"relation",
"license:mit"
] | text-classification | false | ychenNLP | null | ychenNLP/arabic-relation-extraction | 41 | 1 | transformers | 6,414 | ---
tags:
- BERT
- Text Classification
- relation
language:
- ar
- en
license: mit
datasets:
- ACE2005
---
# Arabic Relation Extraction Model
- [Github repo](https://github.com/edchengg/GigaBERT)
- Relation Extraction model based on [GigaBERTv4](https://huggingface.co/lanwuwei/GigaBERT-v4-Arabic-and-English).
- Model detail: mark two entities in the sentence with special markers (e.g., ```XXXX <PER> entity1 </PER> XXXXXXX <ORG> entity2 </ORG> XXXXX```). Then we use the BERT [CLS] representation to make a prediction.
- ACE2005 Training data: Arabic
- [Relation tags](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/arabic-relations-guidelines-v6.5.pdf) including: Physical, Part-whole, Personal-Social, ORG-Affiliation, Agent-Artifact, Gen-Affiliation
## Hyperparameters
- learning_rate=2e-5
- num_train_epochs=10
- weight_decay=0.01
## How to use
Workflow of a relation extraction model:
1. Input --> NER model --> Entities
2. Input sentence + Entity 1 + Entity 2 --> Relation Classification Model --> Relation Type
```python
>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer, AutoModelForSequenceClassification
>>> ner_model = AutoModelForTokenClassification.from_pretrained("ychenNLP/arabic-ner-ace")
>>> ner_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-ner-ace")
>>> ner_pip = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True)
>>> re_model = AutoModelForSequenceClassification.from_pretrained("ychenNLP/arabic-relation-extraction")
>>> re_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-relation-extraction")
>>> re_pip = pipeline("text-classification", model=re_model, tokenizer=re_tokenizer)
def process_ner_output(entity_mention, inputs):
re_input = []
for idx1 in range(len(entity_mention) - 1):
for idx2 in range(idx1 + 1, len(entity_mention)):
ent_1 = entity_mention[idx1]
ent_2 = entity_mention[idx2]
ent_1_type = ent_1['entity_group']
ent_2_type = ent_2['entity_group']
ent_1_s = ent_1['start']
ent_1_e = ent_1['end']
ent_2_s = ent_2['start']
ent_2_e = ent_2['end']
new_re_input = ""
for c_idx, c in enumerate(inputs):
if c_idx == ent_1_s:
new_re_input += "<{}>".format(ent_1_type)
elif c_idx == ent_1_e:
new_re_input += "</{}>".format(ent_1_type)
elif c_idx == ent_2_s:
new_re_input += "<{}>".format(ent_2_type)
elif c_idx == ent_2_e:
new_re_input += "</{}>".format(ent_2_type)
new_re_input += c
re_input.append({"re_input": new_re_input, "arg1": ent_1, "arg2": ent_2, "input": inputs})
return re_input
def post_process_re_output(re_output, text_input, ner_output):
final_output = []
for idx, out in enumerate(re_output):
if out["label"] != 'O':
tmp = re_input[idx]
tmp['relation_type'] = out
tmp.pop('re_input', None)
final_output.append(tmp)
template = {"input": text_input,
"entity": ner_output,
"relation": final_output}
return template
text_input = """ويتزامن ذلك مع اجتماع بايدن مع قادة الدول الأعضاء في الناتو في قمة موسعة في العاصمة الإسبانية، مدريد."""
ner_output = ner_pip(text_input) # inference NER tags
re_input = process_ner_output(ner_output, text_input) # prepare a pair of entity and predict relation type
re_output = []
for idx in range(len(re_input)):
tmp_re_output = re_pip(re_input[idx]["re_input"]) # for each pair of entity, predict relation
re_output.append(tmp_re_output[0])
re_ner_output = post_process_re_output(re_output, text_input, ner_output) # post process NER and relation predictions
print("Sentence: ",re_ner_output["input"])
print('====Entity====')
for ent in re_ner_output["entity"]:
print('{}--{}'.format(ent["word"], ent["entity_group"]))
print('====Relation====')
for rel in re_ner_output["relation"]:
print('{}--{}:{}'.format(rel['arg1']['word'], rel['arg2']['word'], rel['relation_type']['label']))
Sentence: ويتزامن ذلك مع اجتماع بايدن مع قادة الدول الأعضاء في الناتو في قمة موسعة في العاصمة الإسبانية، مدريد.
====Entity====
بايدن--PER
قادة--PER
الدول--GPE
الناتو--ORG
العاصمة--GPE
الاسبانية--GPE
مدريد--GPE
====Relation====
قادة--الدول:ORG-AFF
الدول--الناتو:ORG-AFF
العاصمة--الاسبانية:PART-WHOLE
```
### BibTeX entry and citation info
```bibtex
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {Giga{BERT}: Zero-shot Transfer Learning from {E}nglish to {A}rabic},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
```
|
Talha/urdu-audio-emotions | 89fb626253dca39fa0334f7a36cebf895e6e4e0c | 2022-07-02T11:04:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | Talha | null | Talha/urdu-audio-emotions | 41 | 1 | transformers | 6,415 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1638
- Accuracy: 0.975
## Model description
The model classifies Urdu audio into the following categories (see the usage sketch after the list):
* Angry
* Happy
* Neutral
* Sad
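A minimal inference sketch (the audio path is a hypothetical local file; `librosa` is used here only to load and resample the clip to 16 kHz):
```python
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

extractor = AutoFeatureExtractor.from_pretrained("Talha/urdu-audio-emotions")
model = AutoModelForAudioClassification.from_pretrained("Talha/urdu-audio-emotions")

speech, sr = librosa.load("urdu_clip.wav", sr=16000)  # hypothetical local recording
inputs = extractor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```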
## Training and evaluation data
The dataset is available at
https://www.kaggle.com/datasets/kingabzpro/urdu-emotion-dataset
## Training procedure
Training code is available at
https://www.kaggle.com/code/chtalhaanwar/urdu-emotions-hf
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3838 | 1.0 | 10 | 1.3907 | 0.225 |
| 1.3732 | 2.0 | 20 | 1.3872 | 0.2125 |
| 1.3354 | 3.0 | 30 | 1.3116 | 0.6625 |
| 1.2689 | 4.0 | 40 | 1.1820 | 0.6375 |
| 1.1179 | 5.0 | 50 | 1.0075 | 0.7 |
| 0.9962 | 6.0 | 60 | 0.8707 | 0.7125 |
| 0.8842 | 7.0 | 70 | 0.7485 | 0.7625 |
| 0.786 | 8.0 | 80 | 0.6326 | 0.8 |
| 0.6757 | 9.0 | 90 | 0.5995 | 0.8 |
| 0.6104 | 10.0 | 100 | 0.4835 | 0.825 |
| 0.5821 | 11.0 | 110 | 0.3886 | 0.9 |
| 0.4721 | 12.0 | 120 | 0.3935 | 0.8625 |
| 0.3976 | 13.0 | 130 | 0.3020 | 0.925 |
| 0.4483 | 14.0 | 140 | 0.3171 | 0.9 |
| 0.2665 | 15.0 | 150 | 0.3016 | 0.9125 |
| 0.2119 | 16.0 | 160 | 0.2722 | 0.925 |
| 0.3376 | 17.0 | 170 | 0.3163 | 0.8875 |
| 0.1518 | 18.0 | 180 | 0.2681 | 0.9125 |
| 0.1559 | 19.0 | 190 | 0.3074 | 0.925 |
| 0.1031 | 20.0 | 200 | 0.3526 | 0.8875 |
| 0.1557 | 21.0 | 210 | 0.2254 | 0.9375 |
| 0.0846 | 22.0 | 220 | 0.2410 | 0.9375 |
| 0.0733 | 23.0 | 230 | 0.2369 | 0.925 |
| 0.0964 | 24.0 | 240 | 0.2273 | 0.9375 |
| 0.0574 | 25.0 | 250 | 0.2066 | 0.95 |
| 0.1113 | 26.0 | 260 | 0.2941 | 0.9125 |
| 0.1313 | 27.0 | 270 | 0.2715 | 0.925 |
| 0.0851 | 28.0 | 280 | 0.1725 | 0.9625 |
| 0.0402 | 29.0 | 290 | 0.2221 | 0.95 |
| 0.1075 | 30.0 | 300 | 0.2199 | 0.9625 |
| 0.0418 | 31.0 | 310 | 0.1699 | 0.95 |
| 0.1869 | 32.0 | 320 | 0.2287 | 0.9625 |
| 0.0637 | 33.0 | 330 | 0.3230 | 0.9125 |
| 0.0483 | 34.0 | 340 | 0.1602 | 0.975 |
| 0.0891 | 35.0 | 350 | 0.1615 | 0.975 |
| 0.0359 | 36.0 | 360 | 0.1571 | 0.975 |
| 0.1006 | 37.0 | 370 | 0.1809 | 0.9625 |
| 0.0417 | 38.0 | 380 | 0.1923 | 0.9625 |
| 0.0346 | 39.0 | 390 | 0.2035 | 0.9625 |
| 0.0417 | 40.0 | 400 | 0.1737 | 0.9625 |
| 0.0396 | 41.0 | 410 | 0.1833 | 0.9625 |
| 0.0202 | 42.0 | 420 | 0.1946 | 0.9625 |
| 0.0137 | 43.0 | 430 | 0.1785 | 0.9625 |
| 0.0214 | 44.0 | 440 | 0.1841 | 0.9625 |
| 0.0304 | 45.0 | 450 | 0.1690 | 0.9625 |
| 0.0199 | 46.0 | 460 | 0.1646 | 0.975 |
| 0.0122 | 47.0 | 470 | 0.1622 | 0.975 |
| 0.0324 | 48.0 | 480 | 0.1615 | 0.975 |
| 0.0269 | 49.0 | 490 | 0.1625 | 0.975 |
| 0.0245 | 50.0 | 500 | 0.1638 | 0.975 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
datien228/distilbart-ftn-wiki_lingua | 3f7f4c27a0e1550bb01571aadf1946405bbda52d | 2022-07-05T12:12:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wiki_lingua",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | datien228 | null | datien228/distilbart-ftn-wiki_lingua | 41 | null | transformers | 6,416 | ---
language:
- en
tags:
- summarization
license: mit
datasets:
- wiki_lingua
metrics:
- rouge
---
#### Pre-trained BART model fine-tuned on the WikiLingua dataset
The repository for the fine-tuned BART model (by sshleifer) using the **wiki_lingua** dataset (English)
**Purpose:** Examine the performance of a fine-tuned summarization model for research purposes
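A minimal usage sketch with the summarization pipeline (the article text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="datien228/distilbart-ftn-wiki_lingua")
article = (
    "Put the flour, sugar and butter into a bowl and rub them together until the "
    "mixture looks like breadcrumbs. Add the egg and a splash of milk, then mix "
    "into a soft dough before rolling it out and cutting it into rounds."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```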
**Observation:**
- The pre-trained model was trained on the XSum dataset, which summarizes relatively short documents into one-liner summaries
- Fine-tuning this model using WikiLingua is appropriate since the summaries for that dataset are also short
- In the end, however, the model does not capture the key points much more clearly; instead, it mostly extracts the opening sentence
- Some data pre-processing steps and model hyperparameters also need to be tuned more carefully. |
cambridgeltl/simctg_cnwikitext | f9d6161e94e5d3cefb87911307c7320a107b93a3 | 2022-07-03T20:44:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/simctg_cnwikitext | 41 | null | transformers | 6,417 | Entry not found |
nakamura196/roberta-small-hi-char | 5e04e68b8f96846486af357e356a2fa5cd8b1f2c | 2022-07-14T20:32:40.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | nakamura196 | null | nakamura196/roberta-small-hi-char | 41 | null | transformers | 6,418 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "入[MASK]外無之候江戸大水又ハ大地震なと"
- text: "日向[MASK]御望之由可令披露候"
---
# roberta-small-hi-char
## Model Description
This is a RoBERTa model pre-trained on HI texts with a character tokenizer.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("nakamura196/roberta-small-hi-char")
model=AutoModelForMaskedLM.from_pretrained("nakamura196/roberta-small-hi-char")
```
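Continuing from the snippet above, a minimal fill-mask sketch using one of the widget examples:
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("入[MASK]外無之候江戸大水又ハ大地震なと"):
    print(prediction["token_str"], prediction["score"])
```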
|
robingeibel/reformer-finetuned-big_patent-16384 | 229f24c58abe50cbce42a408f2771dccd3a87bba | 2022-07-15T17:20:21.000Z | [
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"dataset:big_patent",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | robingeibel | null | robingeibel/reformer-finetuned-big_patent-16384 | 41 | null | transformers | 6,419 | ---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-finetuned-big_patent-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-finetuned-big_patent-16384
This model is a fine-tuned version of [robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384](https://huggingface.co/robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 6.7398 | 1.0 | 53286 | 6.7243 |
| 6.7449 | 2.0 | 106572 | 6.7388 |
| 6.7534 | 3.0 | 159858 | 6.7382 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ArthurBaia/xlm-roberta-base-squad-pt | a3fd9b3c344b480e991c66bd0fa5f7d59b0e9ca5 | 2022-07-11T22:42:37.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:squad_v1_pt",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | ArthurBaia | null | ArthurBaia/xlm-roberta-base-squad-pt | 41 | 1 | transformers | 6,420 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v1_pt
model-index:
- name: xlm-roberta-base-squad-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-squad-pt
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad_v1_pt dataset.
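A minimal usage sketch with the question-answering pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ArthurBaia/xlm-roberta-base-squad-pt")
result = qa(
    question="Onde fica a Torre Eiffel?",
    context="A Torre Eiffel é uma torre de treliça de ferro localizada em Paris, na França.",
)
print(result["answer"], result["score"])
```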
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- "epoch": 3.0,
- "eval_exact_match": 44.45600756859035,
- "eval_f1": 57.37953911779836,
- "eval_samples": 11095
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1 |
finiteautomata/definition-ner | 197175a16e419b7442ddb9f090e3eabf2c226eec | 2022-07-13T01:51:40.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | finiteautomata | null | finiteautomata/definition-ner | 41 | null | transformers | 6,421 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: definition-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# definition-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8437
- Precision: 0.5601
- Recall: 0.6051
- Macro F1: 0.2731
- Micro F1: 0.5817
- Accuracy: 0.8511
- Alias Term F1: 0.5684
- Alias Term Precision: 0.4909
- Alias Term Recall: 0.675
- Definition F1: 0.5130
- Definition Precision: 0.4927
- Definition Recall: 0.5350
- Definition Frag F1: 0.2222
- Definition Frag Precision: 0.1667
- Definition Frag Recall: 0.3333
- Ordered Definition F1: 0.0
- Ordered Definition Precision: 0.0
- Ordered Definition Recall: 0.0
- Ordered Term F1: 0.0
- Ordered Term Precision: 0.0
- Ordered Term Recall: 0.0
- Qualifier F1: 0.0
- Qualifier Precision: 0.0
- Qualifier Recall: 0.0
- Referential Definition F1: 0.3429
- Referential Definition Precision: 0.3158
- Referential Definition Recall: 0.375
- Referential Term F1: 0.2353
- Referential Term Precision: 0.1667
- Referential Term Recall: 0.4
- Secondary Definition F1: 0.1395
- Secondary Definition Precision: 0.12
- Secondary Definition Recall: 0.1667
- Term F1: 0.7101
- Term Precision: 0.7164
- Term Recall: 0.7040
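A minimal usage sketch with the token-classification pipeline (the sentence is illustrative; the tag set corresponds to the entity types reported above, e.g. Term and Definition):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="finiteautomata/definition-ner",
    aggregation_strategy="simple",
)
print(ner("A neural network is a computational model inspired by biological neurons."))
```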
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Macro F1 | Micro F1 | Accuracy | Alias Term F1 | Alias Term Precision | Alias Term Recall | Definition F1 | Definition Precision | Definition Recall | Definition Frag F1 | Definition Frag Precision | Definition Frag Recall | Ordered Definition F1 | Ordered Definition Precision | Ordered Definition Recall | Ordered Term F1 | Ordered Term Precision | Ordered Term Recall | Qualifier F1 | Qualifier Precision | Qualifier Recall | Referential Definition F1 | Referential Definition Precision | Referential Definition Recall | Referential Term F1 | Referential Term Precision | Referential Term Recall | Secondary Definition F1 | Secondary Definition Precision | Secondary Definition Recall | Term F1 | Term Precision | Term Recall |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|:--------:|:-------------:|:--------------------:|:-----------------:|:-------------:|:--------------------:|:-----------------:|:------------------:|:-------------------------:|:----------------------:|:---------------------:|:----------------------------:|:-------------------------:|:---------------:|:----------------------:|:-------------------:|:------------:|:-------------------:|:----------------:|:-------------------------:|:--------------------------------:|:-----------------------------:|:-------------------:|:--------------------------:|:-----------------------:|:-----------------------:|:------------------------------:|:---------------------------:|:-------:|:--------------:|:-----------:|
| 0.8153 | 1.0 | 756 | 0.4755 | 0.4503 | 0.4510 | 0.1694 | 0.4506 | 0.8450 | 0.25 | 0.3571 | 0.1923 | 0.3563 | 0.3208 | 0.4007 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 | 0.375 | 0.75 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5875 | 0.6290 | 0.5511 |
| 0.3215 | 2.0 | 1512 | 0.4521 | 0.4501 | 0.6266 | 0.2020 | 0.5239 | 0.8516 | 0.3248 | 0.2088 | 0.7308 | 0.4761 | 0.4184 | 0.5522 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5926 | 0.4211 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6262 | 0.5461 | 0.7337 |
| 0.2626 | 3.0 | 2268 | 0.4585 | 0.5123 | 0.6120 | 0.2241 | 0.5577 | 0.8557 | 0.5079 | 0.4324 | 0.6154 | 0.4738 | 0.4093 | 0.5623 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1429 | 0.1111 | 0.2 | 0.4138 | 0.2857 | 0.75 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7026 | 0.6994 | 0.7059 |
| 0.2078 | 4.0 | 3024 | 0.4552 | 0.5337 | 0.5915 | 0.2097 | 0.5611 | 0.8653 | 0.4857 | 0.3864 | 0.6538 | 0.5085 | 0.4688 | 0.5556 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4444 | 0.4 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6586 | 0.6431 | 0.6749 |
| 0.1621 | 5.0 | 3780 | 0.4999 | 0.5316 | 0.6032 | 0.2259 | 0.5652 | 0.8537 | 0.5556 | 0.5357 | 0.5769 | 0.5053 | 0.4588 | 0.5623 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4706 | 0.4444 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0426 | 0.0333 | 0.0588 | 0.6849 | 0.6737 | 0.6966 |
| 0.1291 | 6.0 | 4536 | 0.5596 | 0.5161 | 0.6559 | 0.2294 | 0.5777 | 0.8529 | 0.5152 | 0.425 | 0.6538 | 0.5098 | 0.4365 | 0.6128 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 | 0.375 | 0.75 | 0.0 | 0.0 | 0.0 | 0.0625 | 0.0667 | 0.0588 | 0.7066 | 0.6685 | 0.7492 |
| 0.1028 | 7.0 | 5292 | 0.6322 | 0.5481 | 0.6340 | 0.2242 | 0.5879 | 0.8591 | 0.5231 | 0.4359 | 0.6538 | 0.5509 | 0.5146 | 0.5926 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1429 | 0.1111 | 0.2 | 0.3333 | 0.25 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6922 | 0.6601 | 0.7276 |
| 0.0894 | 8.0 | 6048 | 0.7350 | 0.5692 | 0.6325 | 0.2430 | 0.5992 | 0.8615 | 0.5217 | 0.4186 | 0.6923 | 0.5508 | 0.5071 | 0.6027 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2222 | 0.25 | 0.2 | 0.3478 | 0.2667 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0870 | 0.1667 | 0.0588 | 0.7003 | 0.6918 | 0.7090 |
| 0.0708 | 9.0 | 6804 | 0.6997 | 0.5572 | 0.6633 | 0.2369 | 0.6056 | 0.8554 | 0.5806 | 0.5 | 0.6923 | 0.5693 | 0.5150 | 0.6364 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4706 | 0.4444 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0465 | 0.0385 | 0.0588 | 0.7016 | 0.6621 | 0.7461 |
| 0.0603 | 10.0 | 7560 | 0.7696 | 0.5805 | 0.6545 | 0.2423 | 0.6153 | 0.8632 | 0.6000 | 0.5294 | 0.6923 | 0.5564 | 0.5027 | 0.6229 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4706 | 0.4444 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0714 | 0.0909 | 0.0588 | 0.7242 | 0.7092 | 0.7399 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
baptiste/deberta-finetuned-ner | 75f2bdd20e3bd03fe16c24c692d6e8cb3c2163d2 | 2022-07-16T05:46:56.000Z | [
"pytorch",
"tensorboard",
"deberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | baptiste | null | baptiste/deberta-finetuned-ner | 41 | null | transformers | 6,422 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9577488309953239
- name: Recall
type: recall
value: 0.9651632446987546
- name: F1
type: f1
value: 0.961441743503772
- name: Accuracy
type: accuracy
value: 0.9907182964622135
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-ner
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0515
- Precision: 0.9577
- Recall: 0.9652
- F1: 0.9614
- Accuracy: 0.9907
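A minimal inference sketch using the model classes directly (the sentence is illustrative; labels follow the CoNLL-2003 tag set):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("baptiste/deberta-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("baptiste/deberta-finetuned-ner")

text = "Hugging Face is based in New York City."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(dim=-1)[0]

# Print each sub-word token with its predicted entity label
for token, pred in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(token, model.config.id2label[int(pred)])
```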
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0742 | 1.0 | 1756 | 0.0526 | 0.9390 | 0.9510 | 0.9450 | 0.9868 |
| 0.0374 | 2.0 | 3512 | 0.0528 | 0.9421 | 0.9554 | 0.9487 | 0.9879 |
| 0.0205 | 3.0 | 5268 | 0.0505 | 0.9505 | 0.9636 | 0.9570 | 0.9900 |
| 0.0089 | 4.0 | 7024 | 0.0528 | 0.9531 | 0.9636 | 0.9583 | 0.9898 |
| 0.0076 | 5.0 | 8780 | 0.0515 | 0.9577 | 0.9652 | 0.9614 | 0.9907 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aimlab/xlm-roberta-base-finetuned-urdu | 9fd928480ae7b6bcc5bce9c0efe0e2897116839e | 2022-07-25T07:58:10.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ur",
"transformers",
"license:afl-3.0"
] | text-classification | false | Aimlab | null | Aimlab/xlm-roberta-base-finetuned-urdu | 41 | 1 | transformers | 6,423 | ---
language: ur
license: afl-3.0
---
# XLM-RoBERTa-Urdu-Classification
This [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) text classification model, trained on an Urdu sentiment [dataset](https://huggingface.co/datasets/hassan4830/urdu-binary-classification-data), performs binary sentiment classification on any given Urdu sentence. The model has been fine-tuned for better results in manageable time frames.
## Model description
XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov.
It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data.
### How to use
You can import this model directly from the transformers library:
```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu")
>>> model = AutoModelForSequenceClassification.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu", id2label = {0: 'negative', 1: 'positive'})
```
Here is how to use this model to get the label of a given text:
```python
>>> from transformers import TextClassificationPipeline
>>> text = "وہ ایک برا شخص ہے"
>>> pipe = TextClassificationPipeline(model = model, tokenizer = tokenizer, top_k = 2, device = 0)
>>> pipe(text)
[{'label': 'negative', 'score': 0.9987003803253174},
{'label': 'positive', 'score': 0.001299630501307547}]
``` |
asi/igpt-fr-cased-base | 75771e2d79a2fc86e91c487b6766ee639fb48d3e | 2022-07-27T17:12:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"fr",
"transformers",
"tf",
"text-to-image",
"license:apache-2.0"
] | text-generation | false | asi | null | asi/igpt-fr-cased-base | 41 | 2 | transformers | 6,424 | ---
language:
- fr
thumbnail: https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png
tags:
- tf
- pytorch
- gpt2
- text-to-image
license: apache-2.0
---
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/igpt-logo.png" width="400">
## Model description
**iGPT-fr** 🇫🇷 is a pre-trained incremental GPT language model for French developed by the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We adapted the [GPT-fr 🇫🇷](https://huggingface.co/asi/gpt-fr-cased-base) model to generate images conditioned on text inputs.
## Intended uses & limitations
The model can be leveraged for image generation tasks. It is currently in a development phase.
#### How to use
The model can be used through the 🤗 `Transformers` library. You will also need to install the `Taming Transformers` library for high-resolution image synthesis:
```bash
pip install git+https://github.com/CompVis/taming-transformers.git
```
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from huggingface_hub import hf_hub_download
from omegaconf import OmegaConf
from taming.models import vqgan
import torch
from PIL import Image
import numpy as np
# Load VQGAN model
vqgan_ckpt = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="model.ckpt", force_download=False)
vqgan_config = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="config.yaml", force_download=False)
config = OmegaConf.load(vqgan_config)
vqgan_model = vqgan.VQModel(**config.model.params)
vqgan_model.eval().requires_grad_(False)
vqgan_model.init_from_ckpt(vqgan_ckpt)
# Load pretrained model
model = GPT2LMHeadModel.from_pretrained("asi/igpt-fr-cased-base")
model.eval()
device = "cpu"  # the snippet below assumes a `device`; set to "cuda" (and move vqgan_model too) if a GPU is available
model = model.to(device)
tokenizer = GPT2Tokenizer.from_pretrained("asi/igpt-fr-cased-base")
# Generate a sample of text
input_sentence = "Une carte de l'europe"
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')
input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1) # Add image generation token
greedy_output = model.generate(
input_ids.to(device),
max_length=256+input_ids.shape[1],
do_sample=True,
top_p=0.92,
top_k=0)
def custom_to_pil(x):
x = x.detach().cpu()
x = torch.clamp(x, -1., 1.)
x = (x + 1.)/2.
x = x.permute(1,2,0).numpy()
x = (255*x).astype(np.uint8)
x = Image.fromarray(x)
if not x.mode == "RGB":
x = x.convert("RGB")
return x
z_idx = greedy_output[0, input_ids.shape[1]:] - 50001
z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256))
x_rec = vqgan_model.decode(z_quant).to('cpu')[0]
display(custom_to_pil(x_rec))
```
You may also filter results based on CLIP:
```python
from tqdm import tqdm
def hallucinate(prompt, num_images=64):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1).to(device) # Add image generation token
all_images = []
for i in tqdm(range(num_images)):
greedy_output = model.generate(
input_ids.to(device),
max_length=256+input_ids.shape[1],
do_sample=True,
top_p=0.92,
top_k=0)
z_idx = greedy_output[0, input_ids.shape[1]:] - 50001
z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256))
x_rec = vqgan_model.decode(z_quant).to('cpu')[0]
all_images.append(custom_to_pil(x_rec))
return all_images
input_sentence = "Une carte de l'europe"
all_images = hallucinate(input_sentence)
from transformers import pipeline
opus_model = "Helsinki-NLP/opus-mt-fr-en"
opus_translator = pipeline("translation", model=opus_model)
opus_translator(input_sentence)
from transformers import CLIPProcessor, CLIPModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
def clip_top_k(prompt, images, k=8):
    prompt_fr = opus_translator(prompt)[0]['translation_text']  # English translation of the French prompt, for CLIP scoring
inputs = clip_processor(text=prompt_fr, images=images, return_tensors="pt", padding=True)
outputs = clip_model(**inputs)
logits = outputs.logits_per_text # this is the image-text similarity score
scores = np.array(logits[0].detach()).argsort()[-k:][::-1]
return [images[score] for score in scores]
filtered_images = clip_top_k(input_sentence, all_images)
for fi in filtered_images:
display(fi)
```
## Training data
We created a dedicated corpus to train our generative model. The training corpus consists of text-image pairs. We aggregated portions from existing corpora: [Laion-5B](https://laion.ai/blog/laion-5b/) and [WIT](https://github.com/google-research-datasets/wit). The final dataset includes 10,807,534 samples.
## Training procedure
We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We performed the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 8 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch across the computing units. We estimated the total emissions at 1161.22 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
NLPScholars/Roberta-Earning-Call-Transcript-Classification | b2a2f9acea1ffb73453a8361960c35215cf16f77 | 2022-07-29T15:38:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | NLPScholars | null | NLPScholars/Roberta-Earning-Call-Transcript-Classification | 41 | 1 | transformers | 6,425 | ---
widget:
- text: "Paytm’s Revenue Growth Trajectory To Remain Strong In Q1: Goldman Sachs"
- text: "Nifty ends above 16,900, Sensex gains 1,041 pts led by IT, metal, realty"
- text: "Amazon reports BLOWOUT earnings, beating revenue estimates and raising Q3 guidance"
- text: "Company went through great loss due to lawsuit in Q1"
---
## What is Roberta-Earning-Call-Transcript-Classification Model?
Roberta-Earning-Call-Transcript-Classification is a multi-label classification model trained on annotated earnings call transcript data; a RoBERTa-base model was fine-tuned on this data. The model can be very helpful in identifying Negative, Positive, Litigious, Constraining, and Uncertain statements in a sentence, which in turn is useful for analyzing a company's profit warnings.
## What is RoBERTa
RoBERTa builds on BERT’s language masking strategy and modifies key hyperparameters in BERT, including removing BERT’s next-sentence pretraining objective, and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, for a longer amount of time. This allows RoBERTa representations to generalize even better to downstream tasks compared to BERT.
## What is an Earnings Call Transcript?
An earnings call is a teleconference, or webcast, in which a public company discusses the financial results of a reporting period. The name comes from earnings per share, the bottom line number in the income statement divided by the number of shares outstanding.
Example of an earnings call transcript: https://www.fool.com/earnings/call-transcripts/2022/04/29/apple-aapl-q2-2022-earnings-call-transcript
We scraped 10 years of earnings call transcript data for 10 companies such as Apple, Google, Microsoft, Nvidia, Amazon, Intel, and Cisco, and annotated the data into sentence categories such as Negative, Positive, Litigious, Constraining, and Uncertainty.
We then used the Loughran-McDonald sentiment lexicon and the FinancialPhraseBank [Malo, P., Sinha, A., Korhonen, P., Wallenius, J., & Takala, P. (2014). Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology, 65(4), 782-796.] for data annotation.
## Hyperparameters
| Parameter | |
| ----------------- | :---: |
| Learning rate | 1e-5 |
| Epochs | 12 |
| Max Seq Length | 240 |
| Batch size | 128 |
## Results
Best Result of `Micro F1` - 91.8%
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("NLPScholars/Roberta-Earning-Call-Transcript-Classification")
model = AutoModelForSequenceClassification.from_pretrained("NLPScholars/Roberta-Earning-Call-Transcript-Classification")
```
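Continuing from the snippet above, a minimal multi-label inference sketch; the 0.5 sigmoid threshold is an assumption, and the label names come from the model's `id2label` config:
```python
import torch

text = "Company went through great loss due to lawsuit in Q1"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=240)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # sigmoid for multi-label outputs

for idx, p in enumerate(probs):
    if p > 0.5:  # assumed decision threshold
        print(model.config.id2label[idx], float(p))
```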
# Contributors
*Sumit Ranjan- [email protected]*,
*Aanchal Varma- [email protected]*,
*Akshul Mittal- [email protected]*
|
Fujitsu/pytorrent | 1bf2dc4833d0cc0dc3020c0a36364aafc54694f1 | 2021-10-12T18:37:18.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"en",
"dataset:pytorrent",
"arxiv:2110.01710",
"transformers",
"license:mit"
] | feature-extraction | false | Fujitsu | null | Fujitsu/pytorrent | 40 | null | transformers | 6,426 | ---
license: mit
widget:
language:
- en
datasets:
- pytorrent
---
# 🔥 RoBERTa-MLM-based PyTorrent 1M 🔥
Pretrained weights based on the [PyTorrent Dataset](https://github.com/fla-sil/PyTorrent), which is curated data from a large set of official Python packages.
We use the PyTorrent dataset to train a preliminary DistilBERT masked language modeling (MLM) model from scratch. The trained model, along with the dataset, aims to help researchers easily and efficiently work on a large dataset of Python packages, using only 5 lines of code to load the transformer-based model. We use 1M raw Python scripts of PyTorrent, comprising 12,350,000 LOC, to train the model. We also train a byte-level byte-pair encoding (BPE) tokenizer with 56,000 tokens; LOC are truncated to a length of 50 to save computational resources.
### Training Objective
This model is trained with a Masked Language Model (MLM) objective.
## How to use the model?
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent")
model = AutoModel.from_pretrained("Fujitsu/pytorrent")
```
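Continuing from the snippet above, a minimal sketch that embeds a short Python snippet with the pre-trained encoder (mean pooling is just one simple way to get a single code vector):
```python
import torch

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

code_embedding = hidden_states.mean(dim=1)  # simple mean-pooled code embedding
print(code_embedding.shape)
```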
## Citation
Preprint: [https://arxiv.org/pdf/2110.01710.pdf](https://arxiv.org/pdf/2110.01710.pdf)
```
@misc{bahrami2021pytorrent,
title={PyTorrent: A Python Library Corpus for Large-scale Language Models},
author={Mehdi Bahrami and N. C. Shrikanth and Shade Ruangwan and Lei Liu and Yuji Mizobuchi and Masahiro Fukuyori and Wei-Peng Chen and Kazuki Munakata and Tim Menzies},
year={2021},
eprint={2110.01710},
archivePrefix={arXiv},
primaryClass={cs.SE},
howpublished={https://arxiv.org/pdf/2110.01710},
}
```
|
Helsinki-NLP/opus-mt-de-eo | 9188e5326cba934d553fcb0150a9e88de140a286 | 2021-09-09T21:30:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-eo | 40 | null | transformers | 6,427 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-eo
* source languages: de
* target languages: eo
* OPUS readme: [de-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-eo/opus-2020-01-20.eval.txt)
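A minimal usage sketch with the translation pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-eo")
# Translate German to Esperanto
print(translator("Das Wetter ist heute schön."))
```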
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.eo | 48.6 | 0.673 |
|
Helsinki-NLP/opus-mt-en-ceb | a5e0a21b4e9db37945be9cd5977573b53cd95999 | 2021-09-09T21:34:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ceb",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ceb | 40 | null | transformers | 6,428 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ceb
* source languages: en
* target languages: ceb
* OPUS readme: [en-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ceb | 51.3 | 0.704 |
| Tatoeba.en.ceb | 31.3 | 0.600 |
|
Helsinki-NLP/opus-mt-en-ilo | 7342bf73c00ab920930c2f25166e0521a32f9048 | 2021-09-09T21:36:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ilo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ilo | 40 | null | transformers | 6,429 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ilo
* source languages: en
* target languages: ilo
* OPUS readme: [en-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ilo | 33.2 | 0.584 |
|
Helsinki-NLP/opus-mt-en-xh | 7cd68494528d1a3fe8f726fb0cf3713e448b70a4 | 2021-09-09T21:40:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"xh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-xh | 40 | null | transformers | 6,430 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-xh
* source languages: en
* target languages: xh
* OPUS readme: [en-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.xh | 37.9 | 0.652 |
|
Helsinki-NLP/opus-mt-nso-en | 4c49083b63f01ac0a3b32a81ade912d4f1367948 | 2021-09-10T13:59:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nso",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nso-en | 40 | null | transformers | 6,431 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-en
* source languages: nso
* target languages: en
* OPUS readme: [nso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.en | 48.6 | 0.634 |
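As a quick sketch (added, not from the original readme), the checkpoint can also be used through the high-level translation pipeline; the input sentence is a placeholder:
```python
from transformers import pipeline
# Northern Sotho (nso) -> English with the translation pipeline
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-nso-en")
print(translator("Dumela, o kae?", max_length=64))
```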
|
Narrativa/byt5-base-finetuned-tweet-qa | 5b01a203964d7a8d81c31475610732ff749bcbd1 | 2021-06-30T14:55:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:tweet_qa",
"arxiv:1907.06292",
"arxiv:1910.10683",
"transformers",
"qa",
"Question Answering",
"autotrain_compatible"
] | text2text-generation | false | Narrativa | null | Narrativa/byt5-base-finetuned-tweet-qa | 40 | null | transformers | 6,432 | ---
language: en
datasets:
- tweet_qa
tags:
- qa
- Question Answering
widget:
- text: "question: how far away was the putt context: GET THE CIGAR READY! Miguel aces the 15th from 174 yards, and celebrates as only he knows how! The European Tour (@EuropeanTour) January, 15 2015"
---
# ByT5-base fine-tuned for Question Answering (on Tweets)
[ByT5](https://huggingface.co/google/byt5-base) base fine-tuned on [TweetQA](https://huggingface.co/datasets/tweet_qa) dataset for **Question Answering** downstream task.
# Details of ByT5 - Base 🧠
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
## Details of the downstream task (Question Answering) - Dataset 📚
[TweetQA](https://huggingface.co/datasets/tweet_qa)
With social media becoming increasingly more popular, lots of news and real-time events are being covered. Developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have focused on formal text such as news and Wikipedia, we present the first large-scale dataset for QA over social media data. To make sure that the tweets are meaningful and contain interesting information, we gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD (in which the answers are extractive), we allow the answers to be abstractive. The task requires the model to read a short tweet and a question and outputs a text phrase (does not need to be in the tweet) as the answer.
- Data Instances:
Sample
```json
{
  "Question": "who is the tallest host?",
  "Answer": ["sam bee", "sam bee"],
  "Tweet": "Don't believe @ConanOBrien's height lies. Sam Bee is the tallest host in late night. #alternativefacts\u2014 Full Frontal (@FullFrontalSamB) January 22, 2017",
  "qid": "3554ee17d86b678be34c4dc2c04e334f"
}
```
- Data Fields:
*Question*: a question based on information from a tweet
*Answer*: list of possible answers from the tweet
*Tweet*: source tweet
*qid*: question id
## Model in Action 🚀
```sh
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
```
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
ckpt = 'Narrativa/byt5-base-finetuned-tweet-qa'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt).to('cuda')
def get_answer(question, context):
input_text = 'question: %s context: %s' % (question, context)
inputs = tokenizer([input_text], return_tensors='pt')
input_ids = inputs.input_ids.to('cuda')
attention_mask = inputs.attention_mask.to('cuda')
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
context = "MONSTARS BASKETBALL @M0NSTARSBBALLWiggins answers Kemba's floater with a three! game tied 106-106. 8.9 to play. CHA ball!12/4/2016, 2:26:30 AM"
question = 'who answered kemba\'s "floater"?'
get_answer(question, context)
# wiggins
```
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
Narrativa/spanish-gpt2-finetuned-rap-lyrics | 3447ebcd3d4592df8c525c4c9b303c11fa4c1735 | 2021-09-11T08:46:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"es",
"dataset:large_spanish_corpus",
"transformers",
"GPT-2",
"Rap",
"Lyrics",
"Songs",
"license:mit"
] | text-generation | false | Narrativa | null | Narrativa/spanish-gpt2-finetuned-rap-lyrics | 40 | 3 | transformers | 6,433 | ---
language: es
tags:
- GPT-2
- Rap
- Lyrics
- Songs
datasets:
- large_spanish_corpus
widget:
- text: "Déjame contarte lo importante que es buscarte un plan\nNo para golpearles o ganarles, sino para darles paz\n"
license: mit
---
# Spanish GPT-2 trained on [Spanish RAP Lyrics](https://www.kaggle.com/smunoz3801/9325-letras-de-rap-en-espaol)
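A minimal generation sketch (not part of the original card), reusing the widget prompt; sampling parameters are illustrative choices:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Narrativa/spanish-gpt2-finetuned-rap-lyrics")
prompt = "Déjame contarte lo importante que es buscarte un plan"
# do_sample/top_p are illustrative, not values documented by the card
print(generator(prompt, max_length=100, do_sample=True, top_p=0.95)[0]["generated_text"])
```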
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
NbAiLab/nb-t5-base | 73ebe153a4543ebef6449c70f3b1f3190208139c | 2021-09-23T15:53:02.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"no",
"dataset:Norwegian Nynorsk/Bokmål",
"transformers",
"seq2seq",
"license:cc-by-4.0"
] | feature-extraction | false | NbAiLab | null | NbAiLab/nb-t5-base | 40 | 2 | transformers | 6,434 | ---
language: no
license: cc-by-4.0
tags:
- seq2seq
datasets:
- Norwegian Nynorsk/Bokmål
---
# 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴
This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be finetuned on a specific task before being used for anything.
The model is currently training and is expected to be finished by the end of August 2021.
The following settings were used in training:
```bash
./run_t5_mlm_flax_streaming.py \
--output_dir="./" \
--model_type="t5" \
--config_name="./" \
--tokenizer_name="./" \
--dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \
--max_seq_length="512" \
--weight_decay="0.01" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--learning_rate="8e-3" \
--warmup_steps="5000" \
--overwrite_output_dir \
--cache_dir /mnt/disks/flaxdisk/cache/ \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--logging_steps="500" \
--num_train_steps="1000000" \
--num_eval_samples="5000" \
--save_steps="5000" \
--eval_steps="5000" \
--preprocessing_num_workers 96 \
--adafactor \
--push_to_hub
```
|
Salesforce/qaconv-bert-large-uncased-whole-word-masking-squad2 | 48552b4258b70a1f007efb9d01877fe88adbe22b | 2021-05-27T19:15:05.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Salesforce | null | Salesforce/qaconv-bert-large-uncased-whole-word-masking-squad2 | 40 | null | transformers | 6,435 | Entry not found |
alireza7/ARMAN-MSR-persian-base | be651ea4fbace219818c4958f37cb85a2a801291 | 2021-09-29T19:17:50.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/ARMAN-MSR-persian-base | 40 | null | transformers | 6,436 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
ankitkhowal/minutes-of-meeting | 359cda32c1112823f9ec08f568c78062727b14fc | 2022-03-09T18:08:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ankitkhowal | null | ankitkhowal/minutes-of-meeting | 40 | null | transformers | 6,437 | Model to summarize the meeting transcripts. |
any0019/text_style_classifier | e0aa348f627e207319e7c684ffdb4e05cb1f3ac9 | 2021-12-14T13:35:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | any0019 | null | any0019/text_style_classifier | 40 | null | transformers | 6,438 | Entry not found |
bertin-project/bertin-base-gaussian | fea8231d1b0dab3b5ab5d9157e36935d2685e197 | 2021-09-23T13:41:46.000Z | [
"pytorch",
"jax",
"tensorboard",
"joblib",
"roberta",
"fill-mask",
"es",
"transformers",
"spanish",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | bertin-project | null | bertin-project/bertin-base-gaussian | 40 | null | transformers | 6,439 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---
This is a **RoBERTa-base** model trained from scratch in Spanish.
The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled ) subsampling documents to a total of about 50 million examples. Sampling is biased towards average perplexity values (using a Gaussian function), more often discarding documents with very large values (poor quality) or very small values (short, repetitive texts).
This model has been trained for 250.000 steps.
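A minimal fill-mask sketch (added, not from the original card), using the widget example above:
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="bertin-project/bertin-base-gaussian")
# Prints the top predictions for the masked token
for pred in fill_mask("Fui a la librería a comprar un <mask>."):
    print(f"{pred['score']:.3f}  {pred['token_str']}")
```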
Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
cardiffnlp/twitter-roberta-base-jun2020 | c961f32a31e1ba8674577c10bcbcbb0f51ab975a | 2022-02-09T11:14:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-jun2020 | 40 | null | transformers | 6,440 | # Twitter June 2020 (RoBERTa-base, 99M)
This is a RoBERTa-base model trained on 98.66M tweets until the end of June 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.52684 not
2) 0.18349 getting
3) 0.07971 fully
4) 0.05598 being
5) 0.02347 self
------------------------------
I keep forgetting to bring a <mask>.
1) 0.13266 mask
2) 0.04859 book
3) 0.04851 laptop
4) 0.03123 pillow
5) 0.02747 blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.35750 The
2) 0.32703 the
3) 0.13048 End
4) 0.02261 this
5) 0.01066 This
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99078 The movie was great
2) 0.96610 Just finished reading 'Embeddings in NLP'
3) 0.96095 What time is the next game?
4) 0.95855 I just ordered fried chicken 🐣
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
dadada/opus-mt-zh-en-ep1-renri-zh-to-en | e6c208e18c39a3149fccfd87fffe0ca3477f0c67 | 2021-08-22T06:54:09.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | dadada | null | dadada/opus-mt-zh-en-ep1-renri-zh-to-en | 40 | null | transformers | 6,441 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: opus-mt-zh-en-ep1-renri-zh-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 18.2579
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-zh-en-ep1-renri-zh-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2192
- Bleu: 18.2579
- Gen Len: 28.4817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.2194 | 1.0 | 59472 | 2.2192 | 18.2579 | 28.4817 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
elozano/tweet_offensive_eval | 36e34e4a6c0f9cb628af3b481c8d95dfe4d1fc3f | 2022-02-07T17:59:03.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"transformers",
"license:mit"
] | text-classification | false | elozano | null | elozano/tweet_offensive_eval | 40 | 2 | transformers | 6,442 | ---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "You're a complete idiot!"
example_title: "Offensive"
- text: "I am tired of studying for tomorrow's exam"
example_title: "Non-Offensive"
---
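The card ships only metadata and widget examples; a hedged usage sketch (the returned label names come from the checkpoint's config and are not documented here):
```python
from transformers import pipeline
clf = pipeline("text-classification", model="elozano/tweet_offensive_eval")
# Widget examples from the card; returned labels depend on the model config
print(clf("You're a complete idiot!"))
print(clf("I am tired of studying for tomorrow's exam"))
```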
|
flax-community/swe-gpt-wiki | 5141a894196173cbee58fbecf1936dd6660a35c0 | 2021-07-17T07:46:24.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"sv",
"transformers"
] | text-generation | false | flax-community | null | flax-community/swe-gpt-wiki | 40 | 1 | transformers | 6,443 | ---
language: sv
widget:
- text: "Jag är en svensk språkmodell."
---
# GPT2-svenska-wikipedia
A Swedish GPT2-style model trained using the Flax CLM pipeline on the Swedish part of the wiki40b dataset.
https://huggingface.co/datasets/wiki40b
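A minimal generation sketch (added, not part of the original card), using the widget prompt; generation settings are illustrative:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="flax-community/swe-gpt-wiki")
# Swedish prompt taken from the card's widget
print(generator("Jag är en svensk språkmodell.", max_length=60, do_sample=True)[0]["generated_text"])
```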
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
## Data cleaning and preprocessing
The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for the beam_runner to make the dataset work.
```python
from datasets import load_dataset
def load_and_clean_wiki():
    dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train")
    #dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner')
    dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
    filtered_dataset = dataset.map(filter_wikipedia)
    # filtered_dataset[:3]
    # print(filtered_dataset[:3])
    return filtered_dataset
def filter_wikipedia(batch):
    batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
    batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
    batch["text"] = " ".join(batch["text"].split("\xa0"))
    return batch
```
## Training script
The following training script was used to train the model.
```bash
./run_clm_flax.py \
    --output_dir="${MODEL_DIR}" \
    --model_type="gpt2" \
    --config_name="${MODEL_DIR}" \
    --tokenizer_name="${MODEL_DIR}" \
    --dataset_name="wiki40b" \
    --dataset_config_name="sv" \
    --do_train --do_eval \
    --block_size="512" \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-3" \
    --warmup_steps="1000" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --weight_decay="0.01" \
    --overwrite_output_dir \
    --num_train_epochs="20" \
    --logging_steps="500" \
    --save_steps="1000" \
    --eval_steps="2500" \
    --push_to_hub
```
|
google/t5-efficient-xl-nl16 | 2c27584bfd88690ebb930dc31e81d3791ef7b91f | 2022-02-15T10:57:40.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-xl-nl16 | 40 | 0 | transformers | 6,444 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-XL-NL16 (Deep-Narrow version)
T5-Efficient-XL-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-xl-nl16** - is of model type **Xl** with the following variations:
- **nl** is **16**
It has **1912.07** million parameters and thus requires *ca.* **7648.29 MB** of memory in full precision (*fp32*)
or **3824.14 MB** of memory in half precision (*fp16* or *bf16*).
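As a quick sanity check (an added note, not from the original card), these memory figures follow directly from the quoted parameter count at 4 bytes per parameter in fp32 and 2 bytes per parameter in fp16/bf16:
```python
params_millions = 1912.07  # parameter count quoted above, in millions
print(params_millions * 4)  # ~7648 MB in fp32
print(params_millions * 2)  # ~3824 MB in fp16/bf16
```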
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
helboukkouri/character-bert | 7fd0716432001b0b67a001db1c596edc213a835c | 2021-05-17T10:40:43.000Z | [
"pytorch",
"character_bert",
"transformers"
] | null | false | helboukkouri | null | helboukkouri/character-bert | 40 | 1 | transformers | 6,445 | Entry not found |
huggingtweets/empathywarrior | 758c75ea347cdfa99ebd8184a02148df56244658 | 2021-05-22T03:06:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/empathywarrior | 40 | null | transformers | 6,446 | ---
language: en
thumbnail: https://www.huggingtweets.com/empathywarrior/1616731099747/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1367656161525166087/96tn3PnK_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lucia Lorenzi 🤖 AI Bot </div>
<div style="font-size: 15px">@empathywarrior bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@empathywarrior's tweets](https://twitter.com/empathywarrior).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 580 |
| Short tweets | 227 |
| Tweets kept | 2425 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lyqowfz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @empathywarrior's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30a6mswh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30a6mswh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/empathywarrior')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/heartswellzz | ff28400d302c35bf5d986544b9f90c24d2710d27 | 2021-05-22T06:43:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/heartswellzz | 40 | null | transformers | 6,447 | ---
language: en
thumbnail: https://www.huggingtweets.com/heartswellzz/1616679682815/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1264932555335360515/Ga3y8vi1_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ashley 🤖 AI Bot </div>
<div style="font-size: 15px">@heartswellzz bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@heartswellzz's tweets](https://twitter.com/heartswellzz).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1190 |
| Retweets | 167 |
| Short tweets | 121 |
| Tweets kept | 902 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/24x0r300/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heartswellzz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mz3zqs4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mz3zqs4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/heartswellzz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nickjfuentes | 8bb58125c047d85837a849c8fb34b7a42dcfcb4e | 2021-05-22T16:19:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nickjfuentes | 40 | null | transformers | 6,448 | ---
language: en
thumbnail: https://www.huggingtweets.com/nickjfuentes/1603507476320/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/885086017623130114/KLyK4cVD_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Nicholas J. Fuentes 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@nickjfuentes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@nickjfuentes's tweets](https://twitter.com/nickjfuentes).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3033</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1751</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>270</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1012</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3m3hnz6a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nickjfuentes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/difypst9) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/difypst9/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/nickjfuentes'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
leduytan93/Fine-Tune-XLSR-Wav2Vec2-Speech2Text-Vietnamese | 67e3051208e7188148ac71f791ae96e5874e6d86 | 2021-07-06T09:51:23.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"transformers",
"language-modeling",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | leduytan93 | null | leduytan93/Fine-Tune-XLSR-Wav2Vec2-Speech2Text-Vietnamese | 40 | null | transformers | 6,449 | ---
language: vi
datasets:
- common_voice
- FOSD: https://data.mendeley.com/datasets/k9sxg2twv4/4
metrics:
- wer
tags:
- language-modeling
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: MT5 Fix Asr Vietnamese by Ontocord
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 25.207182
---
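The card ships only metadata; below is a hedged transcription sketch following the common XLSR-Wav2Vec2 usage pattern (the audio path and the 16 kHz resampling step are assumptions, not taken from the card):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_id = "leduytan93/Fine-Tune-XLSR-Wav2Vec2-Speech2Text-Vietnamese"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
# "sample.wav" is a placeholder path; XLSR models expect 16 kHz mono input
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze().numpy()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```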
|
m3hrdadfi/albert-fa-base-v2-sentiment-multi | 15abc2f4ede4d3571e21b2b65c1383ec1a50fae8 | 2020-12-26T08:46:20.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2-sentiment-multi | 40 | null | transformers | 6,450 | ---
language: fa
license: apache-2.0
---
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, in both binary and multi-class forms.
## Results
The model obtained an F1 score of 70.72% on a combination of all three datasets with the multi-class labels `Negative`, `Neutral`, and `Positive`.
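A hedged usage sketch (not part of the original card); the example comment is illustrative and the label names come from the checkpoint's config:
```python
from transformers import pipeline
model_id = "m3hrdadfi/albert-fa-base-v2-sentiment-multi"
sentiment = pipeline("text-classification", model=model_id, tokenizer=model_id)
# Illustrative Persian comment ("the product quality was excellent")
print(sentiment("کیفیت محصول عالی بود"))
```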
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
m3hrdadfi/gpt2-persian-qa | b7ce36e9ae4d841bdb0f4b559fff47e8ec4aaeb5 | 2021-07-30T09:00:42.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"fa",
"dataset:persian_qa",
"dataset:parsinlu_reading_comprehension",
"transformers"
] | text-generation | false | m3hrdadfi | null | m3hrdadfi/gpt2-persian-qa | 40 | null | transformers | 6,451 | ---
language: fa
datasets:
- persian_qa
- parsinlu_reading_comprehension
tags:
- text-generation
widget:
- text: "قرارداد کرسنت قراردادی برای فروش روزانه معادل 500 میلیون فوت مکعب، گاز ترش میدان سلمان است، که در سال 1381 و در زمان وزارت بیژن نامدار زنگنه در دولت هفتم مابین شرکت کرسنت پترولیوم و شرکت ملی نفت ایران منعقد گردید. مذاکرات اولیه این قرارداد از سال 1997 آغاز شد و در نهایت، سال 2001 ( 1381 ) به امضای این تفاهم نامه مشترک انجامید. بر اساس مفاد این قرارداد، مقرر شده بود که از سال 2005 با احداث خط لوله در خلیج فارس، گاز فرآورده نشده میدان سلمان (مخزن مشترک با ابوظبی)، به میزان روزانه 500 میلیون فوت مکعب (به قول برخی منابع 600 میلیون فوت مکعب) به امارات صادر شود. این قرارداد مطابق قوانین داخلی ایران بسته شدهو تنها قرارداد نفتی ایران است که از طرف مقابل خود، تضمین گرفتهاست. اجرای این پروژه در سال 1384 با دلایل ارایه شده از سوی دیوان محاسبات ایران از جمله تغییر نیافتن بهای گاز صادراتی و ثابت ماندن آن در هفت سال اول اجرای قرارداد متوقف شد. این در حالی است که طبق تعریف حقوقی، دیوان محاسبات ایران، حق دخالت در قراردادها، پیش از آنکه قراردادها اجرایی و مالی شوند را ندارد. پرسش: طرفین قرار داد کرسنت کیا بودن؟ پاسخ:"
- text: "ناف جایی قرار گرفته که در واقع بندناف در داخل رحم در آنجا به شکم جنین وصل بودهاست. بندناف که جفت را به جنین متصل کرده بعد از تولد از نوزاد جدا میشود. برای جدا کردن بند ناف از دو پنس استفاده میکنند و بین آن دو را میبرند. پنس دیگری نزدیک شکم نوزاد قرار داده میشود که بعد از دو روز برداشته خواهد شد. بندناف باقیمانده طی 15 روز خشک شده و میافتد و به جای آن اسکاری طبیعی به جای میماند. البته بر خلاف تصور عامه مردم شکل ناف در اثر بریدن بند ناف به وجود نمیآید و پیش از این در شکم مادر حالت ناف شکل گرفتهاست. شکل ناف در میان مردم مختلف متفاوت است و اندازه آن بین 1 ٫ 5 تا 2 سانتیمتر است. تمام پستانداران جفتزیست ناف دارند. ناف در انسانها به سادگی قابل مشاهدهاست. پرسش: بند ناف انسان به کجا وصل است؟ پاسخ:"
- text: "بیش از ده هزار سال است که انسانها در قاره آمریکا زندگی میکنند. قاره آمریکا توسط کریستف کلمب و در سال 1492 کشف شد اما او به اشتباه فکر کرد که آنجا هندوستان است اما مدتها بعد آمریگو وسپوچی اعلام کرد که این قاره جدیدی است. اما تاریخ آمریکا به عنوان یک کشور مستقل به سال 1783 میلادی بازمیگردد که در آن آمریکا بر طبق معاهده پاریس به رسمیت شناخته گردید. پرسش: قاره آمریکا در چه سالی کشف شد؟ پاسخ:"
- text: "الکترونیک آرتز یا بهطور مختصر ایای شرکتی آمریکایی است که از بزرگترین شرکتهای تولید و توزیع بازیهای رایانهای بهشمار میآید. تریپ هاوکینگز این شرکت را در سال 1982 ت سیس کرد و هدف اولیه او تولید انواعی از بازیهای رایانهای بود که در خانه میتوان با آنها بازی کرد. ایای در اواخر دهه 80 به بهبود و توسعه حوزه کاری خود در زمینه بازیهای رایانهای پرداخت و با جذب چندین چهره مبتکر، موفق به رشد و توسعه بسیار در این زمینه شد. شرکت ایای در سال 2007 رتبه هشتم در فهرست بزرگترین شرکتهای طراحی نرمافزار را به خود اختصاص داد. درآمد سالانه شرکت ایای در مه 2008 به بیش از 4 ٫ 02 میلیارد دلار رسید و این مقدار، رو به افزایش است. موفقترین بازیهای ایای، بازیهای ورزشی (که توسط بخش ایای اسپورتز، وابسته به این شرکت تولید میشود)، بازیهای برگرفته از فیلمهای محبوب و البته بازیهای معروفی است که این شرکت همواره به ساختن آنها مشغول بودهاست از جمله این بازیها میتوان به بازیهایی مانند نید فور اسپید، مدال افتخار، سیمز، بتل فیلد و برن اوت اشاره کرد. یک نکته حایز اهمیت در مورد این شرکت این است که در جمع 5 شرکت منفور دنیا قرار دارد. پرسش: بازیهای سبک ورزشی شرکت الکترونیک آرتز توسط کدوم قسمت ساخته میشه؟ پاسخ:"
- text: "کویر یا نمک زار منطقهای است که به دلیل موقعیت جغرافیایی (معمولا ختم رودخانهها در آن) و حرارت شدید آفتاب به نمکزار بدل شده باشد. برخی کویرها قبلا دریاچه یا دریاهایی بودهاند که در اثر تبخیر آب از آنها به نمکزار بدل شدهاند. کویر مرکزی ایران که دشت کویر نامیده میشود، درون خود تعداد زیادی کویر کوچکتر، مانند کویر درانجیر، کویر ساغند، کویر بند ریگ را جا دادهاست. با وجود اینکه در بین عامه مردم رایج است که اصطلاح 'کویر' و 'بیابان' را بهجای یکدیگر بهکار میبرند ولی بین این دو اصطلاح تفاوت اساسی وجود دارد. بیابان به بخشی از مناطق خشک گفته میشود که بارندگی سالانه آن کمتر از 50 میلیمتر است و ممکن است چند سال در آن باران نبارد و با کمآبی و تبخیر شدید مواجه است و پوشش گیاهی آن بسیار ضعیف است. اما کویر به زمینهای رسی پفکرده، با شوری و نمک بسیار شدید گفته میشود که گیاهان نمیتوانند در آن رشد نمایند. در بعضی از کویرها که شوری خاک کمتر است، ممکن است گیاهانی مانند گز که دربرابر املاح نمکی مقاوم است، در آن رشد نماید. پرسش: بافت گیاهی در کویر چگونه است؟ پاسخ:"
- text: "قطبنما وسیلهای برای تعیین جهت (جهتیابی) است. این وسیله با استفاده از میدان مغناطیسی زمین جهت قطب شمال را نشان میدهد که در حقیقت شمال مغناطیسی زمین است که با شمال حقیقی مقداری فاصله دارد. زاویه بین شمال حقیقی و شمال مغناطیسی، میل مغناطیسی نامیده میشود. امروزه برای تعیین شمال حقیقی از قطبنماهای پیشرفتهتری مانند قطبنمای ژیروسکوپی استفاده میشود. قطبنمایی که از یک آهنربا ساخته شده یعنی قطبنمای مغناطیسی جهت را نشان میدهد زیرا زمین چون آهنربای بزرگی عمل میکند. نیروی آهنربایی زمین قطبنما یا سوزن مغناطیسی را به سوی شمال و جنوب میکشد. کسی نمیداند که چه کسی اول بار قطبنما را ساخت. برخی گمان میکنند که چینیان نخستین بار قطبنما را ساختند برخی دیگر میگویند که قطبنما در ایتالیا اختراع شدهاست. بعضی از نخستین قطبنماها تکههای اکسید مغناطیسی آهن بودهاند که بر قطعات چوبی یا چوبپنبه قرار داشتند و در یک ظرف آب شناور بودند. اکسید مغناطیسی آهن نوعی کانی آهن است یک نام دیگر آن ماگنتیت است. تکههای ماگنتیت آهنرباهای طبیعی هستند. پس از آن مردم ساختن آهنربا از فولاد را یادگرفتند و توانستند قطبنماهای بهتری بسازند. پرسش: اکسید مغناطیسی آهن چیه؟ پاسخ:"
- text: "لاستیک طبیعی که لاستیک هندی یا کایوچو نیز نامیده میشود، قدیمیترین الاستومر تجاری است که از لاتکس ساخته میشود. لاتکس ترشحات داخلی یک درخت گرمسیری به نام درخت لاستیک است. لاتکس در شکل خام خود، نوعی چسب بسیار خوب است و میتوان با انحلال آن در حلالهای مناسب، چسبهای مختلفی تولید کرد. لاتکس در ابتدای تولید، از پلیمرهایی از ترکیب آلی ایزوپرین با ناخالصیهای جزیی از سایر ترکیبات آلی، به علاوه آب تشکل شدهاست. تایلند، مالزی و اندونزی کشورهای پیشرو در تولید لاستیک هستند. انواع پلی ایزوپرین که به عنوان لاستیکهای طبیعی استفاده میشوند، در دسته الاستومرها طبقهبندی میشوند. اولین استفاده از لاستیک توسط فرهنگهای بومی آمریکای میانه انجام شد. آنها از این لاستیک برای ساخت توپ بازی استفاده میکردند. بعدها لاستیک توسط فرهنگهای مایا و آزتک مورد استفاده قرار گرفت. آزتکها علاوه بر ساخت توپ، از لاستیک برای اهداف دیگری مانند ساخت ظروف و ضدآب ساختن منسوجات از طریق اشباع آنها با شیره لاتکس استفاده میکردند. پرسش: آمریکای میانه در ابتدا از لاستیک برای تولید چی استفاده میکرد؟ پاسخ:"
- text: "آتیلا ( 405 453 میلادی) یکی از رهبران قوم هون بود که بزرگترین امپراتوری را در اروپا، از رود اورال تا دانوب تشکیل داد. در زمان فرمانروایی، وی یکی از مخوفترین دشمنان امپراتوریهای روم غربی و شرقی بود. رومیان به او لقب تازیانه خداوند داده بودندو به او باج میدادند تا کاری به کار رم نداشته باشد. آتیلا در آغاز به ایران حمله کرد و با شکست مواجه شد. حملهای که او در سال 441 میلادی به امپراتوری بیزانس کرد باعث شد تا تصمیم به حملات بیشتری به سوی غرب بگیرد. وی در اروپا شهرهای بسیاری را نابود و غارت کرد.سرانجام، در نبرد دشت کاتالانیها، در مقابل فلاویوس آییتیوس شکست خورد. در این جنگ، رومیها و آلانیها به مصاف با هونها رفتند.هونها در ناحیه بین رود ولگا و دشتهای مجارستان میزیستند، از آغاز سده پنجم به تاخت و تازهای فراوان و پرسودی در حوالی رود دانوب دست زدند، بنابراین، در حدود 445 تا 440 میلادی، دربار آتیلا به تجمل و زیبایی آراسته بود، شماره اسیرانی که میگرفتند بسیار بود، هر دو زبان یونانی و لاتین در دربار تکلم میشد، و دبیران رومیتبار رویدادهای خارجی را همواره به آگاهی خان میرساندند، آتیلا، زرد رنگتر از بیشتر افراد قومش بود، پرسش: رومیها چه لقبی به اتیلا داده بودند؟ پاسخ:"
- text: "ماده سوختنی مادهای است که در اثر تغییرات (معمولا شیمیایی) تولید انرژی مفید میکند که بعدا میتواند تبدیل به انرژی مکانیکی شود. این تغییرات معمولا با سوختن (یعنی ترکیب با اکسیژن) همراه است. فرایندهای مورد استفاده برای تبدیل سوخت به انرژی عبارتند از: واکنشهای شیمیایی مختلف و گرمازا، واکنشهای هستهای مانند شکافت هستهای یا گداخت هستهای. هیدروکربنها تا حد زیادی شایعترین منبع سوخت مورد استفاده توسط انسان است، اما در بسیاری از موارد فلزات رادیو اکتیو نیز استفاده میشوند. اولین استفاده از سوخت توسط بشر ، احتراق و سوزاندن تکههای چوب در حدود 2 میلیون سال پیش توسط انسان راست قامت بود . به صورت کلی در طول تاریخ زندگی بشر که تا به حال با آن آشنا شدهایم ، تنها سوخت هایی که بیشترین استفاده را داشته است از گیاهان و یا چربی حیوانات بدست میآمده است و مورد استفاده انسان قرار گرفته است . انسانها از 6000 سال قبل از میلاد مسیح برای ذوب آهن از زغال چوب و مشتقات چوب استفاده میکردند. بعدها این سوختها جای خودشان را با کک عوض کردند . به دلیل اینکه در حوالی قرن 18 جنگلهای اروپا در حال نابودی بودند. پرسش: سوخت چجوری انرژی قابل استفاده تولید میکنه؟ پاسخ:"
- text: "ژرمن شپرد یا سگ چوپان آلمانی یکی از نژادهای سگ است. سگ چوپان آلمانی یکی از نژادهای اصیل آلمانی است که برای نخستین بار در سال 1899 ثبت گردید. سگی باهوش، شجاع و مناسب برای کارهای مختلف از جمله گله داری، نگهبانی، راهنمای نابینایان، همراه خانواده، و جستجو و نجات است. قد استاندارد تا جدوگاه در نرها 60 تا 65 سانتیمتر و در مادهها 55 تا 60 سانتیمتر است. طول عمر از 9 تا 13 سال است. این نژاد را اکثر افراد به دلیل استفاده در فیلمهایی نظیر رکس میشناسند و همچنین این سگ حضور موثری در صحنههای امدادی دارد. در خاورمیانه دستههایی از شپردهای پلاس فراوان هستند اما نژاد ژرمن شپرد بیشتر در اروپا زندگی دیده شدهاست. مهمترین ویژگی در این نژاد رفتارهای اشرافی، شهامت و توانایی آموختن رفتارها و فعالیتهای اختصاصی است. نخستین ویژگی یک جرمن شپرد خوب، قدرت، چالاکی، عضلات مناسب و هوشیاری است. رنگ در سگهای ژرمن شپرد متفاوت است و تقریبا اکثر رنگها قابل قبول هستند. با این وجود رنگهای خیلی کم رنگ یا سفید یک دست قابل قبول نمیباشد. پرسش: عمر سگ ژرمن شپرد چند ساله؟ پاسخ:"
---
# GPT2 QA - Persian
It is a new approach to using GPT2 for downstream NLP tasks like QA. The model was trained on PersianQA and evaluated on PersianQA and ParsiNLU (Reading Comprehension).
## Dataset
- [PersianQA](https://github.com/sajjjadayobi/PersianQA)
- [ParsiNLU](https://github.com/persiannlp/parsinlu)
## Evaluation
The following table summarizes the scores obtained by the model.
| Dataset | F1 Score (%) | Exact Match (%) | Total (#) |
|:---------:|:------------:|:---------------:|:---------:|
| ParsiNLU | 46.95 | 20.39 | 564 |
| PersianQA | 45.93 | 23.19 | 651 |
## Demo
[Streamlit GPT2 QA - Persian](https://huggingface.co/spaces/m3hrdadfi/gpt2-persian-qa)
## How to use
TODO (will be filled shortly)... |
meghanabhange/Hinglish-DistilBert | 16200381e14a51d090e9095cead77c5dea305f5f | 2020-10-21T12:46:32.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | meghanabhange | null | meghanabhange/Hinglish-DistilBert | 40 | null | transformers | 6,452 | Entry not found |
mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es | f292c3eb72b2a7a4074d14541ad86320fa643e64 | 2022-02-09T13:39:25.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"es",
"dataset:stsb_multi_mt",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | mrm8488 | null | mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es | 40 | 1 | sentence-transformers | 6,453 | ---
language: es
thumbnail: https://imgur.com/a/G77ZqQN
pipeline_tag: sentence-similarity
datasets:
- stsb_multi_mt
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Distiluse-m-v2 fine-tuned on stsb_multi_mt for Spanish Semantic Textual Similarity
This is a [sentence-transformers](https://www.SBERT.net) model (distiluse-base-multilingual-cased-v2): It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Nerea va a comprar un cuadro usando bitcoins", "Se puede comprar arte con bitcoins"]
model = SentenceTransformer('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
model = AutoModel.from_pretrained('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## How to evaluate
```py
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
test_data = load_dataset('stsb_multi_mt', 'es', split='test')
test_data = test_data.rename_columns({'similarity_score': 'label'})
test_data = test_data.map(lambda x: {'label': x['label'] / 5.0})
samples = []
for sample in test_data:
samples.append(InputExample(
texts=[sample['sentence1'], sample['sentence2']],
label=sample['label']
))
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(
samples, write_csv=False
)
model = SentenceTransformer('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
evaluator(model)
# It outputs: 0.7604056195656299
```
## Evaluation Results
**Spearman’s rank correlation: 0.7604056195656299**
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 906 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 271,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mrm8488/t5-small-finetuned-text2log | 8a1cc5d2cfb22837838824f46147c0eeffaf3acc | 2022-02-23T18:30:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"en",
"transformers",
"generated_from_trainer",
"tex2log",
"log2tex",
"foc",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-text2log | 40 | 1 | transformers | 6,454 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- tex2log
- log2tex
- foc
widget:
- text: "translate to nl: all x1.(_explanation(x1) -> -_equal(x1))"
- text: "translate to fol: All chains are bad."
model-index:
- name: t5-small-text2log
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5 (small) fine-tuned on Text2Log
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the Text2Log dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0749 | 1.0 | 21661 | 0.0509 |
| 0.0564 | 2.0 | 43322 | 0.0396 |
| 0.0494 | 3.0 | 64983 | 0.0353 |
| 0.0425 | 4.0 | 86644 | 0.0332 |
| 0.04 | 5.0 | 108305 | 0.0320 |
| 0.0381 | 6.0 | 129966 | 0.0313 |
### Usage:
```py
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
MODEL_CKPT = "mrm8488/t5-small-finetuned-text2log"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = T5ForConditionalGeneration.from_pretrained(MODEL_CKPT).to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_CKPT)
def translate(text):
inputs = tokenizer(text, padding="longest", max_length=64, return_tensors="pt")
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask, early_stopping=False, max_length=64)
return tokenizer.decode(output[0], skip_special_tokens=True)
prompt_nl_to_fol = "translate to fol: "
prompt_fol_to_nl = "translate to nl: "
example_1 = "Every killer leaves something."
example_2 = "all x1.(_woman(x1) -> exists x2.(_emotion(x2) & _experience(x1,x2)))"
print(translate(prompt_nl_to_fol + example_1)) # all x1.(_killer(x1) -> exists x2._leave(x1,x2))
print(translate(prompt_fol_to_nl + example_2)) # Every woman experiences emotions.
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
oskrmiguel/mt5-simplification-spanish | 0b78e767ac9bf700152ba5a9761f09a69343a552 | 2022-01-27T13:32:24.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"transformers",
"simplification",
"spanish",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | oskrmiguel | null | oskrmiguel/mt5-simplification-spanish | 40 | 4 | transformers | 6,455 |
---
language:
- es
thumbnail:
tags:
- simplification
- mt5
- spanish
license: cc-by-nc-sa-4.0
metrics:
- sari
widget:
- text: "La Simplificación Textual es el proceso de transformación de un texto a otro texto equivalente más comprensible para un determinado tipo de grupo o población."
- text: "Los textos simplificados son apropiados para muchos grupos de lectores, como, por ejemplo: estudiantes de idiomas, personas con discapacidades intelectuales y otras personas con necesidades especiales de lectura y comprensión.
"
---
# mt5-simplification-spanish
## Model description
This is a fine-tuned mt5-small model for generating simple text from complex text.
This model was created with the IXA research group of the University of the Basque Country. The model has been evaluated with the Sari, Bleu and Fklg metrics; it was trained and tested using the [Simplext corpus](https://dl.acm.org/doi/10.1145/2738046).
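As the card does not include a usage snippet, here is a minimal sketch (not from the original authors) using the standard `transformers` seq2seq API; the generation settings and example sentence are illustrative assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "oskrmiguel/mt5-simplification-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "La Simplificación Textual es el proceso de transformación de un texto a otro texto equivalente más comprensible."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# num_beams and max_length are illustrative values, not the authors' settings
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```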
## Dataset
Simplext
## Model Evaluation
Bleu: 13,186
Sari: 42,203
Fklg: 10,284
## Authors
Oscar M. Cumbicus-Pineda, Itziar Gonzalez-Dios, Aitor Soroa, November 2021
## Code
https://github.com/oskrmiguel/mt5-simplification |
shrugging-grace/tweetclassifier | e584ba8282d48af1f1cb0247e7a551b751ed7365 | 2021-05-20T05:55:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | shrugging-grace | null | shrugging-grace/tweetclassifier | 40 | null | transformers | 6,456 | # shrugging-grace/tweetclassifier
## Model description
This model classifies tweets as either relating to the Covid-19 pandemic or not.
## Intended uses & limitations
It is intended to be used on tweets commenting on UK politics, in particular those trending with the #PMQs hashtag, as this refers to weekly Prime Ministers' Questions.
#### How to use
``LABEL_0`` means that the tweet relates to Covid-19
``LABEL_1`` means that the tweet does not relate to Covid-19
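A minimal way to query the model with the `transformers` pipeline (an illustrative sketch, not an official snippet; the example tweet is made up):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="shrugging-grace/tweetclassifier")
# LABEL_0 = relates to Covid-19, LABEL_1 = does not relate to Covid-19
print(classifier("The government must do more on testing and PPE #PMQs"))
```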
## Training data
The model was trained on 1,000 tweets (with the "#PMQs" hashtag), which were manually labeled by the author. The tweets were collected between May and July 2020.
### BibTeX entry and citation info
This was based on a pretrained version of BERT.
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
|
sibt-rj/albert-large-urdu | 997d0507940940ecbb6af536daeb1f4e1b27754e | 2020-12-16T20:27:42.000Z | [
"pytorch",
"albert",
"fill-mask",
"ur",
"dataset:urdu-text-news",
"transformers",
"urdu",
"language-model",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | sibt-rj | null | sibt-rj/albert-large-urdu | 40 | null | transformers | 6,457 | ---
language:
- ur
tags:
- urdu
- language-model
license: mit
datasets:
- urdu-text-news
---
|
tennessejoyce/titlewave-bert-base-uncased | a8d7b656a3058cceefa99dc51cf3e01fdf209ebe | 2021-05-20T07:29:09.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"license:cc-by-4.0"
] | text-classification | false | tennessejoyce | null | tennessejoyce/titlewave-bert-base-uncased | 40 | null | transformers | 6,458 | ---
language: en
license: cc-by-4.0
widget:
- text: "[Gmail API] How can I extract plain text from an email sent to me?"
---
# Titlewave: bert-base-uncased
## Model description
Titlewave is a Chrome extension that helps you choose better titles for your Stack Overflow questions. See the [github repository](https://github.com/tennessejoyce/TitleWave) for more information.
This is one of two NLP models used in the Titlewave project, and its purpose is to classify whether question will be answered or not just based on the title. The [companion model](https://huggingface.co/tennessejoyce/titlewave-t5-small) suggests a new title based on on the body of the question.
## Intended use
Try out different titles for your Stack Overflow post, and see which one gives you the best chance of receiving an answer.
You can use the model through the API on this page (hosted by HuggingFace) or install the Chrome extension by following the instructions on the [github repository](https://github.com/tennessejoyce/TitleWave), which integrates the tool directly into the Stack Overflow website.
You can also run the model locally in Python like this (which automatically downloads the model to your machine):
```python
>>> from transformers import pipeline
>>> classifier = pipeline('sentiment-analysis', model='tennessejoyce/titlewave-bert-base-uncased')
>>> classifier('[Gmail API] How can I extract plain text from an email sent to me?')
[{'label': 'Answered', 'score': 0.8053370714187622}]
```
The 'score' in the output represents the probability of getting an answer with this title: 80.5%.
## Training data
The weights were initialized from the [BERT base model](https://huggingface.co/bert-base-uncased), which was trained on BookCorpus and English Wikipedia.
Then the model was fine-tuned on the dataset of previous Stack Overflow post titles, which is publicly available [here](https://archive.org/details/stackexchange).
Specifically I used three years of posts from 2017-2019, filtered out posts which were closed (e.g., duplicates, off-topic), and selected 5% of the remaining posts at random to use in the training set, and the same amount for validation and test sets (278,155 posts each).
## Training procedure
The model was fine-tuned for two epochs with a batch size of 32 (17,384 steps total) using 16-bit mixed precision.
After some hyperparameter tuning, I found that the following two-phase training procedure yields the best performance (ROC-AUC score) on the validation set:
* In the first epoch, all layers were frozen except for the last two (pooling layer and classification layer) and a learning rate of 3e-4 was used.
* In the second epoch all layers were unfrozen, and the learning rate was decreased by a factor of 10 to 3e-5.
Otherwise, all parameters were set to the defaults listed [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments),
including the AdamW optimizer and a linearly decreasing learning schedule (both of which were reset between the two epochs). See the [github repository](https://github.com/tennessejoyce/TitleWave) for the scripts that were used to train the model.
## Evaluation
See [this notebook](https://github.com/tennessejoyce/TitleWave/blob/master/model_training/test_classifier.ipynb) for the performance of the title classification model on the test set.
|
uer/chinese_roberta_L-8_H-256 | c3e6cb43e6908a7cf43de4ea5a20b8fd28d84981 | 2022-07-15T08:14:03.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-8_H-256 | 40 | null | transformers | 6,459 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
unideeplearning/polibert_sa | 9f191fddcab67467555d22cebe8f88a1a908867a | 2021-09-23T16:42:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"it",
"transformers",
"sentiment",
"Italian",
"license:mit"
] | text-classification | false | unideeplearning | null | unideeplearning/polibert_sa | 40 | null | transformers | 6,460 | ---
language: it
tags:
- sentiment
- Italian
license: mit
widget:
- text: Giuseppe Rossi è un ottimo politico
---
# 🤗 + polibert_SA - POLItic BERT based Sentiment Analysis
## Model description
This model performs sentiment analysis on Italian political twitter sentences. It was trained starting from an instance of "bert-base-italian-uncased-xxl" and fine-tuned on an Italian dataset of tweets. You can try it out at https://www.unideeplearning.com/twitter_sa/ (in italian!)
#### Hands-on
```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("unideeplearning/polibert_sa")
model = AutoModelForSequenceClassification.from_pretrained("unideeplearning/polibert_sa")
text = "Giuseppe Rossi è un pessimo politico"
input_ids = tokenizer.encode(text, add_special_tokens=True, return_tensors='pt')
logits = model(input_ids)[0]
logits = logits.squeeze(0)
prob = nn.functional.softmax(logits, dim=0)
# 0 Negative, 1 Neutral, 2 Positive
print(prob.argmax().tolist())
```
#### Hyperparameters
- Optimizer: **AdamW** with learning rate of **2e-5**, epsilon of **1e-8**
- Max epochs: **2**
- Batch size: **16**
## Acknowledgments
Thanks to the support from:
the [Hugging Face](https://huggingface.co/), https://www.unioneprofessionisti.com
https://www.unideeplearning.com/
|
sanchit-gandhi/wav2vec2-gpt2-wandb-grid-search | f2f66e759c9dc110e409c1d75eea893c400b8248 | 2022-03-03T13:39:57.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-gpt2-wandb-grid-search | 40 | null | transformers | 6,461 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
batterydata/batteryscibert-uncased-abstract | 1eca2bfd6e213253a958e71707048fc4ee20625f | 2022-03-05T14:54:59.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
] | text-classification | false | batterydata | null | batterydata/batteryscibert-uncased-abstract | 40 | null | transformers | 6,462 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatterySciBERT-uncased for Battery Abstract Classification
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryscibert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.12,
"Test accuracy": 97.47,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
armageddon/albert-squad-v2-covid-qa-deepset | afcd01d73da4acba7c3ed433138fe8b84a5ff9e0 | 2022-03-01T02:04:26.000Z | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | armageddon | null | armageddon/albert-squad-v2-covid-qa-deepset | 40 | null | transformers | 6,463 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_albert_base_squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_albert_base_squad_v2
This model is a fine-tuned version of [abhilash1910/albert-squad-v2](https://huggingface.co/abhilash1910/albert-squad-v2) on the covid_qa_deepset dataset.
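The card gives no official usage snippet; a question-answering model like this can typically be queried with the `transformers` pipeline. The question and context below are purely illustrative:
```python
from transformers import pipeline
qa = pipeline("question-answering", model="armageddon/albert-squad-v2-covid-qa-deepset")
result = qa(
    question="What type of virus causes COVID-19?",
    context="COVID-19 is caused by SARS-CoV-2, a coronavirus first identified in 2019.",
)
print(result["answer"], result["score"])
```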
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
QuickRead/pegasus-reddit-7e05 | f143e174018b5deaa9f5f89c1bc216fb12707a3c | 2022-03-15T17:13:28.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | QuickRead | null | QuickRead/pegasus-reddit-7e05 | 40 | null | transformers | 6,464 | Entry not found |
MarioBlue/Portuguese-Poems-Small-Gpt2 | 127c492b61ea4d86684842588e28fea1cd576cd6 | 2022-07-29T01:59:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | MarioBlue | null | MarioBlue/Portuguese-Poems-Small-Gpt2 | 40 | null | transformers | 6,465 | This is A GPT2 Fine Tuned Model for Poems in Portuguese
This Model is still not working properly,to generate a Poem you need to write on generator "Poema: [Tittle] \n" or just "Poema:" if you want the model to generate a tittle.
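A minimal sketch of how this prompt format could be used with the `transformers` pipeline (the generation parameters are illustrative assumptions, not the author's settings):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MarioBlue/Portuguese-Poems-Small-Gpt2")
prompt = "Poema: O Mar \n"  # or just "Poema:" to let the model generate a title
poem = generator(prompt, max_length=100, do_sample=True, top_p=0.95)
print(poem[0]["generated_text"])
```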
You are only allowed to use this software for academic purposes; commercial use is not allowed,
and any paper or research made with this software must state where the model came from.
This model was trained on a dataset of poems, so it may generate text with prejudice; be aware.
A better model will be generated soon.
Fine-tuned with this decoder: GPorTuguese-2 (Portuguese GPT-2 small), a language model for Portuguese text generation (and more NLP tasks...). |
alichte/TG-Relation-Model | 44133dec4604b0c375eb13b6fb54834aa2e941c2 | 2022-04-02T19:47:55.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | token-classification | false | alichte | null | alichte/TG-Relation-Model | 40 | null | transformers | 6,466 | ---
license: afl-3.0
---
|
alexjercan/codebert-base-buggy-token-classification | 72e54a618c91e21e8a89272d33f9d370e12bf4a4 | 2022-04-09T16:00:35.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | alexjercan | null | alexjercan/codebert-base-buggy-token-classification | 40 | 1 | transformers | 6,467 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: codebert-base-buggy-token-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codebert-base-buggy-token-classification
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5217
- Precision: 0.6942
- Recall: 0.0940
- F1: 0.1656
- Accuracy: 0.7714
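The card does not include a usage snippet; a minimal sketch with the token-classification pipeline (the code sample passed as input is purely illustrative, and label names depend on the model config) could look like:
```python
from transformers import pipeline
tagger = pipeline(
    "token-classification",
    model="alexjercan/codebert-base-buggy-token-classification",
)
# Each returned entry carries the predicted label for a token of the input snippet
print(tagger("for i in range(len(xs)): print(xs[i + 1])"))
```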
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
medhabi/distilbert-base-uncased-mlm-ta-local | 4ca17169c1392c6d114e2255aec2bd38c57e0cf5 | 2022-04-05T14:05:55.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | medhabi | null | medhabi/distilbert-base-uncased-mlm-ta-local | 40 | null | transformers | 6,468 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-mlm-ta-local
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-mlm-ta-local
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0658
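Since this is a masked-language model, a quick way to inspect it is the fill-mask pipeline; the snippet below is a generic sketch rather than an official example:
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="medhabi/distilbert-base-uncased-mlm-ta-local")
print(fill_mask("The reviewer said the product was really [MASK]."))
```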
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4431 | 1.0 | 3125 | 2.1817 |
| 2.2197 | 2.0 | 6250 | 2.0929 |
| 2.1519 | 3.0 | 9375 | 2.0696 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist | a3c852d467c4900d277dfe918a545346a5c736b4 | 2022-07-07T15:23:09.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"arxiv:1907.11692",
"transformers",
"biomedical",
"clinical",
"eHR",
"spanish",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist | 40 | 1 | transformers | 6,469 | ---
language:
- es
tags:
- biomedical
- clinical
- eHR
- spanish
license: apache-2.0
datasets:
- "PlanTL-GOB-ES/cantemist-ner"
metrics:
- f1
model-index:
- name: PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- name: f1
type: f1
value: 0.8340
widget:
- text: "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo."
- text: "Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo pulmonar, con hallazgos altamente sospechosos de carcinoma."
- text: "Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico, en paciente con índice Child-Pugh B."
---
# Spanish RoBERTa-base biomedical model finetuned for the Named Entity Recognition (NER) task on the Cantemist dataset.
A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model that has been pre-trained using the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents, for a total of 1.1B tokens of clean and deduplicated text.
For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card.
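For reference, a minimal way to run the fine-tuned NER model (a sketch; the aggregation strategy is an assumption, not an official recommendation):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist",
    aggregation_strategy="simple",
)
text = "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón."
print(ner(text))
```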
## Dataset
The dataset used is [CANTEMIST](https://huggingface.co/datasets/PlanTL-GOB-ES/cantemist-ner), a NER dataset annotated with tumor morphology entities. For further information, check the [official website](https://temu.bsc.es/cantemist/).
## Evaluation and results
F1 Score: 0.8340
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
## Citing
To be announced soon!
## Funding
This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020).
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
|
akoksal/bounti | ecfc692450eb64d2bbd3950e0c5e7ada89232de6 | 2022-04-11T20:12:25.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"transformers",
"sentiment",
"twitter",
"turkish"
] | text-classification | false | akoksal | null | akoksal/bounti | 40 | null | transformers | 6,470 | ---
language: "tr"
tags:
- sentiment
- twitter
- turkish
---
This Turkish Sentiment Analysis model is a checkpoint of the pretrained [BERTurk 128k uncased model](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) fine-tuned on the [BounTi dataset](https://ieeexplore.ieee.org/document/9477814).
## Usage in Hugging Face Pipeline
```
from transformers import pipeline
bounti = pipeline("sentiment-analysis",model="akoksal/bounti")
print(bounti("Bu yemeği pek sevmedim"))
>> [{'label': 'negative', 'score': 0.8012508153915405}]
```
## Results
The scores of the finetuned model with BERTurk:
||Accuracy|Precision|Recall|F1|
|-------------|:---------:|:---------:|:------:|:-----:|
|Validation|0.745|0.706|0.730|0.715|
|Test|0.723|0.692|0.729|0.701|
## Dataset
You can find the dataset in [our Github repo](https://github.com/boun-tabi/BounTi-Turkish-Sentiment-Analysis) with the training, validation, and test splits.
Due to Twitter copyright, we cannot release the full text of the tweets. We share the tweet IDs, and the full text can be downloaded through official Twitter API.
| | Training | Validation | Test |
|----------|:--------:|:----------:|:----:|
| Positive | 1691 | 188 | 469 |
| Neutral | 3034 | 338 | 843 |
| Negative | 1008 | 113 | 280 |
| Total | 5733 | 639 | 1592 |
## Citation
You can cite the following paper if you use our work:
```
@INPROCEEDINGS{BounTi,
author={Köksal, Abdullatif and Özgür, Arzucan},
booktitle={2021 29th Signal Processing and Communications Applications Conference (SIU)},
title={Twitter Dataset and Evaluation of Transformers for Turkish Sentiment Analysis},
year={2021},
volume={},
number={}
}
```
---
|
ken11/albert-base-japanese-v1-with-japanese-tokenizer | e1d6e479f98299eeab8ee82bf1288f862731e0e5 | 2022-04-20T17:28:13.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"ja",
"transformers",
"japanese",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ken11 | null | ken11/albert-base-japanese-v1-with-japanese-tokenizer | 40 | null | transformers | 6,471 | ---
tags:
- fill-mask
- japanese
- albert
language:
- ja
license: mit
widget:
- text: "明日は明日の[MASK]が吹く"
---
## albert-base-japanese-v1-with-japanese
This is a pretrained Japanese ALBERT model.
This model uses the [BertJapaneseTokenizer class](https://huggingface.co/docs/transformers/main/en/model_doc/bert-japanese#transformers.BertJapaneseTokenizer) as its tokenizer.
Compared with [albert-base-japanese-v1](https://huggingface.co/ken11/albert-base-japanese-v1), tokenization is easier to handle.
## How to use
### Fine-tuning
This is a pretrained model.
It is basically intended to be fine-tuned for various downstream tasks.
### Fill-Mask
#### for PyTorch
```py
from transformers import (
AutoModelForMaskedLM, AutoTokenizer
)
tokenizer = AutoTokenizer.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer")
model = AutoModelForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer")
text = "明日は明日の[MASK]が吹く"
tokens = tokenizer(text, return_tensors="pt")
mask_index = tokens["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predict = model(**tokens)[0]
_, result = predict[0, mask_index].topk(5)
print(tokenizer.convert_ids_to_tokens(result.tolist()))
```
#### for TensorFlow
```py
from transformers import (
TFAutoModelForMaskedLM, AutoTokenizer
)
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer")
model = TFAutoModelForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1-with-japanese-tokenizer")
text = "明日は明日の[MASK]が吹く"
tokens = tokenizer(text, return_tensors="tf")
mask_index = tokens["input_ids"][0].numpy().tolist().index(tokenizer.mask_token_id)
predict = model(**tokens)[0]
result = tf.math.top_k(predict[0, mask_index], k=5)
print(tokenizer.convert_ids_to_tokens(result.indices.numpy()))
```
## Training Data
The following data was used for training:
- [the full text of Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89)
## Tokenizer
The tokenizer is the [BertJapaneseTokenizer class](https://huggingface.co/docs/transformers/main/en/model_doc/bert-japanese#transformers.BertJapaneseTokenizer).
It was trained on the same data as above.
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
doc2query/msmarco-italian-mt5-base-v1 | 1fc4f8c520029aa0a92060c8b9ed632bf8b0568c | 2022-04-29T12:06:16.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"it",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-italian-mt5-base-v1 | 40 | 1 | transformers | 6,472 | ---
language: it
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python è un linguaggio di programmazione di alto livello, orientato a oggetti, adatto, tra gli altri usi, a sviluppare applicazioni distribuite, scripting, computazione numerica e system testing."
license: apache-2.0
---
# doc2query/msmarco-italian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-italian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python è un linguaggio di programmazione di alto livello, orientato a oggetti, adatto, tra gli altri usi, a sviluppare applicazioni distribuite, scripting, computazione numerica e system testing."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (query, passage) from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
allenai/ivila-block-layoutlm-finetuned-s2vl-v2 | af5f5335af8513f076460128208fd6a94d8fe5b1 | 2022-04-29T22:47:15.000Z | [
"pytorch",
"layoutlm",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | allenai | null | allenai/ivila-block-layoutlm-finetuned-s2vl-v2 | 40 | null | transformers | 6,473 | Entry not found |
TweebankNLP/bertweet-tb2_wnut17-ner | af754e27765c8c9db1ff7a31505a465000346aec | 2022-05-05T00:23:17.000Z | [
"pytorch",
"roberta",
"token-classification",
"arxiv:2201.07281",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | token-classification | false | TweebankNLP | null | TweebankNLP/bertweet-tb2_wnut17-ner | 40 | 1 | transformers | 6,474 | ---
license: cc-by-nc-4.0
---
## Model Specification
- This is the **state-of-the-art Twitter NER model (with 74.35\% Entity-Level F1)** on Tweebank V2's NER benchmark (also called `Tweebank-NER`), trained on the corpus combining both Tweebank-NER and WNUT 17 training data.
- For more details about the `TweebankNLP` project, please refer to this [our paper](https://arxiv.org/pdf/2201.07281.pdf) and [github](https://github.com/social-machines/TweebankNLP) page.
- In the paper, it is referred as `HuggingFace-BERTweet (TB2+W17).`
## How to use the model
- **PRE-PROCESSING**: when you apply the model on tweets, please make sure that tweets are preprocessed by the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner")
model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner")
```
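After preprocessing, inference can be run with the token-classification pipeline; the snippet below is a sketch (the example tweet and aggregation strategy are illustrative, not part of the original instructions):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="TweebankNLP/bertweet-tb2_wnut17-ner",
    aggregation_strategy="simple",
)
print(ner("Excited to watch the Lakers game in Los Angeles tonight !"))
```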
## References
If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf):
```bibtex
@article{jiang2022tweetnlp,
title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis},
author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb},
journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
year={2022}
}
``` |
sonoisa/sentence-bert-base-ja-en-mean-tokens | 51931e799ed487cedf94e7d6e382e06a9e196dbf | 2022-05-08T03:29:28.000Z | [
"pytorch",
"bert",
"feature-extraction",
"ja",
"sentence-transformers",
"sentence-bert",
"sentence-similarity",
"license:cc-by-sa-4.0"
] | feature-extraction | false | sonoisa | null | sonoisa/sentence-bert-base-ja-en-mean-tokens | 40 | 1 | sentence-transformers | 6,475 | ---
language: ja
license: cc-by-sa-4.0
tags:
- sentence-transformers
- sentence-bert
- feature-extraction
- sentence-similarity
---
This is a Japanese+English sentence-BERT model.
Compared with the [Japanese-only version](https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens-v2), on our private in-house dataset this model scores 0.8 points lower on Japanese, while on the English STS benchmark it scores 8.3 points higher (Cosine-Similarity Spearman of 79.11%).
[cl-tohoku/bert-base-japanese-whole-word-masking](https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking) was used as the pretrained model.
Running inference requires fugashi and ipadic (pip install fugashi ipadic).
# Explanation of the Japanese-only version (in Japanese)
https://qiita.com/sonoisa/items/1df94d0a98cd4f209051
If you replace the model name there with "sonoisa/sentence-bert-base-ja-en-mean-tokens", the code will use this model instead.
# Usage
```python
from transformers import BertJapaneseTokenizer, BertModel
import torch
class SentenceBertJapanese:
def __init__(self, model_name_or_path, device=None):
self.tokenizer = BertJapaneseTokenizer.from_pretrained(model_name_or_path)
self.model = BertModel.from_pretrained(model_name_or_path)
self.model.eval()
if device is None:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.device = torch.device(device)
self.model.to(device)
def _mean_pooling(self, model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
@torch.no_grad()
def encode(self, sentences, batch_size=8):
all_embeddings = []
iterator = range(0, len(sentences), batch_size)
for batch_idx in iterator:
batch = sentences[batch_idx:batch_idx + batch_size]
encoded_input = self.tokenizer.batch_encode_plus(batch, padding="longest",
truncation=True, return_tensors="pt").to(self.device)
model_output = self.model(**encoded_input)
sentence_embeddings = self._mean_pooling(model_output, encoded_input["attention_mask"]).to('cpu')
all_embeddings.extend(sentence_embeddings)
# return torch.stack(all_embeddings).numpy()
return torch.stack(all_embeddings)
MODEL_NAME = "sonoisa/sentence-bert-base-ja-en-mean-tokens"
model = SentenceBertJapanese(MODEL_NAME)
sentences = ["暴走したAI", "暴走した人工知能"]
sentence_embeddings = model.encode(sentences, batch_size=8)
print("Sentence embeddings:", sentence_embeddings)
```
|
deepgai/tweet_eval-sentiment-finetuned | aa1eb67695dbf99de9c720ddf505ee94d0400b6c | 2022-05-09T10:46:47.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | deepgai | null | deepgai/tweet_eval-sentiment-finetuned | 40 | null | transformers | 6,476 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: tweet_eval-sentiment-finetuned
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: tweeteval
type: tweeteval
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7099
- name: f1
type: f1
value: 0.7097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_eval-sentiment-finetuned
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the Tweet_Eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Accuracy: 0.744
- F1: 0.7437
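As a usage sketch (not part of the original card), the fine-tuned classifier can be called through the sentiment-analysis pipeline; the input sentence is illustrative:
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="deepgai/tweet_eval-sentiment-finetuned")
print(classifier("I can't believe how good this album is!"))
```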
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7491 | 1.0 | 357 | 0.6089 | 0.7345 | 0.7314 |
| 0.5516 | 2.0 | 714 | 0.5958 | 0.751 | 0.7516 |
| 0.4618 | 3.0 | 1071 | 0.6131 | 0.748 | 0.7487 |
| 0.4066 | 4.0 | 1428 | 0.6532 | 0.744 | 0.7437 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
emre/turkish-sentiment-analysis | 86f31923fda4bcec1c59218c4f0a4aa4938dc716 | 2022-05-15T22:07:26.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:emre/autotrain-data-turkish-sentiment-analysis",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | emre | null | emre/turkish-sentiment-analysis | 40 | null | transformers | 6,477 | ---
tags: autotrain
language: tr
widget:
- text: "Bu ürün gerçekten güzel çıktı"
datasets:
- emre/autotrain-data-turkish-sentiment-analysis
co2_eq_emissions: 120.82460124309924
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 870727732
- CO2 Emissions (in grams): 120.82460124309924
## Validation Metrics
- Loss: 0.1098366305232048
- Accuracy: 0.9697853317600073
- Macro F1: 0.9482820974460786
- Micro F1: 0.9697853317600073
- Weighted F1: 0.9695237873890088
- Macro Precision: 0.9540948884759232
- Micro Precision: 0.9697853317600073
- Weighted Precision: 0.9694186941924757
- Macro Recall: 0.9428467518468838
- Micro Recall: 0.9697853317600073
- Weighted Recall: 0.9697853317600073
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Bu ürün gerçekten güzel çıktı"}' https://api-inference.huggingface.co/models/emre/turkish-sentiment-analysis
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emre/turkish-sentiment-analysis", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emre/turkish-sentiment-analysis", use_auth_token=True)
inputs = tokenizer("Bu ürün gerçekten güzel çıktı", return_tensors="pt")
outputs = model(**inputs)
``` |
Remicm/sentiment-analysis-model-for-socialmedia | d6837382b32a7bffbde9d6fd7283c6f64933d86f | 2022-05-19T22:46:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Remicm | null | Remicm/sentiment-analysis-model-for-socialmedia | 40 | null | transformers | 6,478 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: sentiment-analysis-model-for-socialmedia
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9297083333333334
- name: F1
type: f1
value: 0.9298923658729169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-model-for-socialmedia
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2368
- Accuracy: 0.9297
- F1: 0.9299
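A minimal, illustrative way to run inference with the model (not an official snippet):
```python
from transformers import pipeline
classifier = pipeline(
    "sentiment-analysis",
    model="Remicm/sentiment-analysis-model-for-socialmedia",
)
print(classifier("This movie was a complete waste of time."))
```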
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
nowalab/nepali-bert-npvec1 | a352715b9ba188fde67064a712cd82c40b8a4460 | 2022-05-25T05:58:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | nowalab | null | nowalab/nepali-bert-npvec1 | 40 | 1 | transformers | 6,479 | ---
license: apache-2.0
---
We are releasing the first BERT model trained on monolingual text for Nepali. Please refer to our paper [NPVec1: Word Embeddings for Nepali - Construction and Evaluation](https://aclanthology.org/2021.repl4nlp-1.18.pdf) for details on its construction and evaluation.
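Since the model is published for the fill-mask task, a quick illustrative check (the Nepali example sentence is ours, not from the paper) is:
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="nowalab/nepali-bert-npvec1")
print(fill_mask("नेपालको राजधानी [MASK] हो।"))
```
 |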
Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise | 562a37ff6841952b984ccbf22398e6152a5972a0 | 2022-06-07T22:36:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Cristian-dcg | null | Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise | 40 | null | transformers | 6,480 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beto-sentiment-analysis-finetuned-onpremise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-sentiment-analysis-finetuned-onpremise
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7939
- Accuracy: 0.8301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4573 | 1.0 | 1250 | 0.4375 | 0.8191 |
| 0.2191 | 2.0 | 2500 | 0.5367 | 0.8288 |
| 0.1164 | 3.0 | 3750 | 0.7939 | 0.8301 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
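## Example usage
A minimal usage sketch added for illustration, assuming the checkpoint loads through the standard `text-classification` pipeline; the Spanish example sentence is illustrative only.
```python
from transformers import pipeline

# Load the fine-tuned BETO sentiment classifier.
classifier = pipeline(
    "text-classification",
    model="Cristian-dcg/beto-sentiment-analysis-finetuned-onpremise",
)

# Score an illustrative Spanish sentence; returns the predicted label and confidence.
print(classifier("Me encantó el producto, llegó muy rápido."))
```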
|
RUCAIBox/mvp-story | c671474956d9a93fe68901fb4d40ef29e8ccd50d | 2022-06-27T02:28:15.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mvp-story | 40 | null | transformers | 6,481 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the story title: I think all public schools should have a uniform dress code."
example_title: "Example1"
- text: "Given the story title: My girlfriend and I decided to move to a new state. We packed everything in our cars and drove there."
example_title: "Example2"
---
# MVP-story
The MVP-story model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-story is a prompt-based model: MVP further equipped with prompts pre-trained using labeled story generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-story")
>>> inputs = tokenizer(
... "Given the story title: I think all public schools should have a uniform dress code.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs, max_length=1024)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['I think it would be a good idea to have uniform dress codes for all public schools. It would make it easier for students to dress appropriately.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
rsuwaileh/IDRISI-LMR-HD-TL | 340690c67df4b45e6b5494cf591a71328d0a9ebd | 2022-07-18T09:16:29.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | rsuwaileh | null | rsuwaileh/IDRISI-LMR-HD-TL | 40 | null | transformers | 6,482 | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained on the Hurricane Dorian 2019 event (the training, development, and test splits are all used for training) from the [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-less LMR mode, using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
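As an illustration only (not part of the original instructions), the checkpoint can typically be queried through the standard `token-classification` pipeline; the aggregation strategy and the example tweet below are assumptions.
```python
from transformers import pipeline

# Load the location mention recognizer as a token-classification pipeline.
lmr = pipeline(
    "token-classification",
    model="rsuwaileh/IDRISI-LMR-HD-TL",
    aggregation_strategy="simple",  # group word pieces into whole location mentions
)

# Illustrative crisis tweet; the model should tag the location mentions.
print(lmr("Hurricane Dorian knocked out power across Freeport and Marsh Harbour."))
```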
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
wwbproj/empathic_conversations_dialog_acts | b6404104cc5dd3c98df42566f004332762958fcc | 2022-06-22T19:39:49.000Z | [
"pytorch",
"roberta",
"en",
"transformers"
] | null | false | wwbproj | null | wwbproj/empathic_conversations_dialog_acts | 40 | null | transformers | 6,483 | ---
language:
- en
---
# Empathic Conversations: Dialog Acts
Model owner(s): Ryan Guan, [[email protected]](mailto:[email protected])
Associated paper:
## Model description
### Related models
- wwbproj/empathic_conversations_empathy
- wwbproj/empathic_conversations_emotion
- wwbproj/empathic_conversations_emotional_polarity
- wwbproj/empathic_conversations_self_disclosure
## Intended uses & limitations
## How to use
Code: https://github.com/wwbp/models/tree/master/neural_model_code/empathic_conversations
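A generic loading sketch only: the exact dialog-act head, label set, and preprocessing are documented in the repository linked above, so the code below just extracts encoder features, and the example utterance is illustrative.
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and the RoBERTa encoder; see the linked repository
# for the task-specific dialog-act head and label mapping.
tokenizer = AutoTokenizer.from_pretrained("wwbproj/empathic_conversations_dialog_acts")
model = AutoModel.from_pretrained("wwbproj/empathic_conversations_dialog_acts")

inputs = tokenizer("I completely understand how you feel about that.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Encoder representations for each token of the utterance.
print(outputs.last_hidden_state.shape)
```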
## Training data
## Training procedure
|
robinhad/gpt2-uk-conversational | c70fdb543d8bf0509e5787ce3a7e768ef52e6991 | 2022-06-14T20:02:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | robinhad | null | robinhad/gpt2-uk-conversational | 40 | 3 | transformers | 6,484 | ---
tags:
- conversational
license: mit
widget:
- text: "привіт, як тебе звати?"
example_title: "Питаємо ім'я"
---
# Ukrainian AI chatbot alpha release
This model was trained on an uncleaned dataset of movie dialogs from opensubtitles.org.
Link to training scripts: [https://github.com/robinhad/ukrainian-ai](https://github.com/robinhad/ukrainian-ai).
Link to end-to-end open source AI demo (speech-to-text-to-AI-to-voice): [https://huggingface.co/spaces/robinhad/ukrainian-ai](https://huggingface.co/spaces/robinhad/ukrainian-ai).
# Example usage
```python
from transformers import Conversation, pipeline, ConversationalPipeline
conv: ConversationalPipeline = pipeline("conversational", "robinhad/gpt2-uk-conversational")
text = "привіт, як тебе звати?"
result = conv(Conversation(text))
# result.add_user_input()
print(result)
```
<img src="https://visitor-badge-reloaded.herokuapp.com/badge?page_id=robinhad.ukrainian-ai-chatbot" alt="visitors badge"/> |
huggingtweets/unknownco123 | 2308321bddeab6c1eb7b8e8129362782f3f676d2 | 2022-06-16T16:20:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/unknownco123 | 40 | null | transformers | 6,485 | ---
language: en
thumbnail: http://www.huggingtweets.com/unknownco123/1655396407192/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522164949904248832/IdAMZkO9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">UnknownCollector 🇺🇦🕊🙏🏼</div>
<div style="text-align: center; font-size: 14px;">@unknownco123</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from UnknownCollector 🇺🇦🕊🙏🏼.
| Data | UnknownCollector 🇺🇦🕊🙏🏼 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 1208 |
| Short tweets | 184 |
| Tweets kept | 1852 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gtnmsztt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @unknownco123's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2osaytek) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2osaytek/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/unknownco123')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bookpanda/wangchanberta-base-att-spm-uncased-masking | d124223e6c095758652a21a2721540ac0818d423 | 2022-06-19T11:05:59.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | bookpanda | null | bookpanda/wangchanberta-base-att-spm-uncased-masking | 40 | null | transformers | 6,486 | ---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-masking
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
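## Example usage
A minimal usage sketch added for illustration, assuming the fine-tuned checkpoint still works with the standard `fill-mask` pipeline; the Thai example sentence is illustrative only.
```python
from transformers import pipeline

# Load the fine-tuned WangchanBERTa checkpoint with the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bookpanda/wangchanberta-base-att-spm-uncased-masking")

# Build a masked Thai sentence using the tokenizer's own mask token
# ("I really like eating [MASK].").
mask = unmasker.tokenizer.mask_token
print(unmasker(f"ฉันชอบกิน{mask}มาก"))
```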
|
Lexemo/roberta_large_legal_act_extraction | e29807386021b28c685254bcfcc4ebd74edd3af0 | 2022-06-24T12:03:38.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | Lexemo | null | Lexemo/roberta_large_legal_act_extraction | 40 | null | transformers | 6,487 | ---
language: en
license: mit
metrics:
- seqeval
widget:
- text: "When Member States adopt those measures, they shall contain a reference to this Directive or be accompanied by such reference on the occasion of their official publication. They shall also include a statement that references in existing laws, regulations and administrative provisions to Article 9 of Directive 97/23/EC shall be construed as references to Article 13 of this Directive. Member States shall determine how such reference is to be made and how that statement is to be formulated."
example_title: "Example 1"
- text: "2. Member States shall adopt and publish, by 18 July 2016, the laws, regulations and administrative provisions necessary to comply with Article 2(15) to (32), Articles 6 to 12, 14, 17 and 18, Article 19(3) to (5), Articles 20 to 43, 47 and 48 and Annexes I, II, III and IV. They shall forthwith communicate the text of those measures to the Commission."
example_title: "Example 2"
- text: "When applying Article 84(1), point (a), of Regulation (EU) No 575/2013 (CRR) in respect of subsidiary institutions in third countries, should the excess capital attributable to minorities be determined by applying, namely in subparagraph (i), the provisions and requirements of CRR, together with any additional local requirements, to the extent these have to be met with CET1 capital? Although the wording of Article 84(1)(a)(i) CRR seems clear, different interpretations have arisen as to how it applies in practice, particularly in the case of third country institutions operating under a regulatory framework different from CRD IV/CRR."
example_title: "Example 3"
---
# Legal act Extraction Model
With growing legal complexity, keeping track of changes in the interconnectivity and hierarchical structure of legislation is a challenging task. Entity extraction (also known as token classification) facilitates document analysis by assigning a label to each word in a text.
Which data elements should be extracted and how they should be labeled depends mostly on the particular business problem, and is limited only by the tokenization process, meaning that an element cannot be smaller than a single token as produced by the tokenizer. As long as the data elements correspond to at least one whole token, they can represent legal terms, legal entities, legal parties, deadlines, and so on.
This model is fine-tuned to label mentions of legal acts and their articles. The extracted information can be used to build an interconnectivity map of legal acts.
## Model Description
This model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large).
More details about RoBERTa large are available in [RoBERTa large model card](https://huggingface.co/roberta-large).
| Id | Label | Description |
| -------- | ------------------------------------------ | ----------------------------------------------------------------------- |
| 0 | O | Not a legal act and not an article |
| 1 | abbreviation_relevant_following_act | A legal act abbreviation relevant to the following legal act |
| 2 | abbreviation_relevant_previous_act | A legal act abbreviation relevant to a previously mentioned legal act |
| 3 | another_act | A legal act |
| 4 | another_act_abbreviation | A legal act mentioned as an abbreviation |
| 5 | another_act_equal_previous_act | An assumed legal act introduced previously |
| 6 | another_act_sequence_end | Inside a sequence of legal acts |
| 7 | another_act_sequence_start | At the beginning of a sequence of legal acts |
| 8 | another_article_equal_previous_article | An assumed article introduced previously |
| 9 | article_current | An article mentioning itself |
| 10 | article_relevant_current_act | An article of the same legal act as the one being processed |
| 11 | article_relevant_current_act_range_end | A range end of articles belonging to the current act |
| 12 | article_relevant_current_act_range_start | A range start of articles belonging to the current act |
| 13 | article_relevant_following_act | An article of a following legal act |
| 15 | article_relevant_following_act_range_end | A range end of articles belonging to a following act |
| 16 | article_relevant_following_act_range_start | A range start of articles belonging to a following legal act |
| 17 | article_relevant_previous_act | An article of a previously mentioned legal act |
| 18 | article_relevant_previous_act_range_end | A range end of articles belonging to a previously mentioned legal act |
| 19 | article_relevant_previous_act_range_start | A range start of articles belonging to a previously mentioned legal act |
| 20 | current_act | A legal act mentioning itself |
| 21 | treaty_abbreviation | A treaty mentioned as an abbreviation |
| 22 | treaty_name | A treaty |
| 23 | service_label | A token comprising more than 1 label |
## Intended Uses & Limitations
The model could be used to extract mentioned legal acts and their articles.
### Limitations
This legal-act extraction model is very domain-specific and will perform well on legal texts. It's not recommended to use this model for other domains, but you are free to test it out.
It was intended for English documents only.
### How To Use
```python
from transformers import (
TokenClassificationPipeline,
RobertaForTokenClassification,
RobertaTokenizerFast,
)
legal_act_extraction_model = RobertaForTokenClassification.from_pretrained(
'Lexemo/roberta_large_legal_act_extraction')
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")
pypeline = TokenClassificationPipeline(model=legal_act_extraction_model,
tokenizer=tokenizer,
aggregation_strategy='simple')
```
```python
# Inference
import pandas as pd
from tabulate import tabulate
text = """When Member States adopt those measures, they shall contain a
reference to this Directive or be accompanied by such reference on the
occasion of their official publication. They shall also include a statement
that references in existing laws, regulations and administrative provisions
to Article 9 of Directive 97/23/EC shall be construed as references to
Article 13 of this Directive. Member States shall determine how such
reference is to be made and how that statement is to be formulated."""
entities = pypeline(text)
df = pd.DataFrame(entities)
print(tabulate(df, showindex=True, headers=df.columns))
```
```
# Output
entity_group score word start end
-- ------------------------------ -------- ------------------ ------- -----
0 current_act 0.999999 Directive 80 89
1 article_relevant_following_act 0.999995 9 296 297
2 another_act 0.999999 Directive 97/23/EC 301 319
3 article_relevant_following_act 0.999996 13 364 366
4 current_act 0.999999 Directive 375 384
```
## Fine-tuning hyper-parameters
- learning_rate = 2e-5
- batch_size = 4
- weight_decay=0.01
- max_seq_length = 514
- num_train_epochs = 56
|
Aalaa/opt-125m-finetuned-wikitext2 | b12ee7517bedff672d31715d19ac60cc1563b6dd | 2022-06-28T03:30:55.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-generation | false | Aalaa | null | Aalaa/opt-125m-finetuned-wikitext2 | 40 | null | transformers | 6,488 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-125m-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-finetuned-wikitext2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4123 | 1.0 | 2370 | 3.3621 |
| 3.2096 | 2.0 | 4740 | 3.3452 |
| 3.0822 | 3.0 | 7110 | 3.3409 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
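## Example usage
A minimal usage sketch via the `text-generation` pipeline; the prompt and generation settings below are illustrative only.
```python
from transformers import pipeline

# Load the fine-tuned OPT-125m checkpoint for text generation.
generator = pipeline("text-generation", model="Aalaa/opt-125m-finetuned-wikitext2")

# Generate a short continuation of an illustrative WikiText-style prompt.
output = generator("The history of natural language processing", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```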
|
cambridgeltl/mle_cnwikitext | 011a9386596ff03303ca005940fcbdfc4702fb68 | 2022-07-03T20:48:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/mle_cnwikitext | 40 | null | transformers | 6,489 | Entry not found |
zhifei/autotrain-chineses-title-summarization-3-1087939403 | cad095fa0b7da79b48f5a1ec5ae6ef5665082680 | 2022-07-05T02:45:16.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"unk",
"dataset:zhifei/autotrain-data-chineses-title-summarization-3",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | zhifei | null | zhifei/autotrain-chineses-title-summarization-3-1087939403 | 40 | null | transformers | 6,490 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zhifei/autotrain-data-chineses-title-summarization-3
co2_eq_emissions: 0.004900087842646563
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1087939403
- CO2 Emissions (in grams): 0.004900087842646563
## Validation Metrics
- Loss: 0.1637328416109085
- Rouge1: 23.8095
- Rouge2: 15.0794
- RougeL: 23.8095
- RougeLsum: 23.8095
- Gen Len: 16.7143
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zhifei/autotrain-chineses-title-summarization-3-1087939403
``` |
danieleV9H/wavlm-base-plus-ft-cv3 | 33063d64180702c07f00c8bfcaa81afe48f0fdd5 | 2022-07-23T15:42:47.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_3_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model-index"
] | automatic-speech-recognition | false | danieleV9H | null | danieleV9H/wavlm-base-plus-ft-cv3 | 40 | null | transformers | 6,491 | ---
tags:
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_3_0
model-index:
- name: wavlm-base-plus-ft-cv3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: '8.06'
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base-plus-ft-cv3
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the English subset of the "mozilla-foundation/common_voice_3_0" dataset: the "train" and "validation" splits are used for training, while the "test" split is used for validation.
It achieves the following results on the evaluation set:
- Loss: 0.4365
- Wer: 0.1801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 5.3448 | 0.05 | 500 | 3.2621 | 1.0 |
| 2.9322 | 0.1 | 1000 | 2.8551 | 1.0 |
| 1.7692 | 0.16 | 1500 | 1.2653 | 0.7447 |
| 1.012 | 0.21 | 2000 | 0.9008 | 0.5601 |
| 0.7129 | 0.26 | 2500 | 0.7684 | 0.4762 |
| 0.6424 | 0.31 | 3000 | 0.6282 | 0.4276 |
| 0.6518 | 0.37 | 3500 | 0.5888 | 0.3916 |
| 0.5142 | 0.42 | 4000 | 0.5428 | 0.3727 |
| 0.48 | 0.47 | 4500 | 0.5614 | 0.3549 |
| 0.4523 | 0.52 | 5000 | 0.5334 | 0.3487 |
| 0.4315 | 0.58 | 5500 | 0.5376 | 0.3317 |
| 0.4292 | 0.63 | 6000 | 0.4939 | 0.3172 |
| 0.4229 | 0.68 | 6500 | 0.4977 | 0.3117 |
| 0.3837 | 0.73 | 7000 | 0.4899 | 0.3056 |
| 0.385 | 0.78 | 7500 | 0.4571 | 0.2864 |
| 0.4155 | 0.84 | 8000 | 0.4635 | 0.2866 |
| 0.3768 | 0.89 | 8500 | 0.4390 | 0.2843 |
| 0.3864 | 0.94 | 9000 | 0.4529 | 0.2764 |
| 0.387 | 0.99 | 9500 | 0.4870 | 0.2755 |
| 0.341 | 1.05 | 10000 | 0.4498 | 0.2696 |
| 0.3334 | 1.1 | 10500 | 0.4355 | 0.2600 |
| 0.3039 | 1.15 | 11000 | 0.4634 | 0.2716 |
| 0.3101 | 1.2 | 11500 | 0.4615 | 0.2582 |
| 0.4343 | 1.25 | 12000 | 0.4510 | 0.2574 |
| 0.3002 | 1.31 | 12500 | 0.4313 | 0.2590 |
| 0.3419 | 1.36 | 13000 | 0.4121 | 0.2493 |
| 0.3162 | 1.41 | 13500 | 0.4423 | 0.2498 |
| 0.3134 | 1.46 | 14000 | 0.4260 | 0.2506 |
| 0.2963 | 1.52 | 14500 | 0.4272 | 0.2556 |
| 0.3297 | 1.57 | 15000 | 0.4413 | 0.2487 |
| 0.3199 | 1.62 | 15500 | 0.4260 | 0.2432 |
| 0.3368 | 1.67 | 16000 | 0.4164 | 0.2464 |
| 0.2981 | 1.73 | 16500 | 0.4111 | 0.2402 |
| 0.2887 | 1.78 | 17000 | 0.4372 | 0.2460 |
| 0.3058 | 1.83 | 17500 | 0.4161 | 0.2397 |
| 0.2877 | 1.88 | 18000 | 0.4046 | 0.2386 |
| 0.2904 | 1.93 | 18500 | 0.4108 | 0.2399 |
| 0.2851 | 1.99 | 19000 | 0.4196 | 0.2385 |
| 0.2451 | 2.04 | 19500 | 0.4096 | 0.2406 |
| 0.259 | 2.09 | 20000 | 0.4437 | 0.2374 |
| 0.2681 | 2.14 | 20500 | 0.4226 | 0.2357 |
| 0.4371 | 2.2 | 21000 | 0.4301 | 0.2356 |
| 0.2468 | 2.25 | 21500 | 0.4431 | 0.2326 |
| 0.2687 | 2.3 | 22000 | 0.4218 | 0.2401 |
| 0.2571 | 2.35 | 22500 | 0.4131 | 0.2337 |
| 0.2541 | 2.41 | 23000 | 0.4105 | 0.2312 |
| 0.2663 | 2.46 | 23500 | 0.4228 | 0.2327 |
| 0.2777 | 2.51 | 24000 | 0.3960 | 0.2254 |
| 0.2659 | 2.56 | 24500 | 0.4074 | 0.2289 |
| 0.2519 | 2.61 | 25000 | 0.4220 | 0.2363 |
| 0.2607 | 2.67 | 25500 | 0.3912 | 0.2253 |
| 0.2749 | 2.72 | 26000 | 0.4017 | 0.2214 |
| 0.2431 | 2.77 | 26500 | 0.3879 | 0.2181 |
| 0.2557 | 2.82 | 27000 | 0.4011 | 0.2268 |
| 0.2662 | 2.88 | 27500 | 0.3884 | 0.2241 |
| 0.2649 | 2.93 | 28000 | 0.3987 | 0.2233 |
| 0.2382 | 2.98 | 28500 | 0.3777 | 0.2215 |
| 0.2198 | 3.03 | 29000 | 0.3952 | 0.2177 |
| 0.2281 | 3.09 | 29500 | 0.4067 | 0.2213 |
| 0.2178 | 3.14 | 30000 | 0.4178 | 0.2192 |
| 0.222 | 3.19 | 30500 | 0.4327 | 0.2208 |
| 0.2262 | 3.24 | 31000 | 0.4028 | 0.2212 |
| 0.2256 | 3.29 | 31500 | 0.4065 | 0.2181 |
| 0.2255 | 3.35 | 32000 | 0.3782 | 0.2139 |
| 0.2364 | 3.4 | 32500 | 0.4443 | 0.2119 |
| 0.2209 | 3.45 | 33000 | 0.4089 | 0.2177 |
| 0.2051 | 3.5 | 33500 | 0.3886 | 0.2154 |
| 0.2242 | 3.56 | 34000 | 0.3810 | 0.2133 |
| 0.2151 | 3.61 | 34500 | 0.4005 | 0.2127 |
| 0.2341 | 3.66 | 35000 | 0.3899 | 0.2165 |
| 0.202 | 3.71 | 35500 | 0.3846 | 0.2121 |
| 0.2107 | 3.76 | 36000 | 0.3859 | 0.2146 |
| 0.2237 | 3.82 | 36500 | 0.3993 | 0.2141 |
| 0.2189 | 3.87 | 37000 | 0.3842 | 0.2113 |
| 0.2124 | 3.92 | 37500 | 0.3919 | 0.2118 |
| 0.4017 | 3.97 | 38000 | 0.3882 | 0.2086 |
| 0.1946 | 4.03 | 38500 | 0.4008 | 0.2121 |
| 0.1919 | 4.08 | 39000 | 0.3939 | 0.2129 |
| 0.1797 | 4.13 | 39500 | 0.3958 | 0.2115 |
| 0.184 | 4.18 | 40000 | 0.3942 | 0.2086 |
| 0.1987 | 4.24 | 40500 | 0.3959 | 0.2092 |
| 0.1919 | 4.29 | 41000 | 0.4250 | 0.2093 |
| 0.2038 | 4.34 | 41500 | 0.3970 | 0.2060 |
| 0.1879 | 4.39 | 42000 | 0.3978 | 0.2109 |
| 0.1852 | 4.44 | 42500 | 0.4065 | 0.2091 |
| 0.2014 | 4.5 | 43000 | 0.4069 | 0.2054 |
| 0.2011 | 4.55 | 43500 | 0.4247 | 0.2099 |
| 0.1937 | 4.6 | 44000 | 0.3754 | 0.2091 |
| 0.1878 | 4.65 | 44500 | 0.3891 | 0.2070 |
| 0.2011 | 4.71 | 45000 | 0.3714 | 0.2030 |
| 0.1958 | 4.76 | 45500 | 0.3994 | 0.2066 |
| 0.1907 | 4.81 | 46000 | 0.4061 | 0.2080 |
| 0.1859 | 4.86 | 46500 | 0.3899 | 0.2056 |
| 0.1894 | 4.92 | 47000 | 0.3808 | 0.2055 |
| 0.3276 | 4.97 | 47500 | 0.3936 | 0.2051 |
| 0.3513 | 5.02 | 48000 | 0.4028 | 0.2041 |
| 0.1654 | 5.07 | 48500 | 0.3929 | 0.2032 |
| 0.1622 | 5.12 | 49000 | 0.4067 | 0.2029 |
| 0.1659 | 5.18 | 49500 | 0.4058 | 0.2007 |
| 0.1779 | 5.23 | 50000 | 0.4085 | 0.2031 |
| 0.1731 | 5.28 | 50500 | 0.3895 | 0.2009 |
| 0.1761 | 5.33 | 51000 | 0.3973 | 0.2022 |
| 0.1741 | 5.39 | 51500 | 0.4116 | 0.2021 |
| 0.1735 | 5.44 | 52000 | 0.4152 | 0.2038 |
| 0.1627 | 5.49 | 52500 | 0.4078 | 0.2003 |
| 0.1728 | 5.54 | 53000 | 0.4088 | 0.2022 |
| 0.179 | 5.6 | 53500 | 0.3828 | 0.1998 |
| 0.1692 | 5.65 | 54000 | 0.3903 | 0.1980 |
| 0.174 | 5.7 | 54500 | 0.4185 | 0.1993 |
| 0.1763 | 5.75 | 55000 | 0.3937 | 0.1976 |
| 0.1792 | 5.8 | 55500 | 0.3767 | 0.1966 |
| 0.1799 | 5.86 | 56000 | 0.3970 | 0.1994 |
| 0.1918 | 5.91 | 56500 | 0.3954 | 0.1981 |
| 0.1836 | 5.96 | 57000 | 0.3984 | 0.1969 |
| 0.1708 | 6.01 | 57500 | 0.3917 | 0.1956 |
| 0.1524 | 6.07 | 58000 | 0.3922 | 0.1977 |
| 0.1567 | 6.12 | 58500 | 0.4108 | 0.1955 |
| 0.1518 | 6.17 | 59000 | 0.4349 | 0.1968 |
| 0.1587 | 6.22 | 59500 | 0.3963 | 0.1988 |
| 0.1563 | 6.27 | 60000 | 0.4235 | 0.1997 |
| 0.154 | 6.33 | 60500 | 0.4026 | 0.1951 |
| 0.1636 | 6.38 | 61000 | 0.4359 | 0.2031 |
| 0.1641 | 6.43 | 61500 | 0.4115 | 0.1972 |
| 0.1604 | 6.48 | 62000 | 0.4166 | 0.1972 |
| 0.1579 | 6.54 | 62500 | 0.4264 | 0.1965 |
| 0.1552 | 6.59 | 63000 | 0.4047 | 0.2007 |
| 0.1461 | 6.64 | 63500 | 0.4263 | 0.2011 |
| 0.1522 | 6.69 | 64000 | 0.4222 | 0.1970 |
| 0.1624 | 6.75 | 64500 | 0.4318 | 0.1971 |
| 0.1474 | 6.8 | 65000 | 0.4265 | 0.1961 |
| 0.1495 | 6.85 | 65500 | 0.4316 | 0.1940 |
| 0.1509 | 6.9 | 66000 | 0.4297 | 0.1965 |
| 0.1479 | 6.95 | 66500 | 0.4232 | 0.1966 |
| 0.1462 | 7.01 | 67000 | 0.4090 | 0.1946 |
| 0.1498 | 7.06 | 67500 | 0.4197 | 0.1939 |
| 0.1436 | 7.11 | 68000 | 0.4215 | 0.1956 |
| 0.1378 | 7.16 | 68500 | 0.4345 | 0.1968 |
| 0.3082 | 7.22 | 69000 | 0.4364 | 0.1972 |
| 0.1386 | 7.27 | 69500 | 0.4284 | 0.1949 |
| 0.1441 | 7.32 | 70000 | 0.4019 | 0.1953 |
| 0.1624 | 7.37 | 70500 | 0.4175 | 0.1951 |
| 0.1454 | 7.43 | 71000 | 0.4224 | 0.1922 |
| 0.1408 | 7.48 | 71500 | 0.4128 | 0.1961 |
| 0.1525 | 7.53 | 72000 | 0.4200 | 0.1946 |
| 0.1459 | 7.58 | 72500 | 0.4166 | 0.1949 |
| 0.1485 | 7.63 | 73000 | 0.4102 | 0.1947 |
| 0.148 | 7.69 | 73500 | 0.4237 | 0.1948 |
| 0.1478 | 7.74 | 74000 | 0.4104 | 0.1928 |
| 0.14 | 7.79 | 74500 | 0.4027 | 0.1928 |
| 0.1473 | 7.84 | 75000 | 0.4034 | 0.1907 |
| 0.1394 | 7.9 | 75500 | 0.3823 | 0.1923 |
| 0.1324 | 7.95 | 76000 | 0.3987 | 0.1899 |
| 0.1459 | 8.0 | 76500 | 0.4003 | 0.1907 |
| 0.1373 | 8.05 | 77000 | 0.4204 | 0.1925 |
| 0.1303 | 8.1 | 77500 | 0.4218 | 0.1907 |
| 0.1346 | 8.16 | 78000 | 0.4091 | 0.1882 |
| 0.2947 | 8.21 | 78500 | 0.4156 | 0.1890 |
| 0.1324 | 8.26 | 79000 | 0.4280 | 0.1888 |
| 0.132 | 8.31 | 79500 | 0.4136 | 0.1873 |
| 0.1377 | 8.37 | 80000 | 0.4099 | 0.1915 |
| 0.3045 | 8.42 | 80500 | 0.4201 | 0.1900 |
| 0.1372 | 8.47 | 81000 | 0.4161 | 0.1876 |
| 0.1377 | 8.52 | 81500 | 0.4107 | 0.1869 |
| 0.1374 | 8.58 | 82000 | 0.4188 | 0.1875 |
| 0.1301 | 8.63 | 82500 | 0.4306 | 0.1860 |
| 0.1386 | 8.68 | 83000 | 0.4131 | 0.1862 |
| 0.1292 | 8.73 | 83500 | 0.3997 | 0.1871 |
| 0.1276 | 8.78 | 84000 | 0.4237 | 0.1873 |
| 0.1377 | 8.84 | 84500 | 0.4284 | 0.1889 |
| 0.1338 | 8.89 | 85000 | 0.4205 | 0.1861 |
| 0.1284 | 8.94 | 85500 | 0.4380 | 0.1875 |
| 0.1471 | 8.99 | 86000 | 0.4238 | 0.1895 |
| 0.1186 | 9.05 | 86500 | 0.4128 | 0.1875 |
| 0.1222 | 9.1 | 87000 | 0.4267 | 0.1864 |
| 0.1229 | 9.15 | 87500 | 0.4169 | 0.1842 |
| 0.1259 | 9.2 | 88000 | 0.4327 | 0.1861 |
| 0.1281 | 9.26 | 88500 | 0.4188 | 0.1877 |
| 0.1247 | 9.31 | 89000 | 0.4212 | 0.1852 |
| 0.1248 | 9.36 | 89500 | 0.4172 | 0.1863 |
| 0.1232 | 9.41 | 90000 | 0.4173 | 0.1858 |
| 0.3255 | 9.46 | 90500 | 0.4225 | 0.1851 |
| 0.1243 | 9.52 | 91000 | 0.4290 | 0.1849 |
| 0.1266 | 9.57 | 91500 | 0.4186 | 0.1842 |
| 0.1257 | 9.62 | 92000 | 0.4364 | 0.1860 |
| 0.1181 | 9.67 | 92500 | 0.4294 | 0.1852 |
| 0.1202 | 9.73 | 93000 | 0.4222 | 0.1836 |
| 0.1264 | 9.78 | 93500 | 0.4191 | 0.1856 |
| 0.1243 | 9.83 | 94000 | 0.4237 | 0.1856 |
| 0.1164 | 9.88 | 94500 | 0.4281 | 0.1848 |
| 0.1283 | 9.94 | 95000 | 0.4332 | 0.1845 |
| 0.123 | 9.99 | 95500 | 0.4316 | 0.1839 |
| 0.1232 | 10.04 | 96000 | 0.4313 | 0.1844 |
| 0.1206 | 10.09 | 96500 | 0.4303 | 0.1840 |
| 0.1145 | 10.14 | 97000 | 0.4299 | 0.1822 |
| 0.1265 | 10.2 | 97500 | 0.4266 | 0.1822 |
| 0.1147 | 10.25 | 98000 | 0.4322 | 0.1844 |
| 0.1122 | 10.3 | 98500 | 0.4251 | 0.1830 |
| 0.1101 | 10.35 | 99000 | 0.4297 | 0.1830 |
| 0.1225 | 10.41 | 99500 | 0.4244 | 0.1842 |
| 0.1177 | 10.46 | 100000 | 0.4343 | 0.1826 |
| 0.1157 | 10.51 | 100500 | 0.4228 | 0.1827 |
| 0.1215 | 10.56 | 101000 | 0.4285 | 0.1814 |
| 0.276 | 10.61 | 101500 | 0.4268 | 0.1820 |
| 0.111 | 10.67 | 102000 | 0.4288 | 0.1836 |
| 0.1164 | 10.72 | 102500 | 0.4283 | 0.1825 |
| 0.111 | 10.77 | 103000 | 0.4198 | 0.1819 |
| 0.1135 | 10.82 | 103500 | 0.4333 | 0.1818 |
| 0.1196 | 10.88 | 104000 | 0.4239 | 0.1817 |
| 0.1176 | 10.93 | 104500 | 0.4252 | 0.1819 |
| 0.117 | 10.98 | 105000 | 0.4317 | 0.1820 |
| 0.1166 | 11.03 | 105500 | 0.4307 | 0.1815 |
| 0.1118 | 11.09 | 106000 | 0.4379 | 0.1821 |
| 0.1116 | 11.14 | 106500 | 0.4363 | 0.1812 |
| 0.1098 | 11.19 | 107000 | 0.4328 | 0.1816 |
| 0.1134 | 11.24 | 107500 | 0.4284 | 0.1811 |
| 0.1104 | 11.29 | 108000 | 0.4365 | 0.1801 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
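## Example usage
A minimal usage sketch via the `automatic-speech-recognition` pipeline; the audio path below is a placeholder and the input is assumed to be 16 kHz mono English speech.
```python
from transformers import pipeline

# Load the fine-tuned WavLM CTC model for English speech recognition.
asr = pipeline("automatic-speech-recognition", model="danieleV9H/wavlm-base-plus-ft-cv3")

# "sample.wav" is a placeholder path; replace it with your own 16 kHz mono recording.
print(asr("sample.wav"))
```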
|
Aktsvigun/bart-base_aeslc_42 | 8e472a9b1ed30a899ac2fa051a9423e039d238dd | 2022-07-07T15:44:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_42 | 40 | null | transformers | 6,492 | Entry not found |
BigSalmon/GPTNeo1.3BInformalToFormal | 1f64abdeba88796c6268597ef6e37eeeb676a2e9 | 2022-07-17T14:11:25.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTNeo1.3BInformalToFormal | 40 | null | transformers | 6,493 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
make longer
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet: embodies compassion.
longer: is the personification of compassion.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: work in an office ).
translated into journalism speak: ( beaver away in windowless offices / toil in drab cubicles / clock in at faceless workstations / report for duty in cheerless quarters / log hours in colorless confines / clack away on keyboards in offices with cinderblock walls / stare at computer screens in bland partitions / shuffle through mounds of paperwork in humdrum offices ).
***
original: easy job ).
translated into journalism speak: ( cushy / hassle-free / uninvolved / vanilla / sedentary / straightforward / effortless / lax / plush / frictionless / painless ) ( gig / perch / post / trade / calling / paycheck ).
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
original: big businesses ).
translated into journalism speak: corporate ( behemoths / heavyweights / titans / steamrollers / powerhouses / bigwigs / kahunas / brutes / honchos / barons / kingpins / rainmakers / headliners ).
***
original: environmental movement ).
translated into journalism speak: ( green lobby / conservationist camp / tree-huggers / ecology-obsessed / sustainability crusaders / preservation-crazed / ecological campaigners ).
***
original:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
``` |
Ahmed007/T5-as-chat-bot | f4824f01599970a8a94bceeedb62b7f776588b6c | 2022-07-19T20:20:47.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Ahmed007 | null | Ahmed007/T5-as-chat-bot | 40 | null | transformers | 6,494 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: T5-as-chat-bot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-as-chat-bot
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 187 | 2.4258 |
| No log | 2.0 | 374 | 2.3627 |
| 2.5802 | 3.0 | 561 | 2.3284 |
| 2.5802 | 4.0 | 748 | 2.3109 |
| 2.5802 | 5.0 | 935 | 2.2958 |
| 2.3212 | 6.0 | 1122 | 2.2850 |
| 2.3212 | 7.0 | 1309 | 2.2779 |
| 2.3212 | 8.0 | 1496 | 2.2726 |
| 2.1892 | 9.0 | 1683 | 2.2703 |
| 2.1892 | 10.0 | 1870 | 2.2689 |
| 2.111 | 11.0 | 2057 | 2.2683 |
| 2.111 | 12.0 | 2244 | 2.2672 |
| 2.111 | 13.0 | 2431 | 2.2655 |
| 2.0484 | 14.0 | 2618 | 2.2685 |
| 2.0484 | 15.0 | 2805 | 2.2703 |
| 2.0484 | 16.0 | 2992 | 2.2698 |
| 2.0019 | 17.0 | 3179 | 2.2699 |
| 2.0019 | 18.0 | 3366 | 2.2715 |
| 1.9803 | 19.0 | 3553 | 2.2719 |
| 1.9803 | 20.0 | 3740 | 2.2717 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
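## Example usage
A minimal usage sketch via the `text2text-generation` pipeline; the card does not document the expected prompt format, so the single-turn input below is an assumption.
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a text-to-text generator.
chat = pipeline("text2text-generation", model="Ahmed007/T5-as-chat-bot")

# Illustrative single-turn prompt; the expected dialogue format is not documented.
reply = chat("Hi, how are you doing today?", max_new_tokens=40)
print(reply[0]["generated_text"])
```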
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | 3b9835199ee8e65c65ca320567fe8e5a5ca3698c | 2021-09-14T14:26:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | 39 | 2 | transformers | 6,495 | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-sixteenth** (`bert-base-arabic-camelbert-msa-sixteenth`), a model pre-trained on a sixteenth of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
|✔|`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو التغيير. [SEP]',
'score': 0.08320745080709457,
'token': 7946,
'token_str': 'التغيير'},
{'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]',
'score': 0.04305094853043556,
'token': 12554,
'token_str': 'التعلم'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.0417640283703804,
'token': 2854,
'token_str': 'العمل'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.041371218860149384,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو المعرفة. [SEP]',
'score': 0.039794355630874634,
'token': 7344,
'token_str': 'المعرفة'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (a brief sketch follows this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
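As a rough illustration of this setup (not the authors' released fine-tuning code), the sketch below loads the model with a linear classification head for a sentence-level task; `num_labels=3` is an assumed label count for a three-way sentiment scheme.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Linear classification head on top of the encoder; num_labels=3 is an assumption.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

inputs = tokenizer("الفيلم كان رائعا", return_tensors="pt")  # "The movie was great."
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class id (head is untrained here)
```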
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
Emanuel/autonlp-pos-tag-bosque | 145a83cb3b508cd334eae8dcfa370ed653a9308d | 2021-10-19T12:09:29.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:Emanuel/autonlp-data-pos-tag-bosque",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Emanuel | null | Emanuel/autonlp-pos-tag-bosque | 39 | 2 | transformers | 6,496 | ---
tags: autonlp
language: pt
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Emanuel/autonlp-data-pos-tag-bosque
co2_eq_emissions: 6.2107269129101805
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 21124427
- CO2 Emissions (in grams): 6.2107269129101805
## Validation Metrics
- Loss: 0.09813392907381058
- Accuracy: 0.9714309035997062
- Precision: 0.9721275936822545
- Recall: 0.9735345807918949
- F1: 0.9728305785123967
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Emanuel/autonlp-pos-tag-bosque-21124427
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained("Emanuel/autonlp-pos-tag-bosque")
tokenizer = AutoTokenizer.from_pretrained("Emanuel/autonlp-pos-tag-bosque")

inputs = tokenizer("A noiva casa de branco", return_tensors="pt")
outputs = model(**inputs)
labelids = outputs.logits.squeeze().argmax(axis=-1)
labels = [model.config.id2label[int(x)] for x in labelids]
labels = labels[1:-1]  # Filter start and end of sentence symbols
``` |
Geotrend/bert-base-pt-cased | 793cd00d9242c56f3dccf438a75966f92035b487 | 2021-05-18T20:06:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"pt",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-pt-cased | 39 | null | transformers | 6,497 | ---
language: pt
datasets: wikipedia
license: apache-2.0
---
# bert-base-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-pt-cased")
```
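As a short extension of the snippet above (not part of the original card), the following shows how to obtain contextual representations for a Portuguese sentence with the reduced model:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-pt-cased")

inputs = tokenizer("Olá, mundo!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, num_tokens, hidden_size)
```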
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
HScomcom/gpt2-fairytales | 8c9263255ae9c24840546543defd0cbf072f38a0 | 2021-05-21T10:16:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | HScomcom | null | HScomcom/gpt2-fairytales | 39 | null | transformers | 6,498 | ### Model information
Fine tuning data: https://www.kaggle.com/cuddlefish/fairy-tales
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 17861.6048 secs
Loss: 0.0412
API page: [Ainize](https://ainize.ai/fpem123/GPT2-FairyTales?branch=master)
Demo page: [End-point](https://master-gpt2-fairy-tales-fpem123.endpoint.ainize.ai/)
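A minimal usage sketch (not part of the original card) for generating a fairy-tale-style continuation with the `transformers` pipeline:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="HScomcom/gpt2-fairytales")
prompt = "Once upon a time, in a kingdom far away,"
story = generator(prompt, max_length=100, do_sample=True, top_p=0.95)
print(story[0]["generated_text"])
```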
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and having GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other fairytale model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-fairy-tales/68) |
Hate-speech-CNERG/dehatebert-mono-polish | ec586b2e2e6140879c6f533ccd5208d1c2692715 | 2021-09-25T13:58:40.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"pl",
"arxiv:2004.06465",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-polish | 39 | null | transformers | 6,499 | ---
language: pl
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Polish** language. The mono in the name refers to the monolingual setting, where the model is trained using only Polish data. It is fine-tuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.723254 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
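A minimal usage sketch (assuming the hosted checkpoint includes the fine-tuned classification head, as with the other dehatebert-mono models):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Hate-speech-CNERG/dehatebert-mono-polish")
print(classifier("To jest przykładowe zdanie."))  # "This is an example sentence."
```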
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|