modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-guw-en | 84fb2fa0624cd832855fb196770f4e294c2df8d5 | 2021-09-09T21:59:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"guw",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-guw-en | 22 | null | transformers | 8,000 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-guw-en
* source languages: guw
* target languages: en
* OPUS readme: [guw-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.eval.txt)
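The card does not include a usage snippet; as a minimal, hedged sketch (not part of the original card), the checkpoint can be loaded with the standard transformers translation pipeline. The placeholder input text is illustrative only.
```python
from transformers import pipeline

# Minimal sketch: load the checkpoint with the generic translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-guw-en")

source_text = "..."  # any Gun (guw) sentence
print(translator(source_text, max_length=128))
```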
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.en | 44.8 | 0.601 |
|
Helsinki-NLP/opus-mt-lu-en | e5a8fbfce6798964395091f07a440a5b568229c7 | 2021-09-10T13:55:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lu",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lu-en | 22 | null | transformers | 8,001 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lu-en
* source languages: lu
* target languages: en
* OPUS readme: [lu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.en | 35.7 | 0.517 |
|
Helsinki-NLP/opus-mt-ng-en | 57bfb4a1922ad1f807a8e951ee46145b9dc45dce | 2021-09-10T13:58:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ng",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ng-en | 22 | null | transformers | 8,002 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ng-en
* source languages: ng
* target languages: en
* OPUS readme: [ng-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ng-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ng.en | 27.3 | 0.443 |
|
Helsinki-NLP/opus-mt-rn-en | 209441b0eb367684cdbb2b4e852571afbbaee771 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"rn",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-rn-en | 22 | null | transformers | 8,003 | ---
language:
- rn
- en
tags:
- translation
license: apache-2.0
---
### run-eng
* source group: Rundi
* target group: English
* OPUS readme: [run-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-eng/README.md)
* model: transformer-align
* source language(s): run
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.eng | 26.7 | 0.428 |
### System Info:
- hf_name: run-eng
- source_languages: run
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'en']
- src_constituents: {'run'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: eng
- short_pair: rn-en
- chrF2_score: 0.428
- bleu: 26.7
- brevity_penalty: 0.99
- ref_len: 10041.0
- src_name: Rundi
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: en
- prefer_old: False
- long_pair: run-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-to-en | 00a0ae6a795fc4cc7da526bc63212fc1a763513c | 2021-09-11T10:48:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"to",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-to-en | 22 | null | transformers | 8,004 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-to-en
* source languages: to
* target languages: en
* OPUS readme: [to-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.en | 49.3 | 0.627 |
|
Helsinki-NLP/opus-mt-umb-en | 4baaf48bdbc9f45e0abdbae7490e5ccaa39c12e7 | 2021-09-11T10:51:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"umb",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-umb-en | 22 | null | transformers | 8,005 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-umb-en
* source languages: umb
* target languages: en
* OPUS readme: [umb-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/umb-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.umb.en | 27.5 | 0.425 |
|
Helsinki-NLP/opus-mt-urj-urj | 7a66b71ec9428eb22fe49e189929c64466a45f43 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"se",
"fi",
"hu",
"et",
"urj",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-urj-urj | 22 | null | transformers | 8,006 | ---
language:
- se
- fi
- hu
- et
- urj
tags:
- translation
license: apache-2.0
---
### urj-urj
* source group: Uralic languages
* target group: Uralic languages
* OPUS readme: [urj-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.eval.txt)
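Since the card only notes that a `>>id<<` token is required, here is a hedged sketch (not from the original card) of how the target-language token might be supplied through the transformers Marian classes; the Estonian example sentence and the `>>fin<<` target ID are purely illustrative.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-urj-urj"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token selects the target language (here Finnish).
src_text = [">>fin<< See on ilus päev."]  # illustrative Estonian input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```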
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est-est.est.est | 5.1 | 0.288 |
| Tatoeba-test.est-fin.est.fin | 50.9 | 0.709 |
| Tatoeba-test.est-fkv.est.fkv | 0.7 | 0.215 |
| Tatoeba-test.est-vep.est.vep | 1.0 | 0.154 |
| Tatoeba-test.fin-est.fin.est | 55.5 | 0.718 |
| Tatoeba-test.fin-fkv.fin.fkv | 1.8 | 0.254 |
| Tatoeba-test.fin-hun.fin.hun | 45.0 | 0.672 |
| Tatoeba-test.fin-izh.fin.izh | 7.1 | 0.492 |
| Tatoeba-test.fin-krl.fin.krl | 2.6 | 0.278 |
| Tatoeba-test.fkv-est.fkv.est | 0.6 | 0.099 |
| Tatoeba-test.fkv-fin.fkv.fin | 15.5 | 0.444 |
| Tatoeba-test.fkv-liv.fkv.liv | 0.6 | 0.101 |
| Tatoeba-test.fkv-vep.fkv.vep | 0.6 | 0.113 |
| Tatoeba-test.hun-fin.hun.fin | 46.3 | 0.675 |
| Tatoeba-test.izh-fin.izh.fin | 13.4 | 0.431 |
| Tatoeba-test.izh-krl.izh.krl | 2.9 | 0.078 |
| Tatoeba-test.krl-fin.krl.fin | 14.1 | 0.439 |
| Tatoeba-test.krl-izh.krl.izh | 1.0 | 0.125 |
| Tatoeba-test.liv-fkv.liv.fkv | 0.9 | 0.170 |
| Tatoeba-test.liv-vep.liv.vep | 2.6 | 0.176 |
| Tatoeba-test.multi.multi | 32.9 | 0.580 |
| Tatoeba-test.vep-est.vep.est | 3.4 | 0.265 |
| Tatoeba-test.vep-fkv.vep.fkv | 0.9 | 0.239 |
| Tatoeba-test.vep-liv.vep.liv | 2.6 | 0.190 |
### System Info:
- hf_name: urj-urj
- source_languages: urj
- target_languages: urj
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'urj']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt
- src_alpha3: urj
- tgt_alpha3: urj
- short_pair: urj-urj
- chrF2_score: 0.58
- bleu: 32.9
- brevity_penalty: 1.0
- ref_len: 19444.0
- src_name: Uralic languages
- tgt_name: Uralic languages
- train_date: 2020-07-27
- src_alpha2: urj
- tgt_alpha2: urj
- prefer_old: False
- long_pair: urj-urj
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
RecordedFuture/Swedish-Sentiment-Violence | 881e0f742307fa2740b47a7d96750d28cf8ff99f | 2021-05-18T22:02:50.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"sv",
"transformers",
"license:mit"
] | text-classification | false | RecordedFuture | null | RecordedFuture/Swedish-Sentiment-Violence | 22 | null | transformers | 8,007 | ---
language: sv
license: mit
---
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and support only inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported with Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
```python
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
```
When the model and tokenizer are initialized the model can be used for inference.
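The card does not show the inference call itself, so the following is a hedged sketch: it assumes that a softmax over the classifier logits yields the three floats described above, and the Swedish example sentence is invented.
```python
import torch

# Illustrative input; tokenizer and classifier_fear come from the snippet above.
text = "Jag är rädd för att gå ut ensam på kvällen."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = classifier_fear(**inputs).logits

# Softmax gives three floats for "Negative", "Weak sentiment", "Strong sentiment".
scores = torch.softmax(logits, dim=-1).squeeze().tolist()
print(scores)
```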
#### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
- Hold an expressive emphasis on fear and/or anxiety
#### The weak sentiment includes, but is not limited to
Texts that:
- Express fear and/or anxiety in a neutral way
#### Verification metrics
During training, the model achieved its maximum validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
```python
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes, but is not limited to
Texts that:
- Reference highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes, but is not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model achieved its maximum validation metrics at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 | |
SEBIS/code_trans_t5_base_code_documentation_generation_javascript_multitask | 2572188f11a56254b981bac5ea59275b3d769550 | 2021-06-23T04:29:50.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_javascript_multitask | 22 | null | transformers | 8,008 | ---
tags:
- summarization
widget:
- text: "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
---
# CodeTrans model for code documentation generation javascript
Pretrained model on programming language javascript using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_javascript_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/javascript/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 440,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
aditeyabaral/sentencetransformer-bert-hinglish-small | 494792d8ff0aa558e2d2bc825814b536d766732e | 2021-10-20T06:28:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-bert-hinglish-small | 22 | null | sentence-transformers | 8,009 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
airesearch/wangchanberta-base-wiki-sefr | 9c7f7cbe9fdf6ec51696769591e11eb8c1b99e76 | 2021-09-11T09:39:05.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"transformers",
"autotrain_compatible"
] | fill-mask | false | airesearch | null | airesearch/wangchanberta-base-wiki-sefr | 22 | null | transformers | 8,010 | ---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-sefr`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-sefr` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with `<_>`.
<br>
Regarding the vocabulary, we use the Stacked Ensemble Filter and Refine (SEFR) tokenizer (`engine="best"`) [[Limkonchotiwat et al., 2020]](https://www.aclweb.org/anthology/2020.emnlp-main.315/) based on probabilities from the CNN-based `deepcut` [[Kittinaradorn et al., 2019]](http://doi.org/10.5281/zenodo.3457707). The total number of word-level tokens in the vocabulary is 92,177.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with a `<mask>` token. Out of that 15%, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
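As an illustration only (this is not the project's training code), the 80/10/10 masking scheme described above can be sketched as follows.
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Sketch of the standard 80/10/10 masking procedure described above."""
    labels = [-100] * len(token_ids)      # -100: position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:    # sample 15% of the tokens
            labels[i] = tok               # the original token is the prediction target
            r = random.random()
            if r < 0.8:                   # 80%: replace with <mask>
                token_ids[i] = mask_id
            elif r < 0.9:                 # 10%: replace with a random token
                token_ids[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return token_ids, labels
```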
<br>
**Train/Val/Test splits**
We split sequentially: 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with a batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with a learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1,250 steps and linearly decayed to zero. The model checkpoint with the minimum validation loss was selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
alexanderfalk/danbert-small-cased | 2f63f543a11ad95cff4149288d04714da11167c4 | 2021-09-21T15:57:39.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"da",
"en",
"dataset:custom danish dataset",
"transformers",
"named entity recognition",
"token criticality",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | alexanderfalk | null | alexanderfalk/danbert-small-cased | 22 | null | transformers | 8,011 | ---
language:
- da
- en
thumbnail:
tags:
- named entity recognition
- token criticality
license: apache-2.0
datasets:
- custom danish dataset
inference: false
metrics:
- array of metric identifiers
---
# DanBERT
## Model description
DanBERT is a Danish pre-trained model based on BERT-Base. The pre-trained model has been trained on more than 2 million sentences and 40 million Danish words. The training has been conducted as part of a thesis.
The model can be found at:
* [danbert-da](https://huggingface.co/alexanderfalk/danbert-small-cased)
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("alexanderfalk/danbert-small-cased")
model = AutoModel.from_pretrained("alexanderfalk/danbert-small-cased")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={Anonymization of Danish, Real-Time Data, and Personalized Modelling},
author={Alexander Falk},
}
``` |
beomi/kobert | 372ec671481c751af771f28c6f191d420d1a1d86 | 2021-06-08T08:36:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | beomi | null | beomi/kobert | 22 | null | transformers | 8,012 | Entry not found |
birgermoell/roberta-swedish | 8b52b9fc6d6589e3dff4bee44c3e811315c1e071 | 2021-07-17T07:52:59.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | birgermoell | null | birgermoell/roberta-swedish | 22 | null | transformers | 8,013 | ---
widget:
- text: "Var kan jag hitta någon <mask> talar engelska?"
---
Swedish RoBERTa
## Model series
This model is part of a series of models trained on TPUs with Flax/Jax during the Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
bochaowei/t5-small-finetuned-cnn-wei1 | 5996b8c932368e7d0b076ff93d6fe5825ac0039c | 2021-10-28T20:24:24.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bochaowei | null | bochaowei/t5-small-finetuned-cnn-wei1 | 22 | null | transformers | 8,014 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-wei1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 41.1796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-wei1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6819
- Rouge1: 41.1796
- Rouge2: 18.9426
- Rougel: 29.2338
- Rougelsum: 38.4087
- Gen Len: 72.7607
## Model description
More information needed
## Intended uses & limitations
More information needed
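Although the card leaves this section empty, the checkpoint is a t5-small fine-tuned on cnn_dailymail for summarization, so a minimal hedged sketch (the article text and generation settings are placeholders) might look like this.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-cnn-wei1")

article = "..."  # any news article text
# Depending on how the model was fine-tuned, a "summarize: " source prefix may be needed.
print(summarizer(article, max_length=80, min_length=20, do_sample=False))
```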
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8582 | 1.0 | 23927 | 1.6819 | 41.1796 | 18.9426 | 29.2338 | 38.4087 | 72.7607 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bowipawan/bert-sentimental | 2f9b7e4ef23b55b98cdb38153c67548da073cee5 | 2021-11-12T13:47:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | bowipawan | null | bowipawan/bert-sentimental | 22 | null | transformers | 8,015 | For studying only |
cactode/gpt2_urbandict_textgen_torch | 32706d845387d1a05aaa58ad87a9b7e36f6957ae | 2021-11-05T03:53:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cactode | null | cactode/gpt2_urbandict_textgen_torch | 22 | null | transformers | 8,016 | # GPT2 Fine Tuned on UrbanDictionary
Honestly a little horrifying, but still funny.
## Usage
Use with GPT2Tokenizer. Pad token should be set to the EOS token.
Inputs should be of the form "define <your word>: ".
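Putting the notes above together, a hedged usage sketch might look like the following; it assumes the repository ships its tokenizer files, and the word "yeet" and the sampling settings are only illustrative.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("cactode/gpt2_urbandict_textgen_torch")
tokenizer.pad_token = tokenizer.eos_token  # pad token set to the EOS token, as noted above
model = GPT2LMHeadModel.from_pretrained("cactode/gpt2_urbandict_textgen_torch")

inputs = tokenizer("define yeet: ", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```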
## Training Data
All training data was obtained from [Urban Dictionary Words And Definitions on Kaggle](https://www.kaggle.com/therohk/urban-dictionary-words-dataset). Data was additionally filtered, normalized, and spell-checked.
## Bias
This model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e. definitions with ethnic or gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions. |
cahya/wav2vec2-luganda | ad1c5c036b67f488c416912d0aa3f6c0b65c1fa2 | 2022-03-23T18:27:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lg",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"audio",
"common_voice",
"hf-asr-leaderboard",
"robust-speech-event",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-luganda | 22 | null | transformers | 8,017 | ---
language: lg
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- lg
- robust-speech-event
- speech
license: apache-2.0
model-index:
- name: Wav2Vec2 Luganda by Indonesian-NLP
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lg
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 9.332
- name: Test CER
type: cer
value: 1.987
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lg
metrics:
- name: Test WER
type: wer
value: 13.844
- name: Test CER
type: cer
value: 2.68
---
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
WER without KenLM: 15.38 %
WER With KenLM:
**Test Result**: 7.53 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
|
castorini/monot5-large-msmarco | 48cfad1d8dd587670393f27ee8ec41fde63e3d98 | 2021-10-17T11:20:56.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/monot5-large-msmarco | 22 | null | transformers | 8,018 | This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs).
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
dbmdz/electra-base-french-europeana-cased-discriminator | 685c31965459d92093facd8b2b31ee164ffc031e | 2021-09-13T21:05:37.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"fr",
"transformers",
"historic french",
"license:mit"
] | null | false | dbmdz | null | dbmdz/electra-base-french-europeana-cased-discriminator | 22 | 1 | transformers | 8,019 | ---
language: fr
license: mit
tags:
- "historic french"
---
# 🤗 + 📚 dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models 🎉
# French Europeana ELECTRA
We extracted all French texts using the `language` metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main)
* French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download our models from their S3 storage 🤗
|
dbmdz/electra-base-turkish-cased-v0-generator | 1369328c4a81db81cfa654ceabb4eafb19ab1df1 | 2020-04-24T15:57:22.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/electra-base-turkish-cased-v0-generator | 22 | null | transformers | 8,020 | Entry not found |
dbragdon/noam-masked-lm | 4cc42aa0b51d2907f1f690098e32336205081fda | 2021-06-10T17:21:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbragdon | null | dbragdon/noam-masked-lm | 22 | null | transformers | 8,021 | Masked Language Model trained on the articles and talks of Noam Chomsky. |
ddobokki/klue-roberta-small-nli-sts | e2a0bafb78d6395e6f9bc8fc35338a998eaa9eb0 | 2022-04-14T08:08:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"ko"
] | sentence-similarity | false | ddobokki | null | ddobokki/klue-roberta-small-nli-sts | 22 | 1 | sentence-transformers | 8,022 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- ko
---
# ddobokki/klue-roberta-small-nli-sts
This is a Korean Sentence Transformer model.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
You can use this model with the [sentence-transformers](https://www.SBERT.net) library:
```
pip install -U sentence-transformers
```
Usage:
```python
from sentence_transformers import SentenceTransformer
sentences = ["흐르는 강물을 거꾸로 거슬러 오르는", "세월이 가면 가슴이 터질 듯한"]
model = SentenceTransformer('ddobokki/klue-roberta-small-nli-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
If you use only the transformers library (without sentence-transformers):
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["흐르는 강물을 거꾸로 거슬러 오르는", "세월이 가면 가슴이 터질 듯한"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ddobokki/klue-roberta-small-nli-sts')
model = AutoModel.from_pretrained('ddobokki/klue-roberta-small-nli-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Performance
- Semantic Textual Similarity test set results <br>
| Model | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| KoSRoBERTa<sup>small</sup> | 84.27 | 84.17 | 83.33 | 83.65 | 83.34 | 83.65 | 82.10 | 81.38 |
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
dhikri/question_answering_glue | 598dde6797de2e74ec4f04bed3584fd3ea202e0b | 2021-02-22T08:49:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | dhikri | null | dhikri/question_answering_glue | 22 | null | transformers | 8,023 | "hello"
|
diiogo/electra-base | f8eb03cbba5bbf76c8657231115810459a0ba933 | 2021-12-17T17:33:23.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | diiogo | null | diiogo/electra-base | 22 | null | transformers | 8,024 | Entry not found |
dsilin/detok-deberta-xl | 40bc550b16f9c35b2c93d5c9d4fe90a320c86c3c | 2021-05-10T23:15:59.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"english",
"transformers",
"autotrain_compatible"
] | token-classification | false | dsilin | null | dsilin/detok-deberta-xl | 22 | null | transformers | 8,025 | ---
language: english
widget:
- text: "They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,"
- text: "\" We 'll talk go by now ; \" says Shucksmith ;"
- text: "\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said ."
---
This model can be used to detokenize the output of the Moses tokenizer more accurately (it does a better job with certain lossy quotes and similar cases).
batched usage:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed setup (not part of the original snippet): load this model and its tokenizer.
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("dsilin/detok-deberta-xl")
model = AutoModelForTokenClassification.from_pretrained("dsilin/detok-deberta-xl").to(device)

sentences = [
"They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,",
"\" We 'll talk go by now ; \" says Shucksmith ;",
"He 'll enjoy it more now that this he be dead , if put 'll pardon the expression .",
"I think you 'll be amazed at this way it finds ,",
"Michigan voters ^ are so frightened of fallen in permanent economic collapse that they 'll grab onto anything .",
"You 'll finding outs episode 4 .",
"\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said .",
"You can look at the things I 'm saying about my record and about the events of campaign and history and you 'll find if now and and then I miss a words or I get something slightly off , I 'll correct it , acknowledge where it are wrong .",
"Wonder if 'll alive to see .",
"We 'll have to combine and a numbered of people ."
]
def sentences_to_input_tokens(sentences):
all_tokens = []
max_length = 0
sents_tokens = []
iids = tokenizer(sentences)
for sent_tokens in iids['input_ids']:
sents_tokens.append(sent_tokens)
if len(sent_tokens) > max_length:
max_length = len(sent_tokens)
attention_mask = [1] * len(sent_tokens)
pos_ids = list(range(len(sent_tokens)))
encoding = {
"iids": sent_tokens,
"am": attention_mask,
"pos": pos_ids
}
all_tokens.append(encoding)
input_ids = []
attention_masks = []
position_ids = []
for i in range(len(all_tokens)):
encoding = all_tokens[i]
pad_len = max_length - len(encoding['iids'])
attention_masks.append(encoding['am'] + [0] * pad_len)
position_ids.append(encoding['pos'] + [0] * pad_len)
input_ids.append(encoding['iids'] + [tokenizer.pad_token_id] * pad_len)
encoding = {
"input_ids": torch.tensor(input_ids).to(device),
"attention_mask": torch.tensor(attention_masks).to(device),
"position_ids": torch.tensor(position_ids).to(device)
}
return encoding, sents_tokens
def run_token_predictor_sentences(sentences):
encoding, at = sentences_to_input_tokens(sentences)
predictions = model(**encoding)[0].cpu().tolist()
outstrs = []
for i in range(len(predictions)):
outstr = ""
for p in zip(tokenizer.convert_ids_to_tokens(at[i][1:-1]), predictions[i][1:-1]):
if not "▁" in p[0]:
outstr+=p[0]
else:
if p[1][0] > p[1][1]:
outstr+=p[0].replace("▁", " ")
else:
outstr+=p[0].replace("▁", "")
outstrs.append(outstr.strip())
return outstrs
outs = run_token_predictor_sentences(sentences)
for p in zip(outs, sentences):
print(p[1])
print(p[0])
print('\n------\n')
``` |
facebook/s2t-small-covost2-de-en-st | 2d3a4e9f1046e3fecedf7b6c710aae8f9ec00f78 | 2022-02-07T15:13:02.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"de",
"en",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"transformers",
"audio",
"speech-translation",
"license:mit"
] | automatic-speech-recognition | false | facebook | null | facebook/s2t-small-covost2-de-en-st | 22 | null | transformers | 8,026 | ---
language:
- de
- en
datasets:
- covost2
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-COVOST2-DE-EN-ST
`s2t-small-covost2-de-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end German speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this a standard sequence to sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install transformers"[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-de-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-de-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-de-en-st is trained on the German-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for de-en (BLEU score): 17.58
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
finiteautomata/betonews-tweetcontext | d72a1918e1c1d64b8e193d3694d4f42c284b05ac | 2021-10-03T15:44:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | finiteautomata | null | finiteautomata/betonews-tweetcontext | 22 | null | transformers | 8,027 | Entry not found |
flax-community/bertin-roberta-large-spanish | 1bec2392a37173d35ce6e5dfa1b408b14b39d168 | 2021-09-23T13:53:03.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"es",
"transformers",
"spanish",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | flax-community | null | flax-community/bertin-roberta-large-spanish | 22 | null | transformers | 8,028 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---
# NOTE: This repository is now superseded by https://huggingface.co/bertin-project/bertin-roberta-base-spanish. This model corresponds to the `beta` version of the model using stepwise over sampling trained for 200k steps with 128 sequence lengths. Version 1 is now available and should be used instead.
# BERTIN
BERTIN is a series of BERT-based models for Spanish. This one is a RoBERTa-large model trained from scratch on the Spanish portion of mC4 using [Flax](https://github.com/google/flax); the training scripts are included in this repository.
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
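As a masked language model, the checkpoint can also be queried directly with the `fill-mask` pipeline. The sketch below is a minimal example; the prompt is just an illustration.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/bertin-roberta-large-spanish")

# Print the top predictions for the masked token.
for prediction in fill_mask("Fui a la librería a comprar un <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```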
## Spanish mC4
The Spanish portion of mC4 contains about 416 million records and 235 billion words.
```bash
$ zcat c4/multilingual/c4-es*.tfrecord*.json.gz | wc -l
416057992
```
```bash
$ zcat c4/multilingual/c4-es*.tfrecord-*.json.gz | jq -r '.text | split(" ") | length' | paste -s -d+ - | bc
235303687795
```
## Team members
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
## Useful links
- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Community Week thread](https://discuss.huggingface.co/t/bertin-pretrain-roberta-large-from-scratch-in-spanish/7125)
- [Community Week channel](https://discord.com/channels/858019234139602994/859113060068229190)
- [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
- [Model Repository](https://huggingface.co/flax-community/bertin-roberta-large-spanish/)
|
flax-community/gpt2-medium-indonesian | 23930cb6645dcc5208fea615079ff919e0900516 | 2021-09-02T12:22:45.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"id",
"transformers"
] | text-generation | false | flax-community | null | flax-community/gpt2-medium-indonesian | 22 | 1 | transformers | 8,029 | ---
language: id
widget:
- text: "Sewindu sudah kita tak berjumpa, rinduku padamu sudah tak terkira."
---
# GPT2-medium-indonesian
This is a pretrained model on Indonesian language using a causal language modeling (CLM) objective, which was first
introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian).
## How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='flax-community/gpt2-medium-indonesian')
>>> set_seed(42)
>>> generator("Sewindu sudah kita tak berjumpa,", max_length=30, num_return_sequences=5)
[{'generated_text': 'Sewindu sudah kita tak berjumpa, dua dekade lalu, saya hanya bertemu sekali. Entah mengapa, saya lebih nyaman berbicara dalam bahasa Indonesia, bahasa Indonesia'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi dalam dua hari ini, kita bisa saja bertemu.”\
“Kau tau, bagaimana dulu kita bertemu?” aku'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, banyak kisah yang tersimpan. Tak mudah tuk kembali ke pelukan, di mana kini kita berada, sebuah tempat yang jauh'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, sejak aku lulus kampus di Bandung, aku sempat mencari kabar tentangmu. Ah, masih ada tempat di hatiku,'},
{'generated_text': 'Sewindu sudah kita tak berjumpa, tapi Tuhan masih saja menyukarkan doa kita masing-masing.\
Tuhan akan memberi lebih dari apa yang kita'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian')
model = GPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('flax-community/gpt2-medium-indonesian')
model = TFGPT2Model.from_pretrained('flax-community/gpt2-medium-indonesian')
text = "Ubah dengan teks apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
The training data used for this model are Indonesian websites of [OSCAR](https://oscar-corpus.com/),
[mc4](https://huggingface.co/datasets/mc4) and [Wikipedia](https://huggingface.co/datasets/wikipedia). The datasets
contain a lot of unfiltered content from the internet, which is far from neutral. While we have done some filtering on
the dataset (see the **Training data** section), the filtering is by no means a thorough mitigation of biased content
that is eventually used by the training data. These biases might also affect models that are fine-tuned using this model.
As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we
> do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry
> out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender,
> race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with
> similar levels of caution around use cases that are sensitive to biases around human attributes.
We have done a basic bias analysis that you can find in this [notebook](https://huggingface.co/flax-community/gpt2-small-indonesian/blob/main/bias_analysis/gpt2_medium_indonesian_bias_analysis.ipynb), performed on [Indonesian GPT2 medium](https://huggingface.co/flax-community/gpt2-medium-indonesian), based on the bias analysis for [Polish GPT2](https://huggingface.co/flax-community/papuGaPT2) with modifications.
### Gender bias
We generated 50 texts starting with prompts "She/He works as". After doing some preprocessing (lowercase and stopwords removal) we obtain texts that are used to generate word clouds of female/male professions. The most salient terms for male professions are: driver, sopir (driver), ojek, tukang, online.

The most salient terms for female professions are: pegawai (employee), konsultan (consultant), asisten (assistant).

### Ethnicity bias
We generated 1,200 texts to assess bias across ethnicity and gender vectors, using prompts built with the following scheme:
* Person - we assessed 5 ethnicities: Sunda, Batak, Minahasa, Dayak, Asmat, plus Neutral (no ethnicity) as a baseline
* Topic - we used 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: *let [person] ...*
* define: *is*
Sample of generated prompt: "seorang perempuan sunda masuk ke rumah..." (a Sundanese woman enters the house...)
We used a [model](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-indonesian) trained on Indonesian hate speech corpus ([dataset 1](https://github.com/okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection), [dataset 2](https://github.com/ialfina/id-hatespeech-detection)) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the ethnicity and gender from the generated text before running the hate speech detector.
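A simplified sketch of this scoring setup is shown below. It is illustrative only: the prompt, generation settings and prefix-stripping are assumptions and may differ from the analysis notebook.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="flax-community/gpt2-medium-indonesian")
hate_detector = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-indonesian")

prompt = "seorang perempuan sunda masuk ke rumah"
samples = generator(prompt, max_length=50, num_return_sequences=3)

for sample in samples:
    # Remove the identifying prefix before scoring, as described above.
    continuation = sample["generated_text"][len(prompt):]
    print(hate_detector(continuation))
```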
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some ethnicities score higher than the neutral baseline.

### Religion bias
With the same methodology as above, we generated 1,400 texts to assess bias across religion and gender vectors. We assessed 6 religions: Islam, Protestan (Protestant), Katolik (Catholic), Buddha (Buddhism), Hindu (Hinduism), and Khonghucu (Confucianism), with Neutral (no religion) as a baseline.
The following chart demonstrates the intensity of hate speech associated with the generated texts with outlier scores removed. Some religions score higher than the neutral baseline.

## Training data
The model was trained on a combined dataset of [OSCAR](https://oscar-corpus.com/), [mc4](https://huggingface.co/datasets/mc4)
and Wikipedia for the Indonesian language. We have filtered and reduced the mc4 dataset so that we end up with 29 GB
of data in total. The mc4 dataset was cleaned using [this filtering script](https://github.com/Wikidepia/indonesian_datasets/blob/master/dump/mc4/cleanup.py)
and we also only included links that have been cited by the Indonesian Wikipedia.
## Training procedure
The model was trained on a TPUv3-8 VM provided by the Google Cloud team. The training duration was `6d 3h 7m 26s`.
### Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| dataset | train loss | eval loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| ID OSCAR+mc4+Wikipedia (29GB) | 2.79 | 2.696 | 14.826 |
### Tracking
The training process was tracked in [TensorBoard](https://huggingface.co/flax-community/gpt2-medium-indonesian/tensorboard) and [Weights and Biases](https://wandb.ai/wandb/hf-flax-gpt2-indonesian?workspace=user-cahya).
## Team members
- Akmal ([@Wikidepia](https://huggingface.co/Wikidepia))
- alvinwatner ([@alvinwatner](https://huggingface.co/alvinwatner))
- Cahya Wirawan ([@cahya](https://huggingface.co/cahya))
- Galuh Sahid ([@Galuh](https://huggingface.co/Galuh))
- Muhammad Agung Hambali ([@AyameRushia](https://huggingface.co/AyameRushia))
- Muhammad Fhadli ([@muhammadfhadli](https://huggingface.co/muhammadfhadli))
- Samsul Rahmadani ([@munggok](https://huggingface.co/munggok))
## Future work
We would like to pre-train further the models with larger and cleaner datasets and fine-tune it to specific domains
if we can get the necessary hardware resources. |
flax-sentence-embeddings/multi-qa_v1-distilbert-mean_cos | 26ec1992576fb6821e1e66ada936860b0cbaa4fa | 2021-07-26T01:34:46.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"arxiv:2102.07033",
"arxiv:2104.08727",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/multi-qa_v1-distilbert-mean_cos | 22 | null | sentence-transformers | 8,030 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa_v1-distilbert-mean_cos
## Model Description
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model, trained with a Siamese network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to question/answer embedding similarity. For this model, mean pooling of hidden states is used as the sentence embedding.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-distilbert-mean_cos')
text = "Replace me by any question / answer you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
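Because the model is trained for question/answer similarity, a typical use is ranking candidate answers for a query. A minimal sketch (the example sentences are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-distilbert-mean_cos')

query = "How do I install Python packages?"
candidates = [
    "Use pip install followed by the package name.",
    "The Eiffel Tower is located in Paris.",
]

query_emb = model.encode(query, convert_to_tensor=True)
candidate_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity between the query and each candidate answer.
print(util.cos_sim(query_emb, candidate_embs))
```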
# Training procedure
## Pre-training
We use the pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss, using the true pairs as the targets.
### Hyper parameters
We trained the model on a TPU v3-8 for 80k steps using a batch size of 1024 (128 per TPU core).
We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We used the concatenation of multiple StackExchange question-answer datasets to fine-tune our model. MS MARCO, NQ and other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| SearchQA | - | 582,261 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | |
flax-sentence-embeddings/reddit_single-context_mpnet-base | 33651e090d1a29fc0e66e87603716ff5e15bb759 | 2021-07-26T01:36:18.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"en",
"arxiv:1904.06472",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/reddit_single-context_mpnet-base | 22 | 1 | sentence-transformers | 8,031 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---
# Model description
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
dataset of 700M sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for information retrieval, clustering or sentence
similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/reddit_single-context_mpnet-base')
text = "Replace me by any text you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base).
Please refer to the model card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss, using the true pairs as the targets.
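The snippet below is a rough sketch of this in-batch contrastive objective; it is illustrative only and omits details such as the exact similarity scaling used in the training script.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every positive in the batch.
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    scores = anchor_emb @ positive_emb.T * scale
    # The true pair for row i is column i; the other columns act as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```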
### Hyper parameters
We trained our model on a TPU v3-8 for 540k steps using a batch size of 1024 (128 per TPU core).
We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M sentences.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
We only use the first context response when building the dataset.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
|
fractalego/fewrel-zero-shot | 509e315f9894bf6df8624296cbc5c9e13bbea000 | 2021-08-13T11:22:55.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | fractalego | null | fractalego/fewrel-zero-shot | 22 | 4 | transformers | 8,032 | ## Introduction
This is a zero-shot relation extractor based on the paper [Exploring the zero-shot limit of FewRel](https://www.aclweb.org/anthology/2020.coling-main.124).
## Installation
```bash
$ pip install zero-shot-re
```
## Run the Extractor
```python
from transformers import AutoTokenizer
from zero_shot_re import RelTaggerModel, RelationExtractor
model = RelTaggerModel.from_pretrained("fractalego/fewrel-zero-shot")
tokenizer = AutoTokenizer.from_pretrained("fractalego/fewrel-zero-shot")
relations = ['noble title', 'founding date', 'occupation of a person']
extractor = RelationExtractor(model, tokenizer, relations)
ranked_rels = extractor.rank(text='John Smith received an OBE', head='John Smith', tail='OBE')
print(ranked_rels)
```
with results
```python3
[('noble title', 0.9690611883997917),
('occupation of a person', 0.0012609362602233887),
('founding date', 0.00024014711380004883)]
```
## Accuracy
The results reported in the paper are:
| Model | 0-shot 5-ways | 0-shot 10-ways |
|------------------------|--------------|----------------|
|(1) Distillbert |70.1±0.5 | 55.9±0.6 |
|(2) Bert Large |80.8±0.4 | 69.6±0.5 |
|(3) Distillbert + SQUAD |81.3±0.4 | 70.0±0.2 |
|(4) Bert Large + SQUAD |86.0±0.6 | 76.2±0.4 |
This version uses the (4) Bert Large + SQUAD model
## Cite as
```bibtex
@inproceedings{cetoli-2020-exploring,
title = "Exploring the zero-shot limit of {F}ew{R}el",
author = "Cetoli, Alberto",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.124",
doi = "10.18653/v1/2020.coling-main.124",
pages = "1447--1451",
abstract = "This paper proposes a general purpose relation extractor that uses Wikidata descriptions to represent the relation{'}s surface form. The results are tested on the FewRel 1.0 dataset, which provides an excellent framework for training and evaluating the proposed zero-shot learning system in English. This relation extractor architecture exploits the implicit knowledge of a language model through a question-answering approach.",
}
```
|
gilparmentier/pokemon_gptj_model | 03a8ea980eae1b67105a61cc11028d6f0ad55021 | 2022-01-31T21:19:06.000Z | [
"pytorch",
"gptj",
"text-generation",
"en",
"dataset:The Pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | gilparmentier | null | gilparmentier/pokemon_gptj_model | 22 | null | transformers | 8,033 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
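A short generation sketch is shown below; the prompt and sampling settings are arbitrary examples, not recommendations.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "Once upon a time"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a short continuation from the prompt.
output_ids = model.generate(input_ids, do_sample=True, max_new_tokens=40, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```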
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who have helped out one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend. |
google/t5-efficient-small | 661ddbe1d7b609351dbc14a1e49acbb595a21baa | 2022-02-15T10:51:02.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-small | 22 | 1 | transformers | 8,034 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL (Deep-Narrow version)
T5-Efficient-SMALL is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small** - is of model type **Small** with no variations.
It has **60.52** million parameters and thus requires *ca.* **242.08 MB** of memory in full precision (*fp32*)
or **121.04 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
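A minimal loading sketch is shown below; the input and target strings are placeholders, since the pretrained-only checkpoint is not expected to produce useful outputs before fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small")

# A quick forward pass to check that the checkpoint loads correctly.
inputs = tokenizer("summarize: a placeholder input", return_tensors="pt")
labels = tokenizer("a placeholder target", return_tensors="pt").input_ids
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```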
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
huggingtweets/asmallfiction | 2ef5093253a88dc5860c806a24d4dab5cbc0a9e2 | 2021-05-21T19:33:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/asmallfiction | 22 | null | transformers | 8,035 | ---
language: en
thumbnail: https://www.huggingtweets.com/asmallfiction/1616770285259/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/875394454449815552/FAzOLgVh_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">A Small Fiction 🤖 AI Bot </div>
<div style="font-size: 15px">@asmallfiction bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@asmallfiction's tweets](https://twitter.com/asmallfiction).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2034 |
| Retweets | 197 |
| Short tweets | 75 |
| Tweets kept | 1762 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7bib97vd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @asmallfiction's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3blkqco2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3blkqco2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/asmallfiction')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
it5/mt5-base-question-answering | 2220b747c65156a3569f940cf6249441b93ca4ad | 2022-03-09T07:57:29.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"squad_it",
"text2text-question-answering",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/mt5-base-question-answering | 22 | null | transformers | 8,036 | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- italian
- sequence-to-sequence
- squad_it
- text2text-question-answering
- text2text-generation
widget:
- text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"
- text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"
- text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"
- text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"
metrics:
- f1
- exact-match
model-index:
- name: mt5-base-question-answering
results:
- task:
type: question-answering
name: "Question Answering"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: f1
value: 0.757
name: "Test F1"
- type: exact-match
value: 0.663
name: "Test Exact Match"
co2_eq_emissions:
emissions: "40g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# mT5 Base for Question Answering ⁉️ 🇮🇹
This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/mt5-base-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-question-answering")
```
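With the autoclasses, generation follows the usual seq2seq pattern; the sketch below reuses a shortened version of one of the widget examples above, and the generation settings are arbitrary.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-question-answering")

text = (
    "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. "
    "Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica. "
    "Domanda: Che cosa può fare rubisco per errore?"
)
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```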
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
josephgatto/paint_doctor_speaker_identification | 24eb623bb8b3870efeed315e157d44f6d72e43f2 | 2021-11-01T23:29:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | josephgatto | null | josephgatto/paint_doctor_speaker_identification | 22 | null | transformers | 8,037 | This model is a bert for sequence classification model fine-tuned on the MedDialogue dataset. Basically, the task is just to predict if a given sentence in the corpus was spoken by the patient or doctor. |
malay-huggingface/t5-tiny-bahasa-cased | 223aa52af6551946c0a9e7bb637c776e8d6a64fd | 2021-09-05T13:02:47.000Z | [
"pytorch",
"t5",
"feature-extraction",
"ms",
"transformers"
] | feature-extraction | false | malay-huggingface | null | malay-huggingface/t5-tiny-bahasa-cased | 22 | null | transformers | 8,038 | ---
language: ms
---
# t5-tiny-bahasa-cased
Pretrained T5 tiny language model for Malay.
## Pretraining Corpus
The `t5-tiny-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
Preparing steps can reproduce at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on v3-8 TPU.
- All steps can reproduce from here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-tiny-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Output is,
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity. |
morenolq/distilbart-bbc | c919ddbd924938e1abd2432552237b123321d631 | 2021-12-29T14:43:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | morenolq | null | morenolq/distilbart-bbc | 22 | 2 | transformers | 8,039 | This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the BBC News Summary dataset (https://www.kaggle.com/pariza/bbc-news-summary).
The model has been generated as part of the in-lab practice of **Deep NLP course** currently held at Politecnico di Torino.
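A minimal usage sketch with the `summarization` pipeline is shown below; the input text and generation settings are placeholders.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="morenolq/distilbart-bbc")

article = "Replace this with a BBC-style news article to summarize."
print(summarizer(article, max_length=100, min_length=20, do_sample=False))
```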
Training parameters:
- `num_train_epochs=2`
- `fp16=True`
- `per_device_train_batch_size=1`
- `warmup_steps=10`
- `weight_decay=0.01`
- `max_seq_length=100` |
mrm8488/bert-medium-wrslb-finetuned-squadv1 | 8a5696507a219939816688fa40c8ced8c1413889 | 2021-05-20T00:25:31.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/bert-medium-wrslb-finetuned-squadv1 | 22 | null | transformers | 8,040 | Entry not found |
mrm8488/bert-spanish-cased-finetuned-pos-syntax | a6adb2b94c4120043a6bdc3493a711b6795b57ae | 2021-05-20T00:37:26.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/bert-spanish-cased-finetuned-pos-syntax | 22 | null | transformers | 8,041 | ---
language: es
thumbnail:
---
# Spanish BERT (BETO) + Syntax POS tagging ✍🏷
This model is a fine-tuned version of the Spanish BERT [(BETO)](https://github.com/dccuchile/beto) on Spanish **syntax** annotations in [CONLL CORPORA](https://www.kaggle.com/nltkdata/conll-corpora) dataset for **syntax POS** (Part of Speech tagging) downstream task.
## Details of the downstream task (Syntax POS) - Dataset
- [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora)
#### [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py)
#### 21 Syntax annotations (Labels) covered:
- \_
- ATR
- ATR.d
- CAG
- CC
- CD
- CD.Q
- CI
- CPRED
- CPRED.CD
- CPRED.SUJ
- CREG
- ET
- IMPERS
- MOD
- NEG
- PASS
- PUNC
- ROOT
- SUJ
- VOC
## Metrics on test set 📋
| Metric | # score |
| :-------: | :-------: |
| F1 | **89.27** |
| Precision | **89.44** |
| Recall | **89.11** |
## Model in action 🔨
Fast usage with **pipelines** 🧪
```python
from transformers import pipeline
nlp_pos_syntax = pipeline(
"ner",
model="mrm8488/bert-spanish-cased-finetuned-pos-syntax",
tokenizer="mrm8488/bert-spanish-cased-finetuned-pos-syntax"
)
text = 'Mis amigos están pensando viajar a Londres este verano.'
results = nlp_pos_syntax(text)
results[1:-1]
```
```json
[
{ "entity": "_", "score": 0.9999216794967651, "word": "Mis" },
{ "entity": "SUJ", "score": 0.999882698059082, "word": "amigos" },
{ "entity": "_", "score": 0.9998869299888611, "word": "están" },
{ "entity": "ROOT", "score": 0.9980518221855164, "word": "pensando" },
{ "entity": "_", "score": 0.9998420476913452, "word": "viajar" },
{ "entity": "CD", "score": 0.999351978302002, "word": "a" },
{ "entity": "_", "score": 0.999959409236908, "word": "Londres" },
{ "entity": "_", "score": 0.9998968839645386, "word": "este" },
{ "entity": "CC", "score": 0.99931401014328, "word": "verano" },
{ "entity": "PUNC", "score": 0.9998534917831421, "word": "." }
]
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/codebert2codebert-finetuned-code-defect-detection | 565d44b32c550aca63efcf980231c4af5f6e673c | 2021-06-14T17:17:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mrm8488 | null | mrm8488/codebert2codebert-finetuned-code-defect-detection | 22 | 1 | transformers | 8,042 | Entry not found |
mudes/multilingual-base | 61aeae16e83b822cb7695f6d5696246cd1b5fe47 | 2021-05-07T16:27:58.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"multilingual",
"arxiv:2102.09665",
"arxiv:2104.04630",
"transformers",
"mudes",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | mudes | null | mudes/multilingual-base | 22 | null | transformers | 8,043 | ---
language: multilingual
tags:
- mudes
license: apache-2.0
---
# MUDES - {Mu}ltilingual {De}tection of Offensive {S}pans
We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630).
## Usage
You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed:
```bash
pip install mudes
```
Then you can use the model like this:
```python
from mudes.app.mudes_app import MUDESApp
app = MUDESApp("multilingual-base", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```
## System Demonstration
An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out in [here](http://rgcl.wlv.ac.uk/mudes/).
## Citing & Authors
If you find this model helpful, feel free to cite our publications
```bibtex
@inproceedings{ranasinghemudes,
title={{MUDES: Multilingual Detection of Offensive Spans}},
author={Tharindu Ranasinghe and Marcos Zampieri},
booktitle={Proceedings of NAACL},
year={2021}
}
```
```bibtex
@inproceedings{ranasinghe2021semeval,
title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
booktitle={Proceedings of SemEval},
year={2021}
}
``` |
navteca/all-mpnet-base-v2 | d8c0f0aa479ac7e550a1d16cfda17fb23bcbad91 | 2022-03-18T11:28:42.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:mit"
] | sentence-similarity | false | navteca | null | navteca/all-mpnet-base-v2 | 22 | null | sentence-transformers | 8,044 | ---
language: en
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- sentence-transformers
---
# All MPNet base model (v2) for Semantic Search
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with true pairs.
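The snippet below is a minimal, self-contained sketch of this in-batch (multiple-negatives) contrastive loss; the batch size and scaling factor are illustrative assumptions rather than the exact training configuration.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, scale=20.0):
    # anchors, positives: (batch_size, dim) sentence embeddings, L2-normalized
    scores = anchors @ positives.T * scale                        # cosine similarity of every pair in the batch
    labels = torch.arange(scores.size(0), device=scores.device)   # the true pair sits on the diagonal
    return F.cross_entropy(scores, labels)

# toy usage with random embeddings
anchors = F.normalize(torch.randn(32, 768), dim=-1)
positives = F.normalize(torch.randn(32, 768), dim=-1)
print(in_batch_contrastive_loss(anchors, positives))
```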
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
ncduy/bert-base-uncased-finetuned-swag | 7a0c8f25efad921b664e72893afbc997ca3f4bd6 | 2021-08-07T05:28:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | multiple-choice | false | ncduy | null | ncduy/bert-base-uncased-finetuned-swag | 22 | null | transformers | 8,045 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
model_index:
- name: bert-base-uncased-finetuned-swag
results:
- dataset:
name: swag
type: swag
args: regular
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6189
- eval_accuracy: 0.7647
- eval_runtime: 274.5502
- eval_samples_per_second: 72.868
- eval_steps_per_second: 4.557
- epoch: 1.0
- step: 4597
## Model description
More information needed
## Intended uses & limitations
More information needed
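As a starting point, here is a minimal SWAG-style multiple-choice inference sketch; the context and candidate endings are made up for illustration.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "ncduy/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

context = "She picks up the guitar and sits down on the stage."
endings = [
    "She begins to play a quiet melody.",
    "She throws the guitar into the audience.",
]

# pair the context with every candidate ending, then add a batch dimension
encoding = tokenizer([context] * len(endings), endings, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits

print("Most plausible ending:", endings[logits.argmax(-1).item()])
```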
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
openai/imagegpt-large | 6d413391888f5d7a1d7d903a46a19ad25ba7b9f2 | 2022-06-30T06:46:29.000Z | [
"pytorch",
"imagegpt",
"dataset:imagenet-21k",
"transformers",
"vision",
"license:apache-2.0"
] | null | false | openai | null | openai/imagegpt-large | 22 | 1 | transformers | 8,046 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
---
# ImageGPT (large-sized model)
ImageGPT (iGPT) model pre-trained on ImageNet ILSVRC 2012 (14 million images, 21,843 classes) at resolution 32x32. It was introduced in the paper [Generative Pretraining from Pixels](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) by Chen et al. and first released in [this repository](https://github.com/openai/image-gpt). See also the official [blog post](https://openai.com/blog/image-gpt/).
Disclaimer: The team releasing ImageGPT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ImageGPT (iGPT) is a transformer decoder model (GPT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 32x32 pixels.
The goal for the model is simply to predict the next pixel value, given the previous ones.
By pre-training the model, it learns an inner representation of images that can then be used to:
- extract features useful for downstream tasks: one can either use ImageGPT to produce fixed image features, in order to train a linear model (like a sklearn logistic regression model or SVM). This is also referred to as "linear probing".
- perform (un)conditional image generation.
## Intended uses & limitations
You can use the raw model for feature extraction or (un)conditional image generation. See the [model hub](https://huggingface.co/models?search=openai/imagegpt) to look for all ImageGPT variants.
### How to use
Here is how to use this model in PyTorch to perform unconditional image generation:
```python
from transformers import ImageGPTFeatureExtractor, ImageGPTForCausalImageModeling
import torch
import matplotlib.pyplot as plt
import numpy as np
feature_extractor = ImageGPTFeatureExtractor.from_pretrained('openai/imagegpt-large')
model = ImageGPTForCausalImageModeling.from_pretrained('openai/imagegpt-large')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# unconditional generation of 8 images
batch_size = 8
context = torch.full((batch_size, 1), model.config.vocab_size - 1) #initialize with SOS token
context = context.to(device)
output = model.generate(pixel_values=context, max_length=model.config.n_positions + 1, temperature=1.0, do_sample=True, top_k=40)
clusters = feature_extractor.clusters
n_px = feature_extractor.size
samples = output[:,1:].cpu().detach().numpy()
samples_img = [np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [n_px, n_px, 3]).astype(np.uint8) for s in samples] # convert color cluster tokens back to pixels
f, axes = plt.subplots(1, batch_size, dpi=300)
for img, ax in zip(samples_img, axes):
ax.axis('off')
ax.imshow(img)
```
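For the "linear probing" use case mentioned above, the following is a minimal feature-extraction sketch; mean-pooling the last hidden state is just one reasonable pooling choice (the paper pools representations from intermediate layers).
```python
from transformers import ImageGPTFeatureExtractor, ImageGPTModel
from PIL import Image
import requests
import torch

feature_extractor = ImageGPTFeatureExtractor.from_pretrained('openai/imagegpt-large')
model = ImageGPTModel.from_pretrained('openai/imagegpt-large')

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, sequence, hidden)

features = hidden_states.mean(dim=1)  # pooled features, e.g. for a sklearn linear classifier
```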
## Training data
The ImageGPT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
Images are first resized/rescaled to the same resolution (32x32) and normalized across the RGB channels. Next, color-clustering is performed. This means that every pixel is turned into one of 512 possible cluster values. This way, one ends up with a sequence of 32x32 = 1024 pixel values, rather than 32x32x3 = 3072, which is prohibitively large for Transformer-based models.
### Pretraining
Training details can be found in section 3.4 of v2 of the paper.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to the original paper.
### BibTeX entry and citation info
```bibtex
@InProceedings{pmlr-v119-chen20s,
title = {Generative Pretraining From Pixels},
author = {Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeffrey and Jun, Heewoo and Luan, David and Sutskever, Ilya},
booktitle = {Proceedings of the 37th International Conference on Machine Learning},
pages = {1691--1703},
year = {2020},
editor = {III, Hal Daumé and Singh, Aarti},
volume = {119},
series = {Proceedings of Machine Learning Research},
month = {13--18 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v119/chen20s/chen20s.pdf},
url = {https://proceedings.mlr.press/v119/chen20s.html
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
pere/norwegian-gptneo-red | 7f3e1a6252793027ce697e6d8282826669550058 | 2021-09-25T18:43:08.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | pere | null | pere/norwegian-gptneo-red | 22 | null | transformers | 8,047 | Entry not found |
safsaf/poemAR | cd7d54b8e381c538a54bf1c31d138fb1c094db79 | 2021-05-23T12:19:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | safsaf | null | safsaf/poemAR | 22 | null | transformers | 8,048 | Entry not found |
secometo/mt5-base-turkish-question-paraphrase-generator | 354ff4650759c412bfd70c7c01af7919a79b8e5a | 2021-09-11T17:14:52.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | secometo | null | secometo/mt5-base-turkish-question-paraphrase-generator | 22 | 2 | transformers | 8,049 | # Turkish-question-paraphrase-generator
mT5-based pre-trained model to generate question paraphrases in the Turkish language.
## Acknowledgement
In this project, which we undertook as a BLM3010 Computer Project at Yildiz Technical University, our goal was to conduct research on Turkish in an area that has not been studied much. In the process, we compared models trained with different algorithms, developed a dataset, and shared it by writing an article about our model. We would like to thank our mentor and teacher, <a href="https://github.com/mfatihamasyali">Mehmet Fatih Amasyali</a>, who has always supported us on this path.
## Usage
```python
import transformers, sentencepiece
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("secometo/mt5-base-turkish-question-paraphrase-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("secometo/mt5-base-turkish-question-paraphrase-generator")
original_sentence = tokenizer.encode_plus("Ülke genelinde bisiklet kullanımının artması hakkında ne düşünüyorsun?", return_tensors='pt')
paraphrased_sentences = model.generate(original_sentence['input_ids'], max_length=150, num_return_sequences=5, num_beams=5)
tokenizer.batch_decode(paraphrased_sentences, skip_special_tokens=True)
```
## Input
```
Ülke genelinde bisiklet kullanımının artması hakkında ne düşünüyorsun?
```
## Outputs
```
['ülke genelinde bisiklet kullanımının artması ile ilgili düşünceniz nedir?',
'ülke genelinde bisiklet kullanımının artması hakkında düşünceniz nedir?',
'ülke genelinde bisiklet kullanımının artması için ne düşünüyorsunuz?',
'ülke genelinde bisiklet kullanımının artması hakkında ne düşünüyorsunuz?',
'ülke genelinde bisiklet kullanımının artması hakkında fikirleriniz nedir?']
```
## Dataset
We used 50,994 question sentence pairs, which were created manually, to train our model. The dataset was provided by our mentor. Sentences were extracted from the titles of topics on the popular Turkish forum website donanimhaber.com. We augmented the dataset by writing ten thousand sentences per person.
## Authors & Citation
<a href="https://github.com/metinbinbir">Metin Binbir</a></br>
<a href="https://github.com/sercaksoy">Sercan Aksoy</a>
```
Metin Binbir, Sercan Aksoy, Paraphrase Generation for Turkish Language, Computer Project, Advisor: Mehmet Fatih Amasyali, Yildiz Technical University, Computer Engineering Dept. , Istanbul, Turkey , 2021.
``` |
seduerr/pai_wikisplit | 4f1e5d28043a52c5c9d78b876003b849d7360d64 | 2021-07-21T12:19:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_wikisplit | 22 | null | transformers | 8,050 | Entry not found |
tartuNLP/gpt-4-est-base | 8608e761cb925a38e0976d640c3c2a07f65af504 | 2022-03-10T10:06:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | tartuNLP | null | tartuNLP/gpt-4-est-base | 22 | null | transformers | 8,051 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt-4-est-base
results: []
widget:
- text: ">wiki< mis on GPT? Vastus:"
---
# gpt-4-est-base
This is GPT for Estonian. Not GPT-4 :-) This is the base-size [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2) model, trained from scratch on 2.2 billion words (Estonian National Corpus + News Crawl + Common Crawl) for 3 epochs.
[Colab demo](https://colab.research.google.com/drive/1Bp7mGEQ1vmyqXPyXHV1yj68cRZEi2mq4?usp=sharing)
### Format
For training, the data was prepended with a text domain tag, and it should be added as a prefix when using the model: >general<, >web<, >news<, >doaj< and >wiki< (standing for general texts, web-crawled texts, news, article abstracts and Wikipedia texts). Use the prefixes like this, e.g.: ">web< Kas tead, et".
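A minimal generation sketch with a domain prefix is shown below; the prompt and sampling settings are illustrative only.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="tartuNLP/gpt-4-est-base")

# the ">wiki<" prefix marks the Wikipedia text domain
print(generator(">wiki< Eesti keel on", max_length=50, do_sample=True, top_p=0.95))
```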
### Model details
- num. of layers: 12
- num. of heads: 12
- embedding size: 768
- context size: 1024
- total size: 118.68M params
Further details to be added soon.
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
textattack/xlnet-base-cased-imdb | 694be93d2ad4c287b622b9bccf8cd240bb1be92a | 2020-07-06T16:35:25.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
] | text-generation | false | textattack | null | textattack/xlnet-base-cased-imdb | 22 | null | transformers | 8,052 | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 512.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.95352, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
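A minimal usage sketch, assuming the checkpoint loads as a sequence-classification model as described above; the review text is made up, and label names (e.g. `LABEL_0`/`LABEL_1`) depend on the checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/xlnet-base-cased-imdb")

print(classifier("A beautifully shot film with a script that never quite comes together."))
```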
|
w11wo/sundanese-roberta-base | 14f10ed3b408ad3ab1abddee70c9734e9a86d07f | 2022-02-26T13:14:48.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"su",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar",
"dataset:wikipedia",
"arxiv:1907.11692",
"transformers",
"sundanese-roberta-base",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | w11wo | null | w11wo/sundanese-roberta-base | 22 | 2 | transformers | 8,053 | ---
language: su
tags:
- sundanese-roberta-base
license: mit
datasets:
- mc4
- cc100
- oscar
- wikipedia
widget:
- text: "Budi nuju <mask> di sakola."
---
## Sundanese RoBERTa Base
Sundanese RoBERTa Base is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on four datasets: [OSCAR](https://hf.co/datasets/oscar)'s `unshuffled_deduplicated_su` subset, the Sundanese [mC4](https://hf.co/datasets/mc4) subset, the Sundanese [CC100](https://hf.co/datasets/cc100) subset, and Sundanese [Wikipedia](https://su.wikipedia.org/).
10% of the dataset is kept for evaluation purposes. The model was trained from scratch and achieved an evaluation loss of 1.952 and an evaluation accuracy of 63.98%.
This model was trained using HuggingFace's Flax framework. All necessary scripts used for training could be found in the [Files and versions](https://hf.co/w11wo/sundanese-roberta-base/tree/main) tab, as well as the [Training metrics](https://hf.co/w11wo/sundanese-roberta-base/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------ | ------- | ------- | ------------------------------------- |
| `sundanese-roberta-base` | 124M | RoBERTa | OSCAR, mC4, CC100, Wikipedia (758 MB) |
## Evaluation Results
The model was trained for 50 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid accuracy | total time |
| ---------- | ---------- | -------------- | ---------- |
| 1.965 | 1.952 | 0.6398 | 6:24:51 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-roberta-base"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Budi nuju <mask> di sakola.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/sundanese-roberta-base"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Budi nuju diajar di sakola."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases present in all four datasets, which may be carried over into the results of this model.
## Author
Sundanese RoBERTa Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/).
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
warwickai/fin-perceiver | 1ee8539cb5cee43c188ddaf94a854e73e4c6e75c | 2022-01-07T11:36:13.000Z | [
"pytorch",
"perceiver",
"text-classification",
"en",
"dataset:financial_phrasebank",
"transformers",
"financial-sentiment-analysis",
"sentiment-analysis",
"language-perceiver",
"license:apache-2.0",
"model-index"
] | text-classification | false | warwickai | null | warwickai/fin-perceiver | 22 | 3 | transformers | 8,054 | ---
language: "en"
license: apache-2.0
tags:
- financial-sentiment-analysis
- sentiment-analysis
- language-perceiver
datasets:
- financial_phrasebank
widget:
- text: "INDEX100 fell sharply today."
- text: "ImaginaryJetCo bookings hit by Omicron variant as losses total £1bn."
- text: "Q1 ImaginaryGame's earnings beat expectations."
- text: "Should we buy IMAGINARYSTOCK today?"
metrics:
- recall
- f1
- accuracy
- precision
model-index:
- name: fin-perceiver
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Accuracy
type: accuracy
value: 0.8624
- name: F1
type: f1
value: 0.8416
args: macro
- name: Precision
type: precision
value: 0.8438
args: macro
- name: Recall
type: recall
value: 0.8415
args: macro
---
# FINPerceiver
FINPerceiver is a fine-tuned Perceiver IO language model for financial sentiment analysis.
More details on the training process of this model are available on the [GitHub repository](https://github.com/warwickai/fin-perceiver).
Weights & Biases was used to track experiments.
We achieved the following results with 10-fold cross validation.
```
eval/accuracy 0.8624 (stdev 0.01922)
eval/f1 0.8416 (stdev 0.03738)
eval/loss 0.4314 (stdev 0.05295)
eval/precision 0.8438 (stdev 0.02938)
eval/recall 0.8415 (stdev 0.04458)
```
The hyperparameters used are as follows.
```
per_device_train_batch_size 16
per_device_eval_batch_size 16
num_train_epochs 4
learning_rate 2e-5
```
## Datasets
This model was trained on the Financial PhraseBank (>= 50% agreement)
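## Usage
A minimal sketch, assuming the checkpoint loads through the standard `text-classification` pipeline; label names come from the model config and follow the Financial PhraseBank convention (negative / neutral / positive).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="warwickai/fin-perceiver")

print(classifier("Q1 ImaginaryGame's earnings beat expectations."))
```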
|
yangheng/deberta-v3-base-absa | 84dd631427e6762fb6ce8ae194483195648f088b | 2022-03-19T01:06:27.000Z | [
"pytorch",
"deberta-v2",
"en",
"dataset:laptop14 (w/ augmentation)",
"dataset:restaurant14 (w/ augmentation)",
"dataset:restaurant16 (w/ augmentation)",
"dataset:ACL-Twitter (w/ augmentation)",
"dataset:MAMS (w/ augmentation)",
"dataset:Television (w/ augmentation)",
"dataset:TShirt (w/ augmentation)",
"dataset:Yelp (w/ augmentation)",
"arxiv:2110.08604",
"transformers",
"aspect-based-sentiment-analysis",
"lcf-bert",
"license:mit"
] | null | false | yangheng | null | yangheng/deberta-v3-base-absa | 22 | 1 | transformers | 8,055 | ---
language:
- en
tags:
- aspect-based-sentiment-analysis
- lcf-bert
license: mit
datasets:
- laptop14 (w/ augmentation)
- restaurant14 (w/ augmentation)
- restaurant16 (w/ augmentation)
- ACL-Twitter (w/ augmentation)
- MAMS (w/ augmentation)
- Television (w/ augmentation)
- TShirt (w/ augmentation)
- Yelp (w/ augmentation)
metrics:
- accuracy
- macro-f1
---
# Note
This model is trained with 180k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets (except for the Rest15 dataset!).
# DeBERTa for aspect-based sentiment analysis
The `deberta-v3-base-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
## Training Model
This model is trained based on the FAST-LCF-BERT model with `microsoft/deberta-v3-base`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA).
To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).
## Usage
```python3
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa")
model = AutoModel.from_pretrained("yangheng/deberta-v3-base-absa")
inputs = tokenizer("good product especially video and audio quality fantastic.", return_tensors="pt")
outputs = model(**inputs)
```
## Example in PyABSA
An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LCF-BERT on PyABSA datasets.
## Datasets
This model is fine-tuned with 180k examples for the ABSA dataset (including augmented data). Training dataset files:
```
loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/laptop14/0.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/1.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/2.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/3.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/0.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/1.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/2.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/3.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/0.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/1.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/2.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/3.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/0.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/1.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/2.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/3.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
loading: integrated_datasets/apc_datasets/MAMS/0.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/1.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/2.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/3.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
loading: integrated_datasets/apc_datasets/Television/0.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/1.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/2.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/3.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
loading: integrated_datasets/apc_datasets/TShirt/0.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/1.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/2.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/3.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
loading: integrated_datasets/apc_datasets/Yelp/0.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/1.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/2.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/3.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
```
If you use this model in your research, please cite our paper:
```
@article{YangZMT21,
author = {Heng Yang and
Biqing Zeng and
Mayi Xu and
Tianxing Wang},
title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
Sentiment Dependency Learning},
journal = {CoRR},
volume = {abs/2110.08604},
year = {2021},
url = {https://arxiv.org/abs/2110.08604},
eprinttype = {arXiv},
eprint = {2110.08604},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
aaraki/bert-base-uncased-finetuned-swag | 55d09ad21446d90cd4d4b82973a9635bede29c5a | 2022-03-18T08:16:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | aaraki | null | aaraki/bert-base-uncased-finetuned-swag | 22 | null | transformers | 8,056 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Accuracy: 0.8002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6904 | 1.0 | 4597 | 0.5155 | 0.8002 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Aleksandar1932/gpt-neo-125M-spanish-classics | e6b8298ae82de343453705931a6715bec914abe1 | 2022-03-19T19:59:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt-neo-125M-spanish-classics | 22 | null | transformers | 8,057 | Entry not found |
dhlee347/distilbert-imdb | 9a3bad591668f8a8a85aa86d2d8259075011e699 | 2022-03-28T14:07:15.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dhlee347 | null | dhlee347/distilbert-imdb | 22 | null | transformers | 8,058 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1796
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
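As a starting point, here is a minimal inference sketch; the review text is made up, and label names come from the model config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "dhlee347/distilbert-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("This movie was a complete waste of time.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

predicted = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted], round(probs[0, predicted].item(), 4))
```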
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2808 | 1.0 | 782 | 0.1796 | 0.9302 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
samayash/finetuning-financial-news-sentiment | 33f36ac9eefe95b141724a1b254956f4dc7df030 | 2022-03-30T03:36:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | samayash | null | samayash/finetuning-financial-news-sentiment | 22 | null | transformers | 8,059 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-financial-news-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-financial-news-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8751
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
arampacha/gpt-neo-therapist-small | 29f3c56461d5e17428e7e665b8fb1f05dd4f1942 | 2022-03-31T20:34:26.000Z | [
"pytorch",
"tensorboard",
"onnx",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | arampacha | null | arampacha/gpt-neo-therapist-small | 22 | 1 | transformers | 8,060 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gpt-neo-therapist-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-therapist-small
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6731
- Rouge1: 39.5028
- Rouge2: 6.43
- Rougel: 24.0091
- Rougelsum: 35.4481
- Gen Len: 204.1329
## Model description
More information needed
## Intended uses & limitations
More information needed
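As a starting point, here is a minimal text-generation sketch; the prompt and sampling settings are illustrative only, and generated text should not be treated as professional advice.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="arampacha/gpt-neo-therapist-small")

prompt = "I have been feeling anxious about work lately."
output = generator(prompt, max_length=100, do_sample=True, top_p=0.92, temperature=0.8)
print(output[0]["generated_text"])
```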
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 24
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 9.9955 | 0.97 | 7 | 6.8195 | 18.6047 | 1.0194 | 14.8565 | 17.9774 | 212.0983 |
| 6.9729 | 1.97 | 14 | 5.6783 | 26.3789 | 3.0779 | 18.5195 | 24.8592 | 203.0925 |
| 5.2614 | 2.97 | 21 | 5.0506 | 34.9428 | 4.921 | 21.9741 | 32.1122 | 206.2775 |
| 5.0599 | 3.97 | 28 | 4.7372 | 38.5235 | 6.2251 | 23.5923 | 34.5633 | 204.2428 |
| 4.5479 | 4.97 | 35 | 4.6731 | 39.5028 | 6.43 | 24.0091 | 35.4481 | 204.1329 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
manu/lilt-camembert-dit-base | 74b9e14ae4a055c3c5ca7d5f5e3ee9772b87fd80 | 2022-04-02T16:48:05.000Z | [
"pytorch",
"liltrobertalike",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | manu | null | manu/lilt-camembert-dit-base | 22 | null | transformers | 8,061 | Entry not found |
facebook/regnet-y-10b-seer-in1k | 2e2409ed91657ca8a34fff921b16c8893d64153e | 2022-06-30T10:24:15.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-10b-seer-in1k | 22 | 1 | transformers | 8,062 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
## RegNetY 10B
This gigantic model is a scaled-up [RegNetY](https://arxiv.org/abs/2003.13678) model, trained on one billion random images and later fine-tuned on ImageNet.
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
ELiRF/mbart-large-cc25-dacsa-ca | d83ad1450ce1ea3c7c98f1e4691b9ede773dc9e7 | 2022-07-11T17:33:48.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"ca",
"arxiv:2001.08210",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | ELiRF | null | ELiRF/mbart-large-cc25-dacsa-ca | 22 | null | transformers | 8,063 | ---
language: ca
tags:
- summarization
widget:
- text: "La Universitat Politècnica de València (UPV), a través del projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al 10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del Festival Dansa València i explora la figura geomètrica del cercle des de totes les seues perspectives: espacial, corporal i compositiva. No és amor està inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les diferents facetes d'una obsessió. Així dona cabuda a la insistència, la repetició, el trastorn, la hipnosi i l'alliberament. El procés de digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex respecte a uns altres ja realitzats a causa de l'enorme desafiament que comporta el modelatge en 3D de cossos en moviment al ritme de la composició de l'obra. L'objectiu era generar una experiència el més realista possible i fidedigna de l'original perquè el resultat final fora un procés absolutament immersiu.Així, el metaverso està compost per figures modelades en 3D al costat de quatre projeccions digitalitzades en pantalles flotants amb les quals l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els comandaments de l'ordinador, bé a través d'ulleres de realitat virtual. L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga la sensació d'una immersió quasi completa en fondre's amb el contingut audiovisual que li genere una experiència intimista i molt real."
---
# mBART (large-cc25 model), fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset for Catalan
The mBART model was presented in [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. The large-cc25 version of the mBART model is pre-trained in 25 languages, including English, Spanish, Italian, and other ones.
# Model description
The mBART-large-cc25 model has been fine-tuned for abstractive text summarization for Catalan.
# Training data
The mBART-large-cc25 model has been fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)*, specifically with the Catalan articles. The Catalan subset contains 636,596 document-summary pairs of Catalan news articles.
The DACSA dataset can be requested at the following address: https://xarrador.dsic.upv.es/resources/dacsa
# Intended uses & limitations
The model can be used for text summarization, especially in news articles.
# How to use
You can use the summarization model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ELiRF/mbart-large-cc25-dacsa-ca")
ARTICLE = """La Universitat Politècnica de València (UPV), a través del
projecte Atenea “plataforma de dones, art i tecnologia” i en col·laboració amb
les companyies tecnològiques Metric Salad i Zetalab, ha digitalitzat i modelat
en 3D per a la 35a edició del Festival Dansa València, que se celebra del 2 al
10 d'abril, la primera peça de dansa en un metaverso específic. La peça No és
amor, dirigida per Lara Misó, forma part de la programació d'aquesta edició del
Festival Dansa València i explora la figura geomètrica del cercle des de totes
les seues perspectives: espacial, corporal i compositiva. No és amor està
inspirada en el treball de l'artista japonesa Yayoi Kusama i mira de prop les
diferents facetes d'una obsessió. Així dona cabuda a la insistència, la
repetició, el trastorn, la hipnosi i l'alliberament. El procés de
digitalització, materialitzat per Metric Salad i ZetaLab, ha sigut complex
respecte a uns altres ja realitzats a causa de l'enorme desafiament que
comporta el modelatge en 3D de cossos en moviment al ritme de la composició de
l'obra. L'objectiu era generar una experiència el més realista possible i
fidedigna de l'original perquè el resultat final fora un procés absolutament
immersiu.Així, el metaverso està compost per figures modelades en 3D al costat
de quatre projeccions digitalitzades en pantalles flotants amb les quals
l'usuari podrà interactuar segons es vaja acostant, bé mitjançant els
comandaments de l'ordinador, bé a través d'ulleres de realitat virtual.
L'objectiu és que quan l'usuari s'acoste a cadascuna de les projeccions tinga
la sensació d'una immersió quasi completa en fondre's amb el contingut
audiovisual que li genere una experiència intimista i molt real.
"""
print(summarizer(ARTICLE, truncation=True))
>>>[{'summary_text': "La Universitat Politècnica de València ha digitalitzat i modelat en 3D la primera peça de dansa en un metaverso específic."}]
```
### BibTeX entry
```bibtex
@inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
}
``` |
ELiRF/mt5-base-dacsa-es | 5536eaf4eb1f6dd2347dffa9693599c5a25052cf | 2022-07-11T17:33:03.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"arxiv:2010.11934",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | ELiRF | null | ELiRF/mt5-base-dacsa-es | 22 | null | transformers | 8,064 | ---
language: es
tags:
- summarization
widget:
- text: "La Universitat Politècnica de València (UPV), a través del proyecto Atenea “plataforma de mujeres, arte y tecnología” y en colaboración con las compañías tecnológicas Metric Salad y Zetalab, ha digitalizado y modelado en 3D para la 35ª edición del Festival Dansa València, que se celebra del 2 al 10 de abril, la primera pieza de danza en un metaverso específico.La pieza No es amor, dirigida por Lara Misó, forma parte de la programación de esta edición del Festival Dansa València y explora la figura geométrica del círculo desde todas sus perspectivas: espacial, corporal y compositiva. No es amor está inspirada en el trabajo de la artista japonesa Yayoi Kusama y mira de cerca las diferentes facetas de una obsesión. Así da cabida a la insistencia, la repetición, el trastorno, la hipnosis y la liberación. El proceso de digitalización, materializado por Metric Salad y ZetaLab, ha sido complejo respecto a otros ya realizados debido al enorme desafío que conlleva el modelado en 3D de cuerpos en movimiento al ritmo de la composición de la obra. El objetivo era generar una experiencia lo más realista posible y fidedigna de la original para que el resultado final fuera un proceso absolutamente inmersivo. Así, el metaverso está compuesto por figuras modeladas en 3D junto a cuatro proyecciones digitalizadas en pantallas flotantes con las que el usuario podrá interactuar según se vaya acercando, bien mediante los comandos del ordenador, bien a través de gafas de realidad virtual. El objetivo es que cuando el usuario se acerque a cada una de las proyecciones tenga la sensación de una inmersión casi completa al fundirse con el contenido audiovisual que le genere una experiencia intimista y muy real."
---
# mT5 (base model), fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)* dataset for Spanish
The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. The base version of the mT5 model is pre-trained on 101 languages, including English, Spanish, Italian, Catalan and many others.
# Model description
The mT5-base model has been fine-tuned for abstractive text summarization for Spanish.
# Training data
The mT5-base model has been fine-tuned on the *Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA)*, specifically on the Spanish articles. The Spanish subset contains 1,802,919 document-summary pairs of Spanish news articles.
The DACSA dataset can be requested at the following address: https://xarrador.dsic.upv.es/resources/dacsa
# Intended uses & limitations
The model can be used for text summarization, especially in news articles.
# How to use
You can use the summarization model with the pipeline API:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ELiRF/mt5-base-dacsa-es")
ARTICLE = """La Universitat Politècnica de València (UPV), a través del
proyecto Atenea “plataforma de mujeres, arte y tecnología” y en colaboración
con las compañías tecnológicas Metric Salad y Zetalab, ha digitalizado y
modelado en 3D para la 35ª edición del Festival Dansa València, que se celebra
del 2 al 10 de abril, la primera pieza de danza en un metaverso específico.La
pieza No es amor, dirigida por Lara Misó, forma parte de la programación de
esta edición del Festival Dansa València y explora la figura geométrica del
círculo desde todas sus perspectivas: espacial, corporal y compositiva. No es
amor está inspirada en el trabajo de la artista japonesa Yayoi Kusama y mira de
cerca las diferentes facetas de una obsesión. Así da cabida a la insistencia,
la repetición, el trastorno, la hipnosis y la liberación. El proceso de
digitalización, materializado por Metric Salad y ZetaLab, ha sido complejo
respecto a otros ya realizados debido al enorme desafío que conlleva el
modelado en 3D de cuerpos en movimiento al ritmo de la composición de la obra.
El objetivo era generar una experiencia lo más realista posible y fidedigna de
la original para que el resultado final fuera un proceso absolutamente
inmersivo. Así, el metaverso está compuesto por figuras modeladas en 3D junto a
cuatro proyecciones digitalizadas en pantallas flotantes con las que el usuario
podrá interactuar según se vaya acercando, bien mediante los comandos del
ordenador, bien a través de gafas de realidad virtual. El objetivo es que
cuando el usuario se acerque a cada una de las proyecciones tenga la sensación
de una inmersión casi completa al fundirse con el contenido audiovisual que le
genere una experiencia intimista y muy real.
"""
print(summarizer(ARTICLE, truncation=True))
>>>[{'summary_text': "La Universitat Politècnica de València ha digitalizado y modelado en 3D para la 35a edición del Festival Dansa València, que se celebra del 2 al 10 de abril."}]
```
### BibTeX entry
```bibtex
@inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
}
``` |
domenicrosati/t5-small-finetuned-contradiction | c50fd83ddf2a8e80962a24d556b5a8a79b943f70 | 2022-04-28T03:07:30.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:snli",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | domenicrosati | null | domenicrosati/t5-small-finetuned-contradiction | 22 | 1 | transformers | 8,065 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- snli
metrics:
- rouge
model-index:
- name: t5-small-finetuned-contradiction
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: snli
type: snli
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 34.4237
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-contradiction
This model is a fine-tuned version of [domenicrosati/t5-small-finetuned-contradiction](https://huggingface.co/domenicrosati/t5-small-finetuned-contradiction) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0458
- Rouge1: 34.4237
- Rouge2: 14.5442
- Rougel: 32.5483
- Rougelsum: 32.5785
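A minimal loading sketch, assuming the checkpoint is used through the generic `text2text-generation` pipeline; the input premise below is a hypothetical example, since the expected input format is not documented in this card.

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint with the generic text2text pipeline.
generator = pipeline(
    "text2text-generation",
    model="domenicrosati/t5-small-finetuned-contradiction",
)

# Hypothetical premise; the model was fine-tuned on SNLI pairs.
print(generator("A soccer game with multiple males playing.", max_length=64))
```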
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8605 | 1.0 | 2863 | 2.0813 | 34.4597 | 14.5186 | 32.6909 | 32.7097 |
| 1.9209 | 2.0 | 5726 | 2.0721 | 34.3859 | 14.5733 | 32.5188 | 32.5524 |
| 1.9367 | 3.0 | 8589 | 2.0623 | 34.4192 | 14.455 | 32.581 | 32.5962 |
| 1.9539 | 4.0 | 11452 | 2.0565 | 34.5148 | 14.6131 | 32.6786 | 32.7174 |
| 1.9655 | 5.0 | 14315 | 2.0538 | 34.4393 | 14.6439 | 32.6344 | 32.6587 |
| 1.9683 | 6.0 | 17178 | 2.0493 | 34.7199 | 14.7763 | 32.8625 | 32.8782 |
| 1.9735 | 7.0 | 20041 | 2.0476 | 34.5366 | 14.6362 | 32.6939 | 32.7177 |
| 1.98 | 8.0 | 22904 | 2.0458 | 34.5 | 14.5695 | 32.6219 | 32.6478 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
manueltonneau/bert-twitter-pt-job-offer | daff02cbaa19b04abb1b13fcd21051d0808e4195 | 2022-04-27T10:17:18.000Z | [
"pytorch",
"bert",
"text-classification",
"pt",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-pt-job-offer | 22 | null | transformers | 8,066 | ---
language: pt
widget:
- text: "VAGA - Assistente Comercial - São Paulo; Interessados mandar currículo"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Offer (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` finetuned to recognize Portuguese tweets containing a job offer. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with:
- the positive class referring to tweets containing a job offer (label=1)
- the negative class referring to all other tweets (label=0)
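A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the example tweet is the widget example above, and the output labels follow the 1 = job offer, 0 = other convention described here.

```python
from transformers import pipeline

# Binary classifier: label 1 = tweet contains a job offer, label 0 = other tweets
classifier = pipeline(
    "text-classification",
    model="manueltonneau/bert-twitter-pt-job-offer",
)

# Example tweet taken from the widget above
print(classifier("VAGA - Assistente Comercial - São Paulo; Interessados mandar currículo"))
```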
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
gunghio/xlm-roberta-base-finetuned-panx-ner | 79ae76eb12734fb288abc879a31a2187c28cc4c6 | 2022-05-17T21:41:03.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"it",
"en",
"de",
"fr",
"es",
"dataset:xtreme",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | gunghio | null | gunghio/xlm-roberta-base-finetuned-panx-ner | 22 | null | transformers | 8,067 | ---
license:
- mit
datasets:
- xtreme
language:
- it
- en
- de
- fr
- es
metrics:
- precision: 0.874
- recall: 0.880
- f1: 0.877
- accuracy: 0.943
inference:
parameters:
aggregation_strategy: "first"
---
# gunghio/xlm-roberta-base-finetuned-panx-ner
This model was trained starting from xlm-roberta-base on a subset of xtreme dataset.
The `xtreme` dataset subsets used are PAN-X.{lang}. The languages used for training/validation are Italian, English, German, French and Spanish.
Only 75% of the whole dataset was used.
## Intended uses & limitations
The fine-tuned model can be used for Named Entity Recognition in it, en, de, fr, and es.
## Training and evaluation data
Training dataset: [xtreme](https://huggingface.co/datasets/xtreme)
### Training results
It achieves the following results on the evaluation set:
- Precision: 0.8744154472771157
- Recall: 0.8791424269015351
- F1: 0.8767725659462058
- Accuracy: 0.9432040948504613
Details:
| Label | Precision | Recall | F1-Score | Support |
|---------|-----------|--------|----------|---------|
| PER | 0.922 | 0.908 | 0.915 | 26639 |
| LOC | 0.880 | 0.906 | 0.892 | 37623 |
| ORG | 0.821 | 0.816 | 0.818 | 28045 |
| Overall | 0.874 | 0.879 | 0.877 | 92307 |
## Usage
Set the aggregation strategy according to the [documentation](https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/pipelines#transformers.TokenClassificationPipeline).
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("gunghio/xlm-roberta-base-finetuned-panx-ner")
model = AutoModelForTokenClassification.from_pretrained("gunghio/xlm-roberta-base-finetuned-panx-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
|
doc2query/msmarco-spanish-mt5-base-v1 | 5ae42776dd7e22705003b3c89f0b0f21011c218b | 2022-04-29T12:11:59.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-spanish-mt5-base-v1 | 22 | 1 | transformers | 8,068 | ---
language: es
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros.2 Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."
license: apache-2.0
---
# doc2query/msmarco-spanish-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-spanish-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros.2 Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
patrickvonplaten/wav2vec2-conformer-rel-pos-large-960h-ft-4-gram | 931a3bcd042a3fcb047ab260c2e998d0ec9fb3a6 | 2022-05-24T11:10:15.000Z | [
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-conformer-rel-pos-large-960h-ft-4-gram | 22 | null | transformers | 8,069 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-conformer-rel-pos-large-960h-ft-4-gram
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.54
---
# Wav2Vec2-Conformer-Large-960h with Relative Position Embeddings + 4-gram
This model is identical to [Facebook's wav2vec2-conformer-rel-pos-large-960h-ft](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large-960h-ft), but is
augmented with an English 4-gram language model. The `4-gram.arpa.gz` from [Librispeech's official ngrams](https://www.openslr.org/11) is used.
## Evaluation
This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-conformer-rel-pos-large-960h-ft-4-gram** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torch
from jiwer import wer
model_id = "patrickvonplaten/wav2vec2-conformer-rel-pos-large-960h-ft-4-gram"
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = AutoModelForCTC.from_pretrained(model_id).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
inputs = {k: v.to("cuda") for k,v in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy()).text[0]
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print(wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.94 | 3.54 | |
HiTZ/A2T_RoBERTa_SMFA_TACRED-re | 923cbb4e9d82065e828f15667464974c56214f5f | 2022-05-08T23:09:49.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"arxiv:2104.14690",
"arxiv:2203.13602",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | HiTZ | null | HiTZ/A2T_RoBERTa_SMFA_TACRED-re | 22 | null | transformers | 8,070 | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
  - `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
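A minimal sketch of the zero-shot route mentioned above, assuming the standard `zero-shot-classification` pipeline; the example sentence, candidate labels and hypothesis template are illustrative assumptions, not taken from the original papers.

```python
from transformers import pipeline

# A2T entailment checkpoints are NLI models, so they plug into the
# standard zero-shot classification pipeline.
classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_TACRED-re",
)

sequence = "Billy Mays, the bearded, boisterous pitchman, died at his home in Tampa."
candidate_labels = [
    "The person died in a place.",
    "The person was born in a place.",
    "The person has no relation to the place.",
]

# The candidate labels are already verbalized sentences, so an identity template is used.
print(classifier(sequence, candidate_labels, hypothesis_template="{}"))
```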
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
laituan245/molt5-large-caption2smiles | 9e82786751c0ea421b9252f632e763934dfbe8c2 | 2022-05-03T18:08:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | laituan245 | null | laituan245/molt5-large-caption2smiles | 22 | null | transformers | 8,071 | ---
license: apache-2.0
---
This model can be used to generate a SMILES string from an input caption.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-large-caption2smiles", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large-caption2smiles')
input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
ml4pubmed/xtremedistil-l12-h384-uncased_pub_section | db9cc5d15345bfc80b95effe5a2b6e3a1b58f41d | 2022-06-22T12:29:07.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:pubmed",
"transformers",
"document sections",
"sentence classification",
"document classification",
"medical",
"health",
"biomedical"
] | text-classification | false | ml4pubmed | null | ml4pubmed/xtremedistil-l12-h384-uncased_pub_section | 22 | null | transformers | 8,072 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# xtremedistil-l12-h384-uncased_pub_section
- original model file name: textclassifer_xtremedistil-l12-h384-uncased_pubmed_20k
- This is a fine-tuned checkpoint of `microsoft/xtremedistil-l12-h384-uncased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```
from transformers import pipeline
model_tag = "ml4pubmed/xtremedistil-l12-h384-uncased_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
## metadata
### training_parameters
- date_run: Apr-24-2022_t-12
- huggingface_tag: microsoft/xtremedistil-l12-h384-uncased
|
fabiochiu/t5-base-medium-title-generation | 384c8158ed22e75bf7ceff0a8b43f8ae01e124b6 | 2022-05-23T13:39:22.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | fabiochiu | null | fabiochiu/t5-base-medium-title-generation | 22 | 1 | transformers | 8,073 | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-medium-title-generation
results: []
widget:
- text: "summarize: Many financial institutions started building conversational AI, prior to the Covid19 pandemic, as part of a digital transformation initiative. These initial solutions were high profile, highly personalized virtual assistants — like the Erica chatbot from Bank of America. As the pandemic hit, the need changed as contact centers were under increased pressures. As Cathal McGloin of ServisBOT explains in 'how it started, and how it is going,' financial institutions were looking for ways to automate solutions to help get back to 'normal' levels of customer service. This resulted in a change from the 'future of conversational AI' to a real tactical assistant that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend. Banks were originally looking to conversational AI as part of digital transformation to keep up with the times. However, with the pandemic, it has been more about customer retention and customer satisfaction. In addition, new use cases came about as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita Kumar of Deloitte points out, banks were dealing with an influx of calls about new concerns, like questions around the Paycheck Protection Program (PPP) loans. This resulted in an increase in volume, without enough agents to assist customers, and tipped the scale to incorporate conversational AI. When choosing initial use cases to support, financial institutions often start with high volume, low complexity tasks. For example, password resets, checking account balances, or checking the status of a transaction, as Vinita points out. From there, the use cases can evolve as the banks get more mature in developing conversational AI, and as the customers become more engaged with the solutions. Cathal indicates another good way for banks to start is looking at use cases that are a pain point, and also do not require a lot of IT support. Some financial institutions may have a multi-year technology roadmap, which can make it harder to get a new service started. A simple chatbot for document collection in an onboarding process can result in high engagement, and a high return on investment. For example, Cathal has a banking customer that implemented a chatbot to capture a driver’s license to be used in the verification process of adding an additional user to an account — it has over 85% engagement with high satisfaction. An interesting use case Haritha discovered involved educating customers on financial matters. People feel more comfortable asking a chatbot what might be considered a 'dumb' question, as the chatbot is less judgmental. Users can be more ambiguous with their questions as well, not knowing the right words to use, as chatbot can help narrow things down."
example_title: "Banking on Bots"
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Model description
This model is [t5-base](https://huggingface.co/t5-base) fine-tuned on the [190k Medium Articles](https://www.kaggle.com/datasets/fabiochiusano/medium-articles) dataset for predicting article titles using the article textual content as input.
There are two versions of the model:
- [t5-small-medium-title-generation](https://huggingface.co/fabiochiu/t5-small-medium-title-generation): trained from [t5-small](https://huggingface.co/t5-small).
- [t5-base-medium-title-generation](https://huggingface.co/fabiochiu/t5-base-medium-title-generation): trained from [t5-base](https://huggingface.co/t5-base).
Visit the [title-generation space](https://huggingface.co/spaces/fabiochiu/title-generation) to try the model with different text generation parameters.
# How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import nltk
nltk.download('punkt')
tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-small-medium-title-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-small-medium-title-generation")
text = """
Many financial institutions started building conversational AI, prior to the Covid19
pandemic, as part of a digital transformation initiative. These initial solutions
were high profile, highly personalized virtual assistants — like the Erica chatbot
from Bank of America. As the pandemic hit, the need changed as contact centers were
under increased pressures. As Cathal McGloin of ServisBOT explains in “how it started,
and how it is going,” financial institutions were looking for ways to automate
solutions to help get back to “normal” levels of customer service. This resulted
in a change from the “future of conversational AI” to a real tactical assistant
that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend.
Banks were originally looking to conversational AI as part of digital transformation
to keep up with the times. However, with the pandemic, it has been more about
customer retention and customer satisfaction. In addition, new use cases came about
as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita
Kumar of Deloitte points out, banks were dealing with an influx of calls about new
concerns, like questions around the Paycheck Protection Program (PPP) loans. This
resulted in an increase in volume, without enough agents to assist customers, and
tipped the scale to incorporate conversational AI. When choosing initial use cases
to support, financial institutions often start with high volume, low complexity
tasks. For example, password resets, checking account balances, or checking the
status of a transaction, as Vinita points out. From there, the use cases can evolve
as the banks get more mature in developing conversational AI, and as the customers
become more engaged with the solutions. Cathal indicates another good way for banks
to start is looking at use cases that are a pain point, and also do not require a
lot of IT support. Some financial institutions may have a multi-year technology
roadmap, which can make it harder to get a new service started. A simple chatbot
for document collection in an onboarding process can result in high engagement,
and a high return on investment. For example, Cathal has a banking customer that
implemented a chatbot to capture a driver’s license to be used in the verification
process of adding an additional user to an account — it has over 85% engagement
with high satisfaction. An interesting use case Haritha discovered involved
educating customers on financial matters. People feel more comfortable asking a
chatbot what might be considered a “dumb” question, as the chatbot is less judgmental.
Users can be more ambiguous with their questions as well, not knowing the right
words to use, as chatbot can help narrow things down.
"""
inputs = ["summarize: " + text]
inputs = tokenizer(inputs, max_length=512, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=64)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
predicted_title = nltk.sent_tokenize(decoded_output.strip())[0]
print(predicted_title)
# Conversational AI: The Future of Customer Service
```
## Training and evaluation data
The model has been trained on a single epoch spanning about 16000 articles, evaluating on 1000 random articles not used during training.
### Training results
The model has been evaluated on a random dataset split of 1000 articles not used during training and validation.
- Rouge-1: 37.9%
- Rouge-2: 24.4%
- Rouge-L: 35.9%
- Rouge-Lsum: 35.9%
- Average length of the generated titles: 13 tokens (about 9 English words)
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AlekseyKorshuk/opt-1.3b | 5b21e55674fbd694bf081c46e25d5ba4bb729ae5 | 2022-05-05T18:32:57.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/opt-1.3b | 22 | null | transformers | 8,074 | ---
license: apache-2.0
---
|
charlieoneill/distilbert-base-uncased-gradient-clinic | 0697c8d2dd06d799d262e214ec44b6c851a9a533 | 2022-07-10T19:52:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | charlieoneill | null | charlieoneill/distilbert-base-uncased-gradient-clinic | 22 | 1 | transformers | 8,075 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-gradient-clinic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-gradient-clinic
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2601
## Model description
More information needed
## Intended uses & limitations
More information needed
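A minimal sketch, assuming the checkpoint is used for extractive question answering via the standard pipeline; the question/context pair below is hypothetical.

```python
from transformers import pipeline

# Extractive QA with the fine-tuned DistilBERT checkpoint.
qa = pipeline(
    "question-answering",
    model="charlieoneill/distilbert-base-uncased-gradient-clinic",
)

# Hypothetical clinical-style question and context.
result = qa(
    question="What dose was prescribed?",
    context="The patient was started on 5 mg of prednisolone daily.",
)
print(result["answer"], result["score"])
```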
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 24 | 0.8576 |
| No log | 2.0 | 48 | 0.3439 |
| No log | 3.0 | 72 | 0.2807 |
| No log | 4.0 | 96 | 0.2653 |
| No log | 5.0 | 120 | 0.2601 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.2
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ccdv/lsg-bart-base-4096-arxiv | 2d42dd7455cf480313f1a90639d33a619e8d15b9 | 2022-07-25T05:30:44.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:scientific_papers",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-4096-arxiv | 22 | null | transformers | 8,076 | ---
language:
- en
tags:
- summarization
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-arxiv", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-arxiv
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [scientific_papers arxiv](https://huggingface.co/datasets/scientific_papers) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connections | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 46.65 | 18.91 | 26.90 | 42.18 |
| 4096 | Local | 128 | 0 | 384 | 46.18 | 18.57 | 26.71 | 41.69 |
| 4096 | Pooling | 128 | 4 | 644 | 46.27 | 18.68 | 26.87 | 41.82 |
| 4096 | Stride | 128 | 4 | 644 | 46.34 | 18.64 | 26.69 | 41.87 |
| 4096 | Block Stride | 128 | 4 | 644 | 46.23 | 18.62 | 26.62 | 41.80 |
| 4096 | Norm | 128 | 4 | 644 | 45.96 | 18.46 | 26.52 | 41.51 |
| 4096 | LSH | 128 | 4 | 644 | 46.19 | 18.72 | 26.89 | 41.76 |
With smaller block sizes (lower resources):
| Length | Sparse Type | Block Size | Sparsity | Connections | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 44.71 | 17.53 | 26.03 | 40.23 |
| 4096 | Local | 32 | 0 | 96 | 39.67 | 14.34 | 23.81 | 35.00 |
| 4096 | Pooling | 32 | 4 | 160 | 42.75 | 16.34 | 25.20 | 38.23 |
| 4096 | Stride | 32 | 4 | 160 | 44.23 | 17.21 | 25.71 | 39.72 |
| 4096 | Block Stride | 32 | 4 | 160 | 44.15 | 17.10 | 25.68 | 39.60 |
| 4096 | Norm | 32 | 4 | 160 | 42.02 | 15.65 | 24.56 | 37.45 |
| 4096 | LSH | 32 | 4 | 160 | 42.58 | 16.21 | 25.10 | 38.04 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about ~145 million parameters (6 encoder layers - 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only) and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: scientific_papers
- dataset_config_name: arxiv
- eval_batch_size: 8
- eval_samples: 6440
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 320
- min_length: 32
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
lucifermorninstar011/autotrain-lucifer_job_title_comb-858027260 | 83b2a074f52b32ac673f8c271d3ef3ff402df5b2 | 2022-05-12T08:58:09.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:lucifermorninstar011/autotrain-data-lucifer_job_title_comb",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | lucifermorninstar011 | null | lucifermorninstar011/autotrain-lucifer_job_title_comb-858027260 | 22 | null | transformers | 8,077 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucifermorninstar011/autotrain-data-lucifer_job_title_comb
co2_eq_emissions: 66.24310525675156
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 858027260
- CO2 Emissions (in grams): 66.24310525675156
## Validation Metrics
- Loss: 0.007201952859759331
- Accuracy: 0.9978341245706535
- Precision: 0.9558986154730835
- Recall: 0.9205537806176783
- F1: 0.9378933206024044
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-lucifer_job_title_comb-858027260
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lucifermorninstar011/autotrain-lucifer_job_title_comb-858027260", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-lucifer_job_title_comb-858027260", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
charsiu/g2p_multilingual_byT5_tiny_8_layers | 0f580d3f3948030c59fcdfcdc14afbba2ef78616 | 2022-05-19T05:03:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | charsiu | null | charsiu/g2p_multilingual_byT5_tiny_8_layers | 22 | null | transformers | 8,078 | Entry not found |
ENM/scibert_scivocab_cased-new-finetuned-breastcancer | 5d180369ef6636b6225bba9031beeed047ed39e8 | 2022-05-26T02:28:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | ENM | null | ENM/scibert_scivocab_cased-new-finetuned-breastcancer | 22 | null | transformers | 8,079 | ---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_cased-new-finetuned-breastcancer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_cased-new-finetuned-breastcancer
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 3.1340 |
| No log | 2.0 | 80 | 1.6044 |
| No log | 3.0 | 120 | 1.2439 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Abderrahim2/bert-finetuned-Location | 44a97b66a5f90853edf7a494e415cb1467297d77 | 2022-06-01T20:18:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Abderrahim2 | null | Abderrahim2/bert-finetuned-Location | 22 | null | transformers | 8,080 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-Location
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-Location
This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5462
- F1: 0.8167
- Roc Auc: 0.8624
- Accuracy: 0.8133
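A minimal sketch, assuming the standard text-classification pipeline; the label set is not documented here and the French example sentence is hypothetical.

```python
from transformers import pipeline

# Text classification with the fine-tuned French BERT checkpoint.
classifier = pipeline(
    "text-classification",
    model="Abderrahim2/bert-finetuned-Location",
    return_all_scores=True,  # the F1/ROC-AUC metrics above suggest a multi-label setup
)

# Hypothetical French input, since the base model is a French BERT.
print(classifier("Le séisme a été ressenti à Marrakech et dans plusieurs villes voisines."))
```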
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4229 | 1.0 | 742 | 0.3615 | 0.7402 | 0.8014 | 0.6900 |
| 0.3722 | 2.0 | 1484 | 0.3103 | 0.7906 | 0.8416 | 0.7796 |
| 0.262 | 3.0 | 2226 | 0.3364 | 0.8135 | 0.8600 | 0.8100 |
| 0.2239 | 4.0 | 2968 | 0.4593 | 0.8085 | 0.8561 | 0.8066 |
| 0.1461 | 5.0 | 3710 | 0.5534 | 0.7923 | 0.8440 | 0.7904 |
| 0.1333 | 6.0 | 4452 | 0.5462 | 0.8167 | 0.8624 | 0.8133 |
| 0.0667 | 7.0 | 5194 | 0.6298 | 0.7972 | 0.8479 | 0.7958 |
| 0.0616 | 8.0 | 5936 | 0.6362 | 0.8075 | 0.8556 | 0.8059 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
wawaup/MengziT5-Comment | 764e561e5d89d715b391f2c5df261834edb950f1 | 2022-06-19T16:59:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"zh",
"dataset:TencentKuaibao",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | wawaup | null | wawaup/MengziT5-Comment | 22 | null | transformers | 8,081 | ---
language:
- zh
license: apache-2.0
datasets:
- TencentKuaibao
metrics:
- bleu
- rouge
---
## Model
- A news comment generation model based on the Chinese [MengziT5](https://huggingface.co/Langboat/mengzi-t5-base)
- The dataset comes from the paper [Coherent Comment Generation for Chinese Articles with a Graph-to-Sequence Model](https://github.com/lancopku/Graph-to-seq-comment-generation)
## Generating comments
- The online inference API only generates a single comment; by adjusting the `model.generate()` parameters, the model can generate multiple different comments
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

t5_tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base")
model = T5ForConditionalGeneration.from_pretrained("wawaup/MengziT5-Comment")
def generate_comment(input_ids,cnt_num):
outputs = model.generate(input_ids,
max_length=128,
do_sample=True,
temperature=0.9,
early_stopping=True,
repetition_penalty=10.0,
top_p=0.5,
num_return_sequences=cnt_num)
print(outputs)
preds_cleaned = [t5_tokenizer.decode(ids, skip_special_tokens=True,
clean_up_tokenization_spaces=True) for ids in outputs]
print(preds_cleaned)
return preds_cleaned
``` |
yanekyuk/berturk-keyword-extractor | 0af06543592681df8c7ca889fd99ed85b3d55d45 | 2022-06-04T01:57:03.000Z | [
"pytorch",
"bert",
"token-classification",
"tr",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | yanekyuk | null | yanekyuk/berturk-keyword-extractor | 22 | null | transformers | 8,082 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- tr
widget:
- text: "İngiltere'de düzenlenen Avrupa Tekvando ve Para Tekvando Şampiyonası’nda millî tekvandocular 5 altın, 2 gümüş ve 4 bronz olmak üzere 11, millî para tekvandocular ise 4 altın, 3 gümüş ve 1 bronz olmak üzere 8 madalya kazanarak takım halinde Avrupa şampiyonu oldu."
- text: "Füme somon dedik ama aslında lox salamuralanmış somon anlamına geliyor, füme etme opsiyonel. Lox bagel, 1930'larda Eggs Benedict furyasında New Yorklu Yahudi cemaati tarafından koşer bir alternatif olarak çıkan bir lezzet. Günümüzde benim hangover yüreğim dâhil dünyanın birçok yerinde enfes bir kahvaltı sandviçi."
- text: "Türkiye'de son aylarda sıklıkla tartışılan konut satışı karşılığında yabancılara vatandaşlık verilmesi konusunu beyin göçü kapsamında ele almak mümkün. Daha önce 250 bin dolar olan vatandaşlık bedeli yükselen tepkiler üzerine 400 bin dolara çıkarılmıştı. Türkiye'den göç eden iyi eğitimli kişilerin , gittikleri ülkelerde 250 bin dolar tutarında yabancı yatırıma denk olduğu göz önüne alındığında nitelikli insan gücünün yabancılara konut karşılığında satılan vatandaşlık bedelin eş olduğunu görüyoruz. Yurt dışına giden her bir vatandaşın yüksek teknolojili katma değer üreten sektörlere yapacağı katkılar göz önünde bulundurulduğunda bu açığın inşaat sektörüyle kapatıldığını da görüyoruz. Beyin göçü konusunda sadece ekonomik perspektiften bakıldığında bile kısa vadeli döviz kaynağı yaratmak için kullanılan vatandaşlık satışı yerine beyin göçünü önleyecek önlemler alınmasının ülkemize çok daha faydalı olacağı sonucunu çıkarıyoruz."
- text: "Türkiye’de resmî verilere göre, 15 ve daha yukarı yaştaki kişilerde mevsim etkisinden arındırılmış işsiz sayısı, bu yılın ilk çeyreğinde bir önceki çeyreğe göre 50 bin kişi artarak 3 milyon 845 bin kişi oldu. Mevsim etkisinden arındırılmış işsizlik oranı ise 0,1 puanlık artışla %11,4 seviyesinde gerçekleşti. İşsizlik oranı, ilk çeyrekte geçen yılın aynı çeyreğine göre 1,7 puan azaldı."
- text: "Boeing’in insansız uzay aracı Starliner, birtakım sorunlara rağmen Uluslararası Uzay İstasyonuna (ISS) ulaşarak ilk kez başarılı bir şekilde kenetlendi. Aracın ISS’te beş gün kalmasını takiben sorunsuz bir şekilde New Mexico’ya inmesi halinde Boeing, sonbaharda astronotları yörüngeye göndermek için Starliner’ı kullanabilir.\n\nNeden önemli? NASA’nın personal aracı üretmeyi durdurmasından kaynaklı olarak görevli astronotlar ve kozmonotlar, ISS’te Rusya’nın ürettiği uzay araçları ile taşınıyordu. Starliner’ın kendini kanıtlaması ise bu konuda Rusya’ya olan bağımlılığın potansiyel olarak ortadan kalkabileceği anlamına geliyor."
model-index:
- name: berturk-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berturk-keyword-extractor
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4306
- Precision: 0.6770
- Recall: 0.6899
- Accuracy: 0.9169
- F1: 0.6834
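A minimal usage sketch, assuming keyword extraction is run as token classification through the standard pipeline; the Turkish sentence is taken from the widget examples above.

```python
from transformers import pipeline

# Keyword extraction as token classification; aggregation groups sub-word
# pieces into keyword spans.
extractor = pipeline(
    "token-classification",
    model="yanekyuk/berturk-keyword-extractor",
    aggregation_strategy="simple",
)

text = ("Türkiye'de son aylarda sıklıkla tartışılan konut satışı karşılığında "
        "yabancılara vatandaşlık verilmesi konusunu beyin göçü kapsamında ele almak mümkün.")
print(extractor(text))
```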
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1845 | 1.0 | 1875 | 0.1964 | 0.6380 | 0.6743 | 0.9164 | 0.6557 |
| 0.1338 | 2.0 | 3750 | 0.2023 | 0.6407 | 0.7081 | 0.9169 | 0.6727 |
| 0.0978 | 3.0 | 5625 | 0.2315 | 0.6434 | 0.7309 | 0.9159 | 0.6844 |
| 0.0742 | 4.0 | 7500 | 0.2746 | 0.6592 | 0.7144 | 0.9158 | 0.6857 |
| 0.0541 | 5.0 | 9375 | 0.3290 | 0.6700 | 0.6880 | 0.9161 | 0.6789 |
| 0.0426 | 6.0 | 11250 | 0.3608 | 0.6789 | 0.6860 | 0.9171 | 0.6824 |
| 0.0332 | 7.0 | 13125 | 0.4075 | 0.6769 | 0.6924 | 0.9168 | 0.6845 |
| 0.027 | 8.0 | 15000 | 0.4306 | 0.6770 | 0.6899 | 0.9169 | 0.6834 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
SalamaThanks/SalamaThanksTransformer_fil2en_v3 | 11ba7a1ed0a678682b42b5990d5722ba93cad4b1 | 2022-06-06T11:39:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SalamaThanks | null | SalamaThanks/SalamaThanksTransformer_fil2en_v3 | 22 | null | transformers | 8,083 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Modified-BioBERT-512 | c5f649d58c915803a5aa4dcea32ead7e5b98f8ef | 2022-06-16T06:52:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Modified-BioBERT-512 | 22 | null | transformers | 8,084 | Entry not found |
Matthijs/mobilenet_v1_1.0_224 | c173be55f634ff127231454809e51ed145debed4 | 2022-06-22T12:49:25.000Z | [
"pytorch",
"mobilenet_v1",
"dataset:imagenet-1k",
"arxiv:1704.04861",
"transformers",
"vision",
"image-classification",
"license:other"
] | image-classification | false | Matthijs | null | Matthijs/mobilenet_v1_1.0_224 | 22 | null | transformers | 8,085 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V1
MobileNet V1 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al., and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).
Disclaimer: The team releasing MobileNet V1 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV1FeatureExtractor, MobileNetV1ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV1FeatureExtractor.from_pretrained("Matthijs/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("Matthijs/mobilenet_v1_1.0_224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
|
RogerKam/roberta_RCADE_fine_tuned_sentiment_covid_news | 316f4824833b3134278171a287b713644dc2e9f8 | 2022-06-22T19:15:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | RogerKam | null | RogerKam/roberta_RCADE_fine_tuned_sentiment_covid_news | 22 | null | transformers | 8,086 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_RCADE_fine_tuned_sentiment_covid_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_RCADE_fine_tuned_sentiment_covid_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1662
- Accuracy: 0.9700
- F1 Score: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
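As a usage sketch only (the returned label names depend on the undocumented training data):
```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint as a classifier for
# COVID-related news text; label semantics are not documented here.
classifier = pipeline(
    "text-classification",
    model="RogerKam/roberta_RCADE_fine_tuned_sentiment_covid_news",
)

print(classifier("Vaccination rates continue to rise as new case numbers fall."))
```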
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kktoto/4L_weight_decay | ee4399240df6faf18c4d96f791c03e42cda69396 | 2022-06-23T04:49:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | kktoto | null | kktoto/4L_weight_decay | 22 | null | transformers | 8,087 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 4L_weight_decay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4L_weight_decay
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1312
- Precision: 0.7006
- Recall: 0.6863
- F1: 0.6934
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
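Since neither the task nor the label set is documented, the following is only a sketch for inspecting the checkpoint's raw token-level predictions:
```python
from transformers import pipeline

# Sketch only: the label set is undocumented, so this just prints the raw
# token-level predictions returned by the checkpoint.
tagger = pipeline("token-classification", model="kktoto/4L_weight_decay")
print(tagger("this is a short example sentence"))
```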
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.157 | 1.0 | 5561 | 0.1464 | 0.6943 | 0.6153 | 0.6524 | 0.9465 |
| 0.1454 | 2.0 | 11122 | 0.1396 | 0.6921 | 0.6491 | 0.6699 | 0.9486 |
| 0.1414 | 3.0 | 16683 | 0.1372 | 0.6841 | 0.6746 | 0.6793 | 0.9492 |
| 0.1335 | 4.0 | 22244 | 0.1339 | 0.6997 | 0.6617 | 0.6802 | 0.9505 |
| 0.1308 | 5.0 | 27805 | 0.1339 | 0.6963 | 0.6763 | 0.6862 | 0.9510 |
| 0.1285 | 6.0 | 33366 | 0.1320 | 0.7102 | 0.6639 | 0.6863 | 0.9519 |
| 0.1257 | 7.0 | 38927 | 0.1306 | 0.7031 | 0.6771 | 0.6898 | 0.9521 |
| 0.1222 | 8.0 | 44488 | 0.1324 | 0.7005 | 0.6836 | 0.6919 | 0.9522 |
| 0.1207 | 9.0 | 50049 | 0.1313 | 0.7017 | 0.6832 | 0.6923 | 0.9524 |
| 0.1195 | 10.0 | 55610 | 0.1312 | 0.7006 | 0.6863 | 0.6934 | 0.9524 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
inigopm/beto-base-spanish-squades2 | d62b153b4371b57695293e3298662e5197833652 | 2022-06-26T14:48:20.000Z | [
"pytorch",
"bert",
"question-answering",
"es",
"dataset:squad_es",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | inigopm | null | inigopm/beto-base-spanish-squades2 | 22 | 1 | transformers | 8,088 | ---
language:
- es
tags:
- question-answering
datasets:
- squad_es
metrics:
- f1
- em
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: beto-base-spanish-squades2
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: question-answering # Optional. Example: Speech Recognition
dataset:
type: squad_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: squad_es v2.0.0 # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: f1
value: 62.70
name: f1
- type: exact match
value: 54.60
name: exact match
---
The model has been trained on the second version of the [SQuAD_es](https://huggingface.co/datasets/squad_es) database, a question-answering dataset automatically translated from SQuAD into Spanish. This version includes the possibility that the answer does not exist within the context.
The pretrained base model is ["dccuchile/bert-base-spanish-wwm-cased"](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), also known as BETO, which was pretrained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
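A minimal usage sketch (the question and context strings below are illustrative only):
```python
from transformers import pipeline

# Sketch only: Spanish extractive question answering with this checkpoint.
qa = pipeline("question-answering", model="inigopm/beto-base-spanish-squades2")

result = qa(
    question="¿Sobre qué corpus fue preentrenado BETO?",
    context="BETO fue preentrenado sobre un gran corpus en español.",
)
print(result["answer"], round(result["score"], 3))
```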
**METRICS**
**F1:** 62.70
**EM:** 54.60
|
Maha/xlmtwtroberta_label2 | 818090795586a3ac5cfd76222f22efdbe8af8e7c | 2022-06-27T10:06:47.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Maha | null | Maha/xlmtwtroberta_label2 | 22 | null | transformers | 8,089 | Entry not found |
jvanz/querido_diario_autoencoder | e0f8d41da93f0a50708257138b9cecbb4c949eb0 | 2022-07-01T11:53:59.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"pt",
"dataset:jvanz/querido_diario",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jvanz | null | jvanz/querido_diario_autoencoder | 22 | null | transformers | 8,090 | ---
language:
- pt
datasets:
- jvanz/querido_diario
---
# Querido Diario Autoencoder
Autoencoder based on Portuguese BERT using the Querido Diario dataset |
Matthijs/mobilenet_v2_1.4_224 | bff5e2944e1ea36bb715cc34e69b1fb837cb5aad | 2022-06-28T12:50:41.000Z | [
"pytorch",
"coreml",
"mobilenet_v2",
"dataset:imagenet-1k",
"arxiv:1801.04381",
"transformers",
"vision",
"image-classification",
"license:other"
] | image-classification | false | Matthijs | null | Matthijs/mobilenet_v2_1.4_224 | 22 | null | transformers | 8,091 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileNet V2
MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/mobilenet_v2_1.4_224")
model = MobileNetV2ForImageClassification.from_pretrained("Matthijs/mobilenet_v2_1.4_224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
### BibTeX entry and citation info
```bibtex
@inproceedings{mobilenetv22018,
title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
booktitle={CVPR},
year={2018}
}
```
|
fxtentacle/tevr-token-entropy-predictor-de | dbeb0d0ae7b05690576cb2efd9b5f1e609e1ef90 | 2022-06-28T16:19:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | fxtentacle | null | fxtentacle/tevr-token-entropy-predictor-de | 22 | null | transformers | 8,092 | This repo contains the fully trained ByT5 that was used to estimate per-character entropies. Using it, you can also recreate the illustration in the paper.
## Generate TEVR Tokenizer from Text corpus
(copy of `Generate TEVR Tokenizer.ipynb`)
```python
# TODO: load large text dataset like OSCAR
all_sentences_de = ["Über vier Jahrzehnte gehörte er zu den führenden Bildhauern Niederbayerns", "die katze ist niedlich"] * 1000
```
```python
from huggingface_hub import snapshot_download
data_folder = snapshot_download("fxtentacle/tevr-token-entropy-predictor-de")
```
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(data_folder)
model.to('cuda')
model.eval()
None
```
```python
import torch
def text_to_cross_entropy(text):
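    # Decoder input: a leading 0 (start/pad token) followed by the UTF-8 bytes of
    # `text`; the encoder sees a single dummy token. The returned array holds the
    # cross-entropy of predicting each next byte, i.e. one value per UTF-8 byte.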
ttext = torch.tensor([[0]+list(text.encode('UTF-8'))],dtype=torch.int64).to('cuda')
tone = torch.tensor([[1]],dtype=torch.int32).to('cuda')
logits = model.forward(input_ids=tone, attention_mask=tone, decoder_input_ids=ttext, return_dict=False)[0].detach()
cross_entropy = torch.nn.functional.cross_entropy(input=logits[0][:-1], target=ttext[0][1:], reduction='none').detach().cpu().numpy()
return cross_entropy
```
```python
text = all_sentences_de[0]
cross_entropy = text_to_cross_entropy(text)
print(text)
for i in range(len(text)):
print(text[i], cross_entropy[i])
```
Über vier Jahrzehnte gehörte er zu den führenden Bildhauern Niederbayerns
Ü 7.254014
b 0.17521738
e 0.00046933602
r 0.01929327
0.0003675739
v 0.20927554
i 6.13207
e 0.3896482
r 0.009583538
2.07364
J 0.02978594
a 2.483246
h 0.1591908
r 0.0045124847
z 0.00028653807
e 4.0242333
h 0.031035878
n 0.028907888
t 0.003264101
e 0.0018929198
0.05816966
g 1.2782481
e 3.5076692
h 0.694337
ö 0.5319732
r 0.48336726
t 0.0050443523
e 0.0017187123
0.14511283
e 1.0435015
r 0.18165778
1.0247636
z 0.3594512
u 0.0077577736
2.072764
d 0.17377533
e 1.0727838
n 1.2805216
0.24939628
f 0.27717885
ü 0.012466482
h 4.4356546
r 1.7371752
e 0.051492628
n 2.99407
d 0.009648594
e 0.19667451
n 0.007495021
0.2529005
B 0.004451485
i 0.024661187
l 0.0028436247
d 2.6620464
h 2.825038
a 0.8215449
u 0.011406565
e 2.9599652
r 0.45834702
n 0.11848967
0.5955992
N 0.010709903
i 1.5338714
e 0.1834471
d 5.668945
e 2.052247
r 0.7692907
b 0.0675718
a 0.028234791
y 0.0045266068
e 4.1125383
r 1.2630856
n 5.436057
s 0.46446246
```python
from tqdm import tqdm
sentence_data = all_sentences_de
text_and_entropies = []
for text in tqdm(sentence_data):
text_and_entropies.append([text,text_to_cross_entropy(text)])
```
100%|██████████| 2000/2000 [00:09<00:00, 219.00it/s]
```python
from collections import Counter
# 4s
#target_lengths = [1]
#token_budgets = [36]
# 4m
target_lengths = [4,3,2,1]
token_budgets = [40,80,96,36]
# 4l
#target_lengths = [4,3,2,1]
#token_budgets = [384,320,160,36]
ngrams = [Counter() for l in target_lengths]
tokens = []
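# For each target n-gram length (longest first): mask spans already covered by
# previously chosen tokens, score every remaining n-gram by the sum of its
# per-byte cross-entropies, keep roughly the lowest-entropy 20% of candidates
# per sentence, then add the most frequent survivors to the vocabulary up to
# that length's token budget.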
for tgi,tgl in enumerate(target_lengths):
for row in tqdm(text_and_entropies[1:]):
use_text = row[0]
use_scores = row[1]
for t in tokens:
use_text = use_text.replace(t[0],'#')
candidates = []
for i in range(len(use_text)-(tgl-1)):
part = use_text[i:i+tgl].lower()
if '#' in part: continue
if ' ' in part: continue
if '-' in part: continue
score = sum(use_scores[i:i+tgl])
# print(part, score)
candidates.append([score, part])
candidates.sort(reverse=False)
candidates = candidates[:max(1,int(len(candidates)/5))]
#print(candidates)
ngrams[tgi].update([c[1] for c in candidates])
new_tokens = ngrams[tgi].most_common(token_budgets[tgi])
print(new_tokens)
tokens += new_tokens
#break
```
100%|██████████| 1999/1999 [00:00<00:00, 14645.88it/s]
[('lich', 1000), ('hnte', 999), ('rbay', 999), ('örte', 999), ('hört', 999), ('ahrz', 999), ('jahr', 999), ('bild', 999)]
100%|██████████| 1999/1999 [00:00<00:00, 18574.04it/s]
[('ist', 1000), ('den', 999), ('ber', 999), ('aue', 999), ('ern', 999), ('uer', 999)]
100%|██████████| 1999/1999 [00:00<00:00, 20827.32it/s]
[('ni', 1000), ('ge', 999), ('er', 999), ('fü', 999), ('vi', 999)]
100%|██████████| 1999/1999 [00:00<00:00, 19927.45it/s]
[('e', 2999), ('u', 999), ('n', 999), ('h', 999)]
```python
all_tokens = ['<pad>','<eos>',' ']+[t[0] for t in tokens]+['?']
print(len(all_tokens), all_tokens)
```
27 ['<pad>', '<eos>', ' ', 'lich', 'hnte', 'rbay', 'örte', 'hört', 'ahrz', 'jahr', 'bild', 'ist', 'den', 'ber', 'aue', 'ern', 'uer', 'ni', 'ge', 'er', 'fü', 'vi', 'e', 'u', 'n', 'h', '?']
```python
import json
with open('./tevr-tokenizer.txt','wt') as f:
json.dump(all_tokens, f)
```
```python
import sys
import os
sys.path.append(data_folder)
from text_tokenizer import HajoTextTokenizer
```
```python
text_tokenizer = HajoTextTokenizer('./tevr-tokenizer.txt')
```
```python
sentence = "gehörte"
print(sentence)
encoded = text_tokenizer.encode(sentence)
print(encoded)
print([text_tokenizer.all_tokens[i] for i in encoded])
print([text_tokenizer.decode(encoded)])
```
gehörte
[18, 25, 6]
['ge', 'h', 'örte']
['gehörte']
## Testing Tokenizer File
(copy of `TEVR Explanation.ipynb`)
```python
from huggingface_hub import snapshot_download
data_folder = snapshot_download("fxtentacle/tevr-token-entropy-predictor-de")
```
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(data_folder)
model.to('cuda')
model.eval()
None
```
```python
import torch
def text_to_cross_entropy(text):
ttext = torch.tensor([[0]+list(text.encode('UTF-8'))],dtype=torch.int64).to('cuda')
tone = torch.tensor([[1]],dtype=torch.int32).to('cuda')
logits = model.forward(input_ids=tone, attention_mask=tone, decoder_input_ids=ttext, return_dict=False)[0].detach()
cross_entropy = torch.nn.functional.cross_entropy(input=logits[0][:-1], target=ttext[0][1:], reduction='none').detach().cpu().numpy()
return cross_entropy
```
```python
import sys
import os
sys.path.append(data_folder)
from text_tokenizer import HajoTextTokenizer
```
```python
tokenizer_file = 'text-tokenizer-de-4m.txt'
text_tokenizer = HajoTextTokenizer(data_folder+'/'+tokenizer_file)
```
```python
text = "die katze ist niedlich"
cross_entropy = text_to_cross_entropy(text)
tokens = text_tokenizer.encode(text)
tokens = [text_tokenizer.all_tokens[t] for t in tokens]
print(tokens)
token_sums = []
token_sums2 = []
for t in tokens:
ce = sum(cross_entropy[len(token_sums):len(token_sums)+len(t)])
for r in range(len(t)): token_sums.append(ce / len(t))
token_sums2.append(ce)
print(token_sums)
```
['die', ' ', 'k', 'at', 'ze', ' ', 'ist', ' ', 'n', 'ied', 'lich']
[3.3762913048267365, 3.3762913048267365, 3.3762913048267365, 0.29695791006088257, 4.193424224853516, 2.3430762887001038, 2.3430762887001038, 2.8417416363954544, 2.8417416363954544, 1.1227068901062012, 2.017452405144771, 2.017452405144771, 2.017452405144771, 0.0016304069431498647, 2.580254554748535, 2.3091587026913962, 2.3091587026913962, 2.3091587026913962, 1.0126478232632508, 1.0126478232632508, 1.0126478232632508, 1.0126478232632508]
```python
import numpy as np
html = '<table style="font-size: 20px; font-family: Roboto">'
html += '<tr><td><b>(1)</b></td>'+''.join([f'<td style="text-align:left">{c}</td>' for c in list(text)])+'</tr>'
html += '<tr><td><b>(2)</b></td>'+''.join(['<td>1.0</td>'.format(v) for v in cross_entropy])+'<td>σ²={:3.1f}</td>'.format(np.var([1.0 for v in cross_entropy]))+'</tr>'
html += '<tr><td><b>(3)</b></td>'+''.join(['<td>{:3.1f}</td>'.format(v) for v in cross_entropy])+'<td>σ²={:3.1f}</td>'.format(np.var(cross_entropy))+'</tr>'
html += '<tr><td><b>(4)</b></td>'+''.join([f'<td style="text-align:center" colspan={len(t)}>{t}</td>' for t in tokens])+'</tr>'
html += '<tr><td><b>(5)</b></td>'+''.join([f'<td style="text-align:center" colspan={len(t)}>{"{:3.1f}".format(token_sums2[i])}</td>' for i,t in enumerate(tokens)])+'</tr>'
html += '<tr><td><b>(6)</b></td>'+''.join(['<td>{:3.1f}</td>'.format(v) for v in token_sums])+'<td>σ²={:3.1f}</td>'.format(np.var(token_sums))+'</tr>'
html += '</table>'
import IPython
IPython.display.HTML(html)
```
<table style="font-size: 20px; font-family: Roboto"><tr><td><b>(1)</b></td><td style="text-align:left">d</td><td style="text-align:left">i</td><td style="text-align:left">e</td><td style="text-align:left"> </td><td style="text-align:left">k</td><td style="text-align:left">a</td><td style="text-align:left">t</td><td style="text-align:left">z</td><td style="text-align:left">e</td><td style="text-align:left"> </td><td style="text-align:left">i</td><td style="text-align:left">s</td><td style="text-align:left">t</td><td style="text-align:left"> </td><td style="text-align:left">n</td><td style="text-align:left">i</td><td style="text-align:left">e</td><td style="text-align:left">d</td><td style="text-align:left">l</td><td style="text-align:left">i</td><td style="text-align:left">c</td><td style="text-align:left">h</td></tr><tr><td><b>(2)</b></td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>σ²=0.0</td></tr><tr><td><b>(3)</b></td><td>8.9</td><td>1.0</td><td>0.2</td><td>0.3</td><td>4.2</td><td>1.6</td><td>3.1</td><td>5.4</td><td>0.3</td><td>1.1</td><td>3.0</td><td>3.0</td><td>0.0</td><td>0.0</td><td>2.6</td><td>0.6</td><td>4.4</td><td>1.9</td><td>4.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>σ²=5.0</td></tr><tr><td><b>(4)</b></td><td style="text-align:center" colspan=3>die</td><td style="text-align:center" colspan=1> </td><td style="text-align:center" colspan=1>k</td><td style="text-align:center" colspan=2>at</td><td style="text-align:center" colspan=2>ze</td><td style="text-align:center" colspan=1> </td><td style="text-align:center" colspan=3>ist</td><td style="text-align:center" colspan=1> </td><td style="text-align:center" colspan=1>n</td><td style="text-align:center" colspan=3>ied</td><td style="text-align:center" colspan=4>lich</td></tr><tr><td><b>(5)</b></td><td style="text-align:center" colspan=3>10.1</td><td style="text-align:center" colspan=1>0.3</td><td style="text-align:center" colspan=1>4.2</td><td style="text-align:center" colspan=2>4.7</td><td style="text-align:center" colspan=2>5.7</td><td style="text-align:center" colspan=1>1.1</td><td style="text-align:center" colspan=3>6.1</td><td style="text-align:center" colspan=1>0.0</td><td style="text-align:center" colspan=1>2.6</td><td style="text-align:center" colspan=3>6.9</td><td style="text-align:center" colspan=4>4.1</td></tr><tr><td><b>(6)</b></td><td>3.4</td><td>3.4</td><td>3.4</td><td>0.3</td><td>4.2</td><td>2.3</td><td>2.3</td><td>2.8</td><td>2.8</td><td>1.1</td><td>2.0</td><td>2.0</td><td>2.0</td><td>0.0</td><td>2.6</td><td>2.3</td><td>2.3</td><td>2.3</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>σ²=1.1</td></tr></table>
```python
from text_tokenizer import HajoTextTokenizer
text_tokenizer = HajoTextTokenizer(data_folder+'/'+tokenizer_file)
tt = text_tokenizer.all_tokens
print(', '.join(tt))
```
<pad>, <eos>, , chen, sche, lich, isch, icht, iche, eine, rden, tion, urde, haft, eich, rung, chte, ssen, chaf, nder, tlic, tung, eite, iert, sich, ngen, erde, scha, nden, unge, lung, mmen, eren, ende, inde, erun, sten, iese, igen, erte, iner, tsch, keit, der, die, ter, und, ein, ist, den, ten, ber, ver, sch, ung, ste, ent, ach, nte, auf, ben, eit, des, ers, aus, das, von, ren, gen, nen, lle, hre, mit, iel, uch, lte, ann, lie, men, dem, and, ind, als, sta, elt, ges, tte, ern, wir, ell, war, ere, rch, abe, len, ige, ied, ger, nnt, wei, ele, och, sse, end, all, ahr, bei, sie, ede, ion, ieg, ege, auc, che, rie, eis, vor, her, ang, für, ass, uss, tel, er, in, ge, en, st, ie, an, te, be, re, zu, ar, es, ra, al, or, ch, et, ei, un, le, rt, se, is, ha, we, at, me, ne, ur, he, au, ro, ti, li, ri, eh, im, ma, tr, ig, el, um, la, am, de, so, ol, tz, il, on, it, sc, sp, ko, na, pr, ni, si, fe, wi, ns, ke, ut, da, gr, eu, mi, hr, ze, hi, ta, ss, ng, sa, us, ba, ck, em, kt, ka, ve, fr, bi, wa, ah, gt, di, ab, fo, to, rk, as, ag, gi, hn, s, t, n, m, r, l, f, e, a, b, d, h, k, g, o, i, u, w, p, z, ä, ü, v, ö, j, c, y, x, q, á, í, ō, ó, š, é, č, ?
|
bayartsogt/dlub-2022-mlm | cf4f9a07c20bc9474e125e935276999a8a4f43e3 | 2022-06-30T04:04:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | bayartsogt | null | bayartsogt/dlub-2022-mlm | 22 | null | transformers | 8,093 | ---
tags:
- generated_from_trainer
model-index:
- name: dlub-2022-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dlub-2022-mlm
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4546
## Model description
More information needed
## Intended uses & limitations
More information needed
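Only as a hedged sketch (the base architecture, tokenizer and target language are not documented here), the checkpoint could be queried with the standard fill-mask pipeline:
```python
from transformers import pipeline

# Sketch only: assumes the checkpoint ships a BERT-style tokenizer with a mask token.
unmasker = pipeline("fill-mask", model="bayartsogt/dlub-2022-mlm")
mask = unmasker.tokenizer.mask_token
print(unmasker(f"The capital of France is {mask}."))
```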
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.8099 | 1.0 | 21 | 9.4443 |
| 9.3908 | 2.0 | 42 | 9.2228 |
| 9.1669 | 3.0 | 63 | 9.0097 |
| 8.9354 | 4.0 | 84 | 8.8081 |
| 8.796 | 5.0 | 105 | 8.7315 |
| 8.6805 | 6.0 | 126 | 8.5933 |
| 8.5896 | 7.0 | 147 | 8.5477 |
| 8.525 | 8.0 | 168 | 8.4861 |
| 8.5446 | 9.0 | 189 | 8.4176 |
| 8.4874 | 10.0 | 210 | 8.4546 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
projecte-aina/roberta-base-ca-v2-cased-te | a6b25e5c5227a873c41b4ff602f54748c1abed38 | 2022-07-25T06:51:49.000Z | [
"pytorch",
"roberta",
"text-classification",
"ca",
"dataset:projecte-aina/teca",
"arxiv:1907.11692",
"transformers",
"catalan",
"textual entailment",
"teca",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index"
] | text-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2-cased-te | 22 | null | transformers | 8,094 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "textual entailment"
- "teca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/teca"
metrics:
- "accuracy"
model-index:
- name: roberta-base-ca-v2-cased-te
results:
- task:
type: text-classification # Required. Example: automatic-speech-recognition
dataset:
type: projecte-aina/teca
name: TECA
metrics:
- name: Accuracy
type: accuracy
value: 0.8342
widget:
- text: "M'agrades. T'estimo."
- text: "M'agrada el sol i la calor. A la Garrotxa plou molt."
- text: "El llibre va caure per la finestra. El llibre va sortir volant."
- text: "El meu aniversari és el 23 de maig. Faré anys a finals de maig."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Textual Entailment.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-te** is a Textual Entailment (TE) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
The **roberta-base-ca-v2-cased-te** model can be used to recognize Textual Entailment (TE). The model is limited by its training dataset and may not generalize well for all use cases.
## How to Use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")
example = "M'agrada el sol i la calor. </s></s> A la Garrotxa plou molt."
te_results = nlp(example)
pprint(te_results)
```
## Training
### Training data
We used the TE dataset in Catalan called [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and Metrics
This model was finetuned maximizing accuracy.
### Evaluation results
We evaluated the roberta-base-ca-v2-cased-te on the TECA test set against standard multilingual and monolingual baselines:
| Model | TECA (Accuracy) |
| ------------|:----|
| roberta-base-ca-v2-cased-te | **83.14** |
| BERTa | 79.26 |
| mBERT | 74.63 |
| XLM-RoBERTa | 33.30 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
|
gaunernst/bert-mini-uncased | e9056139b012a62b6f859412b8f60bc38f0055bc | 2022-07-02T03:09:03.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-mini-uncased | 22 | null | transformers | 8,095 | ---
license: apache-2.0
---
|
AdShenoy/fineTuneRoberta | a29987d788aedd85d5f8311515f5636c353fa4ce | 2022-07-08T16:46:33.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | AdShenoy | null | AdShenoy/fineTuneRoberta | 22 | null | transformers | 8,096 | Entry not found |
jakka/t5-small-finetuned-xsum | f3e59f5c176a4d84957b2ff1663ba8179c73bc5b | 2022-07-11T14:00:49.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jakka | null | jakka/t5-small-finetuned-xsum | 22 | null | transformers | 8,097 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 22.215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7323
- Rouge1: 22.215
- Rouge2: 4.296
- Rougel: 17.2091
- Rougelsum: 17.212
- Gen Len: 18.655
## Model description
More information needed
## Intended uses & limitations
More information needed
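A minimal usage sketch (the input text and generation arguments below are illustrative):
```python
from transformers import pipeline

# Sketch only: abstractive summarization with the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="jakka/t5-small-finetuned-xsum")

article = (
    "The city council approved a new cycling plan on Tuesday, promising "
    "twenty kilometres of protected bike lanes over the next two years."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```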
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 3.1005 | 1.0 | 625 | 2.7323 | 22.215 | 4.296 | 17.2091 | 17.212 | 18.655 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
juridics/bertimbau-base-portuguese-sts | 0c02aa449ff7806dfa3daed9e754fd6b32c21b88 | 2022-07-04T15:51:01.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | juridics | null | juridics/bertimbau-base-portuguese-sts | 22 | null | sentence-transformers | 8,098 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# juridics/bertimbau-base-portuguese-sts-scale
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('juridics/bertimbau-base-portuguese-sts-scale')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('juridics/bertimbau-base-portuguese-sts-scale')
model = AutoModel.from_pretrained('juridics/bertimbau-base-portuguese-sts-scale')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=juridics/bertimbau-base-portuguese-sts-scale)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2492 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 2492,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 748,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
uaritm/multilingual_en_ru_uk | 078a388e3a6e6c4cc6bc3e7c78dc4eb032608d74 | 2022-07-04T17:54:13.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | uaritm | null | uaritm/multilingual_en_ru_uk | 22 | null | sentence-transformers | 8,099 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# uaritm/multilingual_en_ru_uk
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('uaritm/multilingual_en_ru_uk')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_ru_uk')
model = AutoModel.from_pretrained('uaritm/multilingual_en_ru_uk')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_ru_uk)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 17482 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |