modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Davlan/afro-xlmr-small | 701aae76654c57e9aa4c5a02b1755df3ffaa0261 | 2022-04-15T14:29:24.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2204.06487",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/afro-xlmr-small | 2 | null | transformers | 25,500 | ---
license: afl-3.0
---
# afro-xlmr-small
AfroXLMR-small was created by [first reducing the vocabulary size](https://aclanthology.org/2020.sustainlp-1.16/) of XLM-R-base from 250K to 70K tokens, followed by MLM adaptation on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu), which cover the major African language families, and 3 high-resource languages (Arabic, French, and English).
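As a quick illustration (not part of the original description), the model can be queried through the standard fill-mask pipeline; the Swahili example sentence below is an arbitrary choice:
```python
from transformers import pipeline

# afro-xlmr-small is an XLM-R-style masked language model, so the fill-mask pipeline applies;
# XLM-R tokenizers use <mask> as the mask token.
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-small")

for prediction in unmasker("Jumatatu, watu wa <mask> walianza maandamano."):
    print(prediction["token_str"], round(prediction["score"], 3))
```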
## Eval results on MasakhaNER (F-score)
language| XLM-R-miniLM| XLM-R-base |XLM-R-large| afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini
-|-|-|-|-|-|-
amh |69.5|70.6|76.2|76.1|70.1|69.7
hau |74.5|89.5|90.5|91.2|91.4|87.7
ibo |81.9|84.8|84.1|87.4|86.6|83.5
kin |68.6|73.3|73.8|78.0|77.5|74.1
lug |64.7|79.7|81.6|82.9|83.2|77.4
luo |11.7|74.9|73.6|75.1|75.4|17.5
pcm |83.2|87.3|89.0|89.6|89.0|85.5
swa |86.3|87.4|89.4|88.6|88.7|86.0
wol |51.7|63.9|67.9|67.4|65.9|59.0
yor |72.0|78.3|78.9|82.1|81.3|75.1
### BibTeX entry and citation info
```
@misc{afro_maft,
doi = {10.48550/ARXIV.2204.06487},
url = {https://arxiv.org/abs/2204.06487},
author = {Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
dreamerdeo/unisar-t5-3b-spider | 4ebde0d1edde644caba8784692492a32efe6ac1c | 2022-04-13T09:33:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dreamerdeo | null | dreamerdeo/unisar-t5-3b-spider | 2 | null | transformers | 25,501 | Entry not found |
thamaine/distilbert-base-uncased-finetuned-squad | 1bc2dae4b1faa2ee455ddf2e7721cc5e31415025 | 2022-06-03T13:37:51.000Z | [
"pytorch",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | thamaine | null | thamaine/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 25,502 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1580
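As a hedged illustration (not from the original card), the checkpoint should work with the standard extractive question-answering pipeline; the question and context below are made up:
```python
from transformers import pipeline

# Extractive QA: the model predicts a start/end answer span inside the given context.
qa = pipeline("question-answering", model="thamaine/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```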
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2102 | 1.0 | 5533 | 1.1573 |
| 0.9535 | 2.0 | 11066 | 1.1236 |
| 0.7513 | 3.0 | 16599 | 1.1580 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
dreamerdeo/unisar-t5-3b-cosql | 399b48d00403fa8ca048d00b5cd28e0ee337c504 | 2022-04-13T10:11:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dreamerdeo | null | dreamerdeo/unisar-t5-3b-cosql | 2 | null | transformers | 25,503 | Entry not found |
dreamerdeo/unisar-t5-3b-sparc | cc6fb1fd1b5cbb1e32df230c4faaa17eab0f34e5 | 2022-04-13T10:19:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dreamerdeo | null | dreamerdeo/unisar-t5-3b-sparc | 2 | null | transformers | 25,504 | Entry not found |
philschmid/MiniLMv2-L12-H384-distilled-finetuned-clinc | bc3de52f2e486a36b13546f5560e2ff9c4759bf4 | 2022-04-13T12:07:00.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | philschmid | null | philschmid/MiniLMv2-L12-H384-distilled-finetuned-clinc | 2 | null | transformers | 25,505 | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9529032258064516
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-finetuned-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3058
- Accuracy: 0.9529
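For illustration only (not part of the generated card), intent predictions on clinc_oos-style utterances can presumably be obtained with the text-classification pipeline; the utterance below is invented:
```python
from transformers import pipeline

# The model is a distilled MiniLMv2 classifier over the clinc_oos (plus) intent labels.
classifier = pipeline("text-classification", model="philschmid/MiniLMv2-L12-H384-distilled-finetuned-clinc")

print(classifier("Can you transfer 100 dollars from my savings to my checking account?"))
```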
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9908 | 1.0 | 239 | 1.6816 | 0.3910 |
| 1.5212 | 2.0 | 478 | 1.2365 | 0.7697 |
| 1.129 | 3.0 | 717 | 0.9209 | 0.8706 |
| 0.8462 | 4.0 | 956 | 0.6978 | 0.9152 |
| 0.6497 | 5.0 | 1195 | 0.5499 | 0.9342 |
| 0.5124 | 6.0 | 1434 | 0.4447 | 0.9445 |
| 0.4196 | 7.0 | 1673 | 0.3797 | 0.9455 |
| 0.3587 | 8.0 | 1912 | 0.3358 | 0.95 |
| 0.3228 | 9.0 | 2151 | 0.3133 | 0.9513 |
| 0.3052 | 10.0 | 2390 | 0.3058 | 0.9529 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
CenIA/albert-large-spanish-finetuned-qa-sqac | 53bd8ffce0d9fa776e04677c60e9ed51ab91a90a | 2022-04-13T19:12:34.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-large-spanish-finetuned-qa-sqac | 2 | null | transformers | 25,506 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-en-lv | 63257c076ff5fff9d06906facd29d53b342b83b3 | 2022-06-01T13:03:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lv",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-lv | 2 | null | transformers | 25,507 | ---
language:
- en
- lv
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-lv
results:
- task:
name: Translation eng-lav
type: translation
args: eng-lav
dataset:
name: flores101-devtest
type: flores_101
args: eng lav devtest
metrics:
- name: BLEU
type: bleu
value: 30.1
- task:
name: Translation eng-lav
type: translation
args: eng-lav
dataset:
name: newsdev2017
type: newsdev2017
args: eng-lav
metrics:
- name: BLEU
type: bleu
value: 28.9
- task:
name: Translation eng-lav
type: translation
args: eng-lav
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-lav
metrics:
- name: BLEU
type: bleu
value: 44.0
- task:
name: Translation eng-lav
type: translation
args: eng-lav
dataset:
name: newstest2017
type: wmt-2017-news
args: eng-lav
metrics:
- name: BLEU
type: bleu
value: 22.1
---
# opus-mt-tc-big-en-lv
Neural machine translation model for translating from English (en) to Latvian (lv).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): lav
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lav/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information on released models: [OPUS-MT eng-lav README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-lav/README.md)
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>lav<< A day has twenty-four hours.",
">>ltg<< He's a good lawyer."
]
model_name = "pytorch-models/opus-mt-tc-big-en-lv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Dienā ir divdesmit četras stundas.
# Vyss ir labs advokats.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-lv")
print(pipe(">>lav<< A day has twenty-four hours."))
# expected output: Dienā ir divdesmit četras stundas.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lav/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lav/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-lav | tatoeba-test-v2021-08-07 | 0.66411 | 44.0 | 1631 | 9932 |
| eng-lav | flores101-devtest | 0.59397 | 30.1 | 1012 | 22092 |
| eng-lav | newsdev2017 | 0.58082 | 28.9 | 2003 | 41503 |
| eng-lav | newstest2017 | 0.53202 | 22.1 | 2001 | 39392 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:36:04 EEST 2022
* port machine: LM0-400-22516.local
|
NeuralNotwork/gpt2-ct | 31b9f8f4089e25b85552d6f6dcca0bca4aac22b4 | 2022-04-13T16:19:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | NeuralNotwork | null | NeuralNotwork/gpt2-ct | 2 | null | transformers | 25,508 | Entry not found |
lucaordronneau/twitter-roberta-base-sentiment-latest-finetuned-FG-CONCAT_SENTENCE-H-NEWS | 576d7e510e19024fe53a221babc657b9f81a1bf5 | 2022-04-13T16:41:38.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | lucaordronneau | null | lucaordronneau/twitter-roberta-base-sentiment-latest-finetuned-FG-CONCAT_SENTENCE-H-NEWS | 2 | null | transformers | 25,509 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: twitter-roberta-base-sentiment-latest-finetuned-FG-CONCAT_SENTENCE-H-NEWS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-latest-finetuned-FG-CONCAT_SENTENCE-H-NEWS
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6335
- Accuracy: 0.5275
- F1: 0.5198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 61 | 1.0568 | 0.4396 | 0.2684 |
| No log | 2.0 | 122 | 1.0518 | 0.4396 | 0.2684 |
| No log | 3.0 | 183 | 1.0584 | 0.4396 | 0.2684 |
| No log | 4.0 | 244 | 1.1720 | 0.3956 | 0.3223 |
| No log | 5.0 | 305 | 1.2473 | 0.5275 | 0.5196 |
| No log | 6.0 | 366 | 1.0789 | 0.5220 | 0.5301 |
| No log | 7.0 | 427 | 1.3556 | 0.5604 | 0.5426 |
| No log | 8.0 | 488 | 1.7314 | 0.5330 | 0.5158 |
| 0.8045 | 9.0 | 549 | 2.2774 | 0.5330 | 0.5161 |
| 0.8045 | 10.0 | 610 | 2.8362 | 0.4451 | 0.4512 |
| 0.8045 | 11.0 | 671 | 2.9130 | 0.5275 | 0.4931 |
| 0.8045 | 12.0 | 732 | 3.1023 | 0.5110 | 0.5010 |
| 0.8045 | 13.0 | 793 | 3.2670 | 0.5385 | 0.5208 |
| 0.8045 | 14.0 | 854 | 3.4151 | 0.4945 | 0.4856 |
| 0.8045 | 15.0 | 915 | 3.7614 | 0.4615 | 0.4458 |
| 0.8045 | 16.0 | 976 | 3.5224 | 0.5220 | 0.5122 |
| 0.0535 | 17.0 | 1037 | 3.5196 | 0.5165 | 0.5102 |
| 0.0535 | 18.0 | 1098 | 3.5791 | 0.5110 | 0.5039 |
| 0.0535 | 19.0 | 1159 | 3.6220 | 0.5220 | 0.5137 |
| 0.0535 | 20.0 | 1220 | 3.6335 | 0.5275 | 0.5198 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls | db4668539e8f05b2640baf2ce3aa412fe2cfa318 | 2022-04-13T16:51:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | liangyuant | null | liangyuant/distilbert-base-uncased-finetuned-num200-450-405cls | 2 | null | transformers | 25,510 | Entry not found |
NeuralNotwork/blenderbot-400M-ct | 6f9f2d013a3ff053f9b0143e01f05493ee47dfef | 2022-04-13T17:11:24.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NeuralNotwork | null | NeuralNotwork/blenderbot-400M-ct | 2 | null | transformers | 25,511 | Entry not found |
liangyuant/distilbert-base-uncased-finetuned-5epoch-num200-450-405cls | 81f1476ae9f77d58649fbf0f2633b87ee13d8eaf | 2022-04-13T17:42:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | liangyuant | null | liangyuant/distilbert-base-uncased-finetuned-5epoch-num200-450-405cls | 2 | null | transformers | 25,512 | Entry not found |
liangyuant/distilbert-base-uncased-finetuned-9epoch-num200-450-405cls | b7d2b28c2ad1506007418d9321766b9fdf599312 | 2022-04-13T18:23:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | liangyuant | null | liangyuant/distilbert-base-uncased-finetuned-9epoch-num200-450-405cls | 2 | null | transformers | 25,513 | Entry not found |
rmihaylov/gpt2-small-theseus-bg | 533baf04e6f7d58453b8a2ad2add32314fcb5d02 | 2022-04-16T17:48:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:2002.02925",
"transformers",
"torch",
"license:mit"
] | text-generation | false | rmihaylov | null | rmihaylov/gpt2-small-theseus-bg | 2 | null | transformers | 25,514 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# GPT-2
Pretrained model on the Bulgarian language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
This is the **SMALL** version compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
The compression was executed on Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
## Intended uses & limitations
You can use the raw model for:
- text generation
- auto-complete
- spelling correction
Or fine-tune it to a downstream task.
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/gpt2-small-theseus-bg"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>>
>>> input_ids = tokenizer.encode(
>>> "Здравей,",
>>> add_special_tokens=False,
>>> return_tensors='pt')
>>>
>>> output_ids = model.generate(
>>> input_ids,
>>> do_sample=True,
>>> max_length=50,
>>> top_p=0.92,
>>> pad_token_id=2,
>>> top_k=0)
>>>
>>> output = tokenizer.decode(output_ids[0])
>>>
>>> output = output.replace('<|endoftext|>', '\n\n\n')
>>> output = output.replace('<|unknown|>', '')
>>> output = output.replace('▁', ' ')
>>> output = output.replace('<|n|>', '\n')
>>>
>>> print(output)
Здравей, извинявай, но не мога да заспя.
Джини се обърна и забеляза колко са прегърнати.
— Почакай, Джини. Не мога да повярвам, че е възможно! Толкова искам да те видя.
— Обеща
```
### Limitations and bias
As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes. |
liangyuant/bert-base-uncased-finetuned-10epoch-num200-450-405cls | 468e10c14c805ef941ae8f09a43a2f02e0bdcba0 | 2022-04-14T11:13:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | liangyuant | null | liangyuant/bert-base-uncased-finetuned-10epoch-num200-450-405cls | 2 | null | transformers | 25,515 | Entry not found |
knok/japanese-distilgpt2 | f9faa84cee65d18d48180d4bf886804acd1c4d1e | 2022-04-15T06:00:51.000Z | [
"pytorch",
"gpt2",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"japanese",
"text-generation",
"lm",
"nlp",
"license:mit"
] | text-generation | false | knok | null | knok/japanese-distilgpt2 | 2 | null | transformers | 25,516 | ---
language: ja
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
datasets:
- wikipedia
- cc100
---
# Japanese distilled GPT-2 model
This model was distilled from [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium), using it as the teacher.
For distillation, the Hugging Face Transformers [distillation code](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) was combined with [rinna's training code](https://github.com/rinnakk/japanese-pretrained-models) and adapted to handle the training data.
Training code: https://github.com/knok/japanese-pretrained-models
## Training environment
Training used GCP credits provided through the Google Startup Program.
The model was trained on an a2-highgpu-4 instance (4x A100) for about four months, with several interruptions and resumes.
## Perplexity
With Wikipedia as the corpus, the model reaches a perplexity of about 40.
Using rinna/japanese-gpt2-medium directly gives about 27, so the distilled model does not match the teacher.
Several attempts to resume training with different hyperparameters only increased the loss, so the model is released in its current state.
## Tokenizer
This repository does not include a tokenizer; use the tokenizer from rinna/japanese-gpt2-medium.
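As a minimal usage sketch (not part of the original card; the exact loading classes are assumed), generation with the distilled model while borrowing the teacher's tokenizer could look like this:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The tokenizer comes from the teacher model, since this repository does not ship one.
tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-medium", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("knok/japanese-distilgpt2")

input_ids = tokenizer.encode("昔々あるところに", return_tensors="pt")
output_ids = model.generate(input_ids, do_sample=True, max_length=50, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```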
# LICENSE
MIT (same as rinna/japanese-gpt2-medium)
|
eleldar/marian-finetuned-kde4-en-to-fr-accelerate-2gpu | b87fc47992a1283a1b3232a91fe24eb0eb3aaa65 | 2022-04-14T15:37:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eleldar | null | eleldar/marian-finetuned-kde4-en-to-fr-accelerate-2gpu | 2 | null | transformers | 25,517 | Entry not found |
NeuralNotwork/gpt2-ul-ts | 764f58c3690c2167d8c02e035c86e6e02d5f361d | 2022-04-14T15:02:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | NeuralNotwork | null | NeuralNotwork/gpt2-ul-ts | 2 | null | transformers | 25,518 | Entry not found |
eleldar/marian-finetuned-kde4-en-to-fr-trainer | 95ae4c01c0830437a5b7c72e5df2d3dd3393379f | 2022-04-15T10:01:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eleldar | null | eleldar/marian-finetuned-kde4-en-to-fr-trainer | 2 | null | transformers | 25,519 | Entry not found |
BigSalmon/InformalToFormalLincoln36 | f4ce5f3fbfa232170780eef7122c6b431b282ef3 | 2022-04-17T17:44:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln36 | 2 | null | transformers | 25,520 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln36")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln36")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
liangyuant/ms-marco-MiniLM-L-12-v2-finetuned-10epoch-num200-450-405cls | cc81b2f5ef8b28501c863b97bfeb82a98a0f919f | 2022-04-15T07:25:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | liangyuant | null | liangyuant/ms-marco-MiniLM-L-12-v2-finetuned-10epoch-num200-450-405cls | 2 | null | transformers | 25,521 | Entry not found |
NeuralNotwork/gpt2-ul-ts-lrn6 | 1de6deab389f01827b15d215ebf94b4f8ef74443 | 2022-04-15T04:40:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | NeuralNotwork | null | NeuralNotwork/gpt2-ul-ts-lrn6 | 2 | null | transformers | 25,522 | Entry not found |
Chikashi/t5-small-finetuned-cnndm2-wikihow1 | 4fb5227b8fe097528f2714422531d7dfec7d824a | 2022-04-15T11:30:20.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm2-wikihow1 | 2 | null | transformers | 25,523 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm2-wikihow1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.6317
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm2-wikihow1
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm1-wikihow1](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm1-wikihow1) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6305
- Rouge1: 24.6317
- Rouge2: 11.8655
- Rougel: 20.3598
- Rougelsum: 23.2467
- Gen Len: 18.9996
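As a rough sketch (not part of the generated card), summaries can presumably be produced the usual T5 way, i.e. with a `summarize:` prefix; the input text is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Chikashi/t5-small-finetuned-cnndm2-wikihow1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints are usually prompted with a task prefix.
text = "summarize: " + (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Officials said construction could begin next year if funding is approved."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```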
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8062 | 1.0 | 71779 | 1.6305 | 24.6317 | 11.8655 | 20.3598 | 23.2467 | 18.9996 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
liangyuant/distilroberta-base-finetuned-10epoch-num200-450-405cls | c5c9c828927128e0db3e1623413b6c0ee8c855d3 | 2022-04-15T08:48:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | liangyuant | null | liangyuant/distilroberta-base-finetuned-10epoch-num200-450-405cls | 2 | null | transformers | 25,524 | Entry not found |
Neria/dummy-model | ce3dea88f896573dcd960ee35e3ebdbc2db3296c | 2022-04-15T07:32:58.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Neria | null | Neria/dummy-model | 2 | null | transformers | 25,525 | Entry not found |
Chikashi/t5-small-finetuned-cnndm2-wikihow2 | 26b6b4d1f194eed3d92c83a93ca2860992c96593 | 2022-04-15T15:13:22.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm2-wikihow2 | 2 | null | transformers | 25,526 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm2-wikihow2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.0962
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm2-wikihow2
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm2-wikihow1](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm2-wikihow1) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3311
- Rouge1: 27.0962
- Rouge2: 10.3575
- Rougel: 23.1099
- Rougelsum: 26.4664
- Gen Len: 18.5197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.517 | 1.0 | 39313 | 2.3311 | 27.0962 | 10.3575 | 23.1099 | 26.4664 | 18.5197 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
profoz/distilbert-toxic | ae8ee0b7378e3243b626ea3e1a83044ffc5f6c46 | 2022-04-15T14:24:52.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | profoz | null | profoz/distilbert-toxic | 2 | null | transformers | 25,527 | Entry not found |
profoz/distilbert-toxic-demo | 863bf9a7f05169ed11431b0818b9a476f372e239 | 2022-04-15T14:52:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | profoz | null | profoz/distilbert-toxic-demo | 2 | null | transformers | 25,528 | Entry not found |
shantimohan/distilbert-base-uncased-finetuned-emotion | df3e37f148f578384aefc03475db17a9d3df1b2a | 2022-04-19T18:07:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | shantimohan | null | shantimohan/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,529 | Entry not found |
Chikashi/t5-small-finetuned-cnndm3-wikihow2 | 5c605c0460c6f3ab0e5ca457e9e5f807f295e79a | 2022-04-15T21:49:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm3-wikihow2 | 2 | null | transformers | 25,530 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm3-wikihow2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.6704
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm3-wikihow2
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm2-wikihow2](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm2-wikihow2) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6265
- Rouge1: 24.6704
- Rouge2: 11.9038
- Rougel: 20.3622
- Rougelsum: 23.2612
- Gen Len: 18.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8071 | 1.0 | 71779 | 1.6265 | 24.6704 | 11.9038 | 20.3622 | 23.2612 | 18.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
profoz/distilbert-toxic-clf | c797ddbaa1f7ab531032531ed2afec253611e517 | 2022-04-15T17:31:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | profoz | null | profoz/distilbert-toxic-clf | 2 | null | transformers | 25,531 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-toxic-clf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-toxic-clf
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Adrian/distilbert-base-uncased-finetuned-squad-colab | 9653936bcdd8e88ed30330d4bbbff2970a75b98b | 2022-04-15T22:41:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Adrian | null | Adrian/distilbert-base-uncased-finetuned-squad-colab | 2 | null | transformers | 25,532 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-colab
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2123 | 1.0 | 5533 | 1.1550 |
| 0.95 | 2.0 | 11066 | 1.1163 |
| 0.7539 | 3.0 | 16599 | 1.1662 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
enelpol/evalatin2022-lemma-closed | 13e3f1f0db693bc25fc7d664aacb69e670cba5b4 | 2022-04-15T20:39:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | enelpol | null | enelpol/evalatin2022-lemma-closed | 2 | null | transformers | 25,533 | Input have to be constructed with prefix ": ", a word form, the colon and a POS, e.g.: `: effugere:VERB`. |
enelpol/evalatin2022-lemma-open | 8cc43b0b81e2d8f89b83d2ca62c45097e6f889d6 | 2022-04-15T21:02:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | enelpol | null | enelpol/evalatin2022-lemma-open | 2 | null | transformers | 25,534 | Entry not found |
enelpol/evalatin2022-pos-closed | d2640f7cbcf705378c646dd8f739f608fdc9d809 | 2022-04-15T20:53:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | enelpol | null | enelpol/evalatin2022-pos-closed | 2 | null | transformers | 25,535 | Entry not found |
edonath/pegasus-samsum | cbd89e943b622f7630b3b89c6a1c6528b021d5a4 | 2022-06-09T07:56:49.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | edonath | null | edonath/pegasus-samsum | 2 | null | transformers | 25,536 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
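As an illustrative sketch (not in the original card), the fine-tuned checkpoint should summarize SAMSum-style chat dialogues; the dialogue below is invented:
```python
from transformers import pipeline

# PEGASUS fine-tuned on SAMSum: abstractive summaries of short chat dialogues.
summarizer = pipeline("summarization", model="edonath/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, I'll book a table."
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```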
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
enelpol/evalatin2022-feats-closed | f6e891acbac80c06f5db29be252b45028d684ac0 | 2022-04-15T21:21:19.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | enelpol | null | enelpol/evalatin2022-feats-closed | 2 | null | transformers | 25,537 | Entry not found |
chrisvinsen/wav2vec2-base-timit-demo-colab | 5415930bfe1ca6820fc0bb7f19eee3df08c81bef | 2022-05-26T12:14:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 25,538 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4617
- Wer: 0.3416
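As a hedged sketch (not part of the generated card), transcription with a fine-tuned wav2vec2 CTC checkpoint typically looks like the following; the audio file path is a placeholder and 16 kHz mono input is assumed:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "chrisvinsen/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path; wav2vec2-base expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```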
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4272 | 4.0 | 500 | 1.3108 | 1.0214 |
| 0.5997 | 8.0 | 1000 | 0.4324 | 0.4310 |
| 0.219 | 12.0 | 1500 | 0.4512 | 0.3864 |
| 0.1264 | 16.0 | 2000 | 0.5002 | 0.3721 |
| 0.0834 | 20.0 | 2500 | 0.4934 | 0.3550 |
| 0.0616 | 24.0 | 3000 | 0.4467 | 0.3475 |
| 0.0477 | 28.0 | 3500 | 0.4617 | 0.3416 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
GENG/wav2vec2.0_lv60_timi_pr | c0cebbbaa42cc768231487aaed2465f5032d8091 | 2022-04-19T05:24:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | GENG | null | GENG/wav2vec2.0_lv60_timi_pr | 2 | null | transformers | 25,539 | Entry not found |
adnankhawaja/R_FB_SMS_LM | cd14d32955ff2856bcb508a33558093d4ba3b749 | 2022-04-16T05:03:36.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adnankhawaja | null | adnankhawaja/R_FB_SMS_LM | 2 | null | transformers | 25,540 | Entry not found |
chrisvinsen/wav2vec2-base-commonvoice-demo-colab-1 | f025037f5d774a5d45b7eabfce2c0c9c39395148 | 2022-04-16T07:14:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-base-commonvoice-demo-colab-1 | 2 | null | transformers | 25,541 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-commonvoice-demo-colab-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-commonvoice-demo-colab-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7289
- Wer: 0.7888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6013 | 13.51 | 500 | 2.7396 | 1.0 |
| 1.1182 | 27.03 | 1000 | 0.7289 | 0.7888 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jason9693/klue-roberta-base-apeach | a804f34e34248e0d19566b94be91de2a77f50d63 | 2022-04-16T06:17:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | jason9693 | null | jason9693/klue-roberta-base-apeach | 2 | null | transformers | 25,542 | Entry not found |
V3RX2000/distilbert-base-uncased-finetuned-imdb | e4bb5d309a5f20cc049df4031adf795adb7683e4 | 2022-04-16T06:46:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | V3RX2000 | null | V3RX2000/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,543 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117 | 1.0 | 157 | 2.4977 |
| 2.5783 | 2.0 | 314 | 2.4241 |
| 2.5375 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
NeuralNotwork/gpt2-simctg | ff5eed6670b653302b1d7cf81192ef414265666f | 2022-04-16T09:13:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | NeuralNotwork | null | NeuralNotwork/gpt2-simctg | 2 | null | transformers | 25,544 | Entry not found |
chrisvinsen/wav2vec2-base-commonvoice-demo-colab-3 | d28cc3aa030d2ed24df8ff1d3ea9df943b38db2a | 2022-04-16T12:10:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-base-commonvoice-demo-colab-3 | 2 | null | transformers | 25,545 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-commonvoice-demo-colab-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-commonvoice-demo-colab-3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6268
- Wer: 0.6391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.617 | 8.2 | 500 | 2.6274 | 1.0 |
| 1.0694 | 16.39 | 1000 | 0.7238 | 0.7443 |
| 0.3988 | 24.59 | 1500 | 0.6268 | 0.6391 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
masakhane/m2m100_418M_fr_fon_rel_news_ft | 3b7ed58c972bafc9a64035e1c3f4c02fe4d6e385 | 2022-04-16T17:53:22.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_fon_rel_news_ft | 2 | null | transformers | 25,546 | ---
license: afl-3.0
---
|
rmihaylov/gpt2-small-bg | 8b535866828afd20dbce56b1121a6aeb6827c328 | 2022-04-16T17:54:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"transformers",
"torch",
"license:mit"
] | text-generation | false | rmihaylov | null | rmihaylov/gpt2-small-bg | 2 | null | transformers | 25,547 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# GPT-2
Pretrained model on Bulgarian language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
This is the **SMALL** version.
The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
## Intended uses & limitations
You can use the raw model for:
- text generation
- auto-complete
- spelling correction
Or fine-tune it to a downstream task.
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/gpt2-small-bg"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>>
>>> input_ids = tokenizer.encode(
>>> "Здравей,",
>>> add_special_tokens=False,
>>> return_tensors='pt')
>>>
>>> output_ids = model.generate(
>>> input_ids,
>>> do_sample=True,
>>> max_length=50,
>>> top_p=0.92,
>>> pad_token_id=2,
>>> top_k=0)
>>>
>>> output = tokenizer.decode(output_ids[0])
>>>
>>> output = output.replace('<|endoftext|>', '\n\n\n')
>>> output = output.replace('<|unknown|>', '')
>>> output = output.replace('▁', ' ')
>>> output = output.replace('<|n|>', '\n')
>>>
>>> print(output)
Здравей, Ани! Не е ли прекрасно?
Нещото се засмя. Зъбите му блеснаха.
— Ще те разведа насам-натам!
Ани се замисли, когато той си тръгна. Може би не искаше да го е
```
### Limitations and bias
As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes. |
michaellutz/roberta-finetuned-stance-assertive-hillary | c107741296189e141787062e4953217e6a41ff39 | 2022-04-16T18:45:46.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | michaellutz | null | michaellutz/roberta-finetuned-stance-assertive-hillary | 2 | null | transformers | 25,548 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-stance-assertive-hillary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-stance-assertive-hillary
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
michaellutz/ms-marco-finetuned-stance-assertive-hillary | 72a643ae2c37c871c15bd2f5fe092b40f9e73934 | 2022-04-16T18:26:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | michaellutz | null | michaellutz/ms-marco-finetuned-stance-assertive-hillary | 2 | null | transformers | 25,549 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ms-marco-finetuned-stance-assertive-hillary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ms-marco-finetuned-stance-assertive-hillary
This model is a fine-tuned version of [sentence-transformers/paraphrase-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
js3078/autotrain-BerTweet-749522913 | b6f6a19d4cce5974e9ea95282a8d9a436ed4afa4 | 2022-04-16T22:34:05.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:js3078/autotrain-data-BerTweet",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | js3078 | null | js3078/autotrain-BerTweet-749522913 | 2 | null | transformers | 25,550 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- js3078/autotrain-data-BerTweet
co2_eq_emissions: 4.093939667345746
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 749522913
- CO2 Emissions (in grams): 4.093939667345746
## Validation Metrics
- Loss: 0.6473096609115601
- Accuracy: 0.75
- Macro F1: 0.7506205181665155
- Micro F1: 0.75
- Weighted F1: 0.7506205181665155
- Macro Precision: 0.7555096418732782
- Micro Precision: 0.75
- Weighted Precision: 0.7555096418732782
- Macro Recall: 0.75
- Micro Recall: 0.75
- Weighted Recall: 0.75
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/js3078/autotrain-BerTweet-749522913
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("js3078/autotrain-BerTweet-749522913", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("js3078/autotrain-BerTweet-749522913", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
creynier/wav2vec2-base-swbd-turn-eos-long_utt_removed2 | 0181aa8c038976a65d9f3d957b80edb45a520a7d | 2022-04-17T17:46:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_utt_removed2 | 2 | null | transformers | 25,551 | Entry not found |
MrBananaHuman/engpt_medium_to_kogpt_medium_wo_freezing | c644796376e22fd27736766bb6b3c7a1b6bac437 | 2022-04-17T02:14:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | MrBananaHuman | null | MrBananaHuman/engpt_medium_to_kogpt_medium_wo_freezing | 2 | null | transformers | 25,552 | Entry not found |
rmihaylov/bert-base-theseus-bg | a942e6601940fe18a12557270c699b870ac5d8b9 | 2022-04-17T05:02:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:1810.04805",
"arxiv:2002.02925",
"transformers",
"torch",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | rmihaylov | null | rmihaylov/bert-base-theseus-bg | 2 | null | transformers | 25,553 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# BERT BASE (cased)
Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it does make a difference
between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
The model was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
>>> 'fill-mask',
>>> model='rmihaylov/bert-base-theseus-bg',
>>> tokenizer='rmihaylov/bert-base-theseus-bg',
>>> device=0,
>>> revision=None)
>>> output = model("София е [MASK] на България.")
>>> print(output)
[{'score': 0.1586454212665558,
'sequence': 'София е столица на България.',
'token': 76074,
'token_str': 'столица'},
{'score': 0.12992817163467407,
'sequence': 'София е столица на България.',
'token': 2659,
'token_str': 'столица'},
{'score': 0.06064048036932945,
'sequence': 'София е Перлата на България.',
'token': 102146,
'token_str': 'Перлата'},
{'score': 0.034687548875808716,
'sequence': 'София е представителката на България.',
'token': 105456,
'token_str': 'представителката'},
{'score': 0.03053216263651848,
'sequence': 'София е присъединяването на България.',
'token': 18749,
'token_str': 'присъединяването'}]
```
|
chrisvinsen/xlsr-wav2vec2-base-commonvoice-demo-colab-1 | a367f02b7e78a856e8e827646fd87a488b8c3ac0 | 2022-04-17T06:13:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-base-commonvoice-demo-colab-1 | 2 | null | transformers | 25,554 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xlsr-wav2vec2-base-commonvoice-demo-colab-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-wav2vec2-base-commonvoice-demo-colab-1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3736
- Wer: 0.5517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.5523 | 8.06 | 500 | 2.8965 | 1.0 |
| 2.4454 | 16.13 | 1000 | 0.7292 | 0.8364 |
| 0.6349 | 24.19 | 1500 | 0.3736 | 0.5517 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
rmihaylov/bert-base-nli-theseus-bg | 75f77cd0f2f639e3ab588d177351172797c21379 | 2022-04-17T06:35:37.000Z | [
"pytorch",
"bert",
"text-classification",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:1810.04805",
"arxiv:2002.02925",
"transformers",
"torch",
"license:mit"
] | text-classification | false | rmihaylov | null | rmihaylov/bert-base-nli-theseus-bg | 2 | null | transformers | 25,555 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# BERT BASE (cased) finetuned on Bulgarian natural-language-inference data
Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it does make a difference
between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
It was finetuned on private NLI Bulgarian data.
Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
### How to use
Here is how to use this model in PyTorch:
```python
>>> import torch
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer
>>>
>>> model_id = 'rmihaylov/bert-base-nli-theseus-bg'
>>> model = AutoModelForSequenceClassification.from_pretrained(model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> inputs = tokenizer.encode_plus(
>>> 'Няколко момчета играят футбол.',
>>> 'Няколко момичета играят футбол.',
>>> return_tensors='pt')
>>>
>>> outputs = model(**inputs)
>>> contradiction, entailment, neutral = torch.softmax(outputs[0][0], dim=0).detach()
>>> contradiction, entailment, neutral
(tensor(0.9998), tensor(0.0001), tensor(5.9929e-05))
```
|
adnankhawaja/B_T_SMS_LM | bb2eec0fe829073fbc34fae690769c458d921250 | 2022-04-17T07:38:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adnankhawaja | null | adnankhawaja/B_T_SMS_LM | 2 | null | transformers | 25,556 | Entry not found |
adnankhawaja/B_FB_SMS_LM | de1a29dfcbed8f075d1782d8563e6e03a1298580 | 2022-04-17T07:55:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | adnankhawaja | null | adnankhawaja/B_FB_SMS_LM | 2 | null | transformers | 25,557 | Entry not found |
masakhane/m2m100_418M_mos_fr_news | a39c8490caedc750b23e654519576858e5972f0d | 2022-04-17T08:15:54.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_mos_fr_news | 2 | null | transformers | 25,558 | ---
license: afl-3.0
---
|
ssydyc/distilbert-base-uncased-finetuned-emotion | 1efd3f0915e1132d80a9bf9feb4189469ed95a9a | 2022-04-17T11:28:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | ssydyc | null | ssydyc/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,559 | Entry not found |
masakhane/m2m100_418M_fr_mos_rel_news | c3c56a200afb57c7813985408ee5969d2efc93c7 | 2022-04-17T11:50:04.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_mos_rel_news | 2 | null | transformers | 25,560 | ---
license: afl-3.0
---
|
apkbala107/electratamilpos | 92ca08bc2d84c97ab258b4310a56b09f0cef223a | 2022-04-17T12:19:58.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | apkbala107 | null | apkbala107/electratamilpos | 2 | null | transformers | 25,561 | Entry not found |
202015004/Studen1_model_17_april | ee2402306d8311496f9341c03cec76e07c0bacc2 | 2022-04-17T20:40:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/Studen1_model_17_april | 2 | null | transformers | 25,562 | Entry not found |
speydach/layoutlmv2-finetuned-cord2 | 690bf40e067957425a3f0fa2dbe38752fe98ee70 | 2022-04-18T04:44:45.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | speydach | null | speydach/layoutlmv2-finetuned-cord2 | 2 | null | transformers | 25,563 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord2
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln37 | ff7dd2f7259ffb6cadf0ef82f5b687877dbe7024 | 2022-04-18T03:12:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln37 | 2 | null | transformers | 25,564 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln37")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln37")
```
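One possible way to run a prompt from the templates below through the loaded model; the sampling settings here are arbitrary choices for illustration, not a recommendation from the model author:

```python
# Build a prompt in the informal-to-formal format shown below and sample a continuation.
prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(
    input_ids,
    do_sample=True,
    top_p=0.9,
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```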
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
csikasote/xls-r-300m-bemba-15hrs | 079d2412c444117c19ed75b432308d07e808ab43 | 2022-04-18T15:18:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xls-r-300m-bemba-15hrs | 2 | null | transformers | 25,565 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-300m-bemba-15hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-bemba-15hrs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Wer: 0.3481
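A minimal decoding sketch, assuming the repository includes the processor and vocabulary files produced by the training script; "audio.wav" is a placeholder path for a 16 kHz Bemba recording, and librosa is only used here to load the audio:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("csikasote/xls-r-300m-bemba-15hrs")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/xls-r-300m-bemba-15hrs")

# Load audio at the 16 kHz sampling rate the model expects, then greedy-decode the CTC logits.
speech, _ = librosa.load("audio.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```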
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5142 | 0.71 | 400 | 0.5585 | 0.7501 |
| 0.6351 | 1.43 | 800 | 0.3185 | 0.5058 |
| 0.4892 | 2.15 | 1200 | 0.2813 | 0.4655 |
| 0.4021 | 2.86 | 1600 | 0.2539 | 0.4159 |
| 0.3505 | 3.58 | 2000 | 0.2411 | 0.4000 |
| 0.3045 | 4.29 | 2400 | 0.2512 | 0.3951 |
| 0.274 | 5.01 | 2800 | 0.2402 | 0.3922 |
| 0.2335 | 5.72 | 3200 | 0.2403 | 0.3764 |
| 0.2032 | 6.44 | 3600 | 0.2383 | 0.3657 |
| 0.1783 | 7.16 | 4000 | 0.2603 | 0.3518 |
| 0.1487 | 7.87 | 4400 | 0.2479 | 0.3577 |
| 0.1281 | 8.59 | 4800 | 0.2638 | 0.3518 |
| 0.113 | 9.3 | 5200 | 0.2754 | 0.3481 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
supriyaraj47/dummy | 1bd0d8eee34fb60ff050d50b432023f3dfd371b5 | 2022-04-18T00:58:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | supriyaraj47 | null | supriyaraj47/dummy | 2 | null | transformers | 25,566 | Entry not found |
ToToKr/kobigbird-bert-base-finetuned-klue-goorm-q-a-task | 67dbd6e46f59eb473c1c2721252c05b60d176217 | 2022-04-18T03:31:17.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ToToKr | null | ToToKr/kobigbird-bert-base-finetuned-klue-goorm-q-a-task | 2 | null | transformers | 25,567 | ---
tags:
- generated_from_trainer
model-index:
- name: kobigbird-bert-base-finetuned-klue-goorm-q-a-task
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobigbird-bert-base-finetuned-klue-goorm-q-a-task
This model is a fine-tuned version of [ToToKr/kobigbird-bert-base-finetuned-klue](https://huggingface.co/ToToKr/kobigbird-bert-base-finetuned-klue) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2115
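A minimal extractive question-answering sketch; the Korean question/context pair is illustrative only ("What is the capital of South Korea?" / "The capital of South Korea is Seoul."):

```python
from transformers import pipeline

# Extract an answer span from the context for the given question.
qa = pipeline("question-answering", model="ToToKr/kobigbird-bert-base-finetuned-klue-goorm-q-a-task")
result = qa(question="대한민국의 수도는 어디인가요?", context="대한민국의 수도는 서울이다.")
print(result["answer"], result["score"])
```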
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6159 | 0.09 | 500 | 1.7522 |
| 1.554 | 0.17 | 1000 | 1.5953 |
| 1.4493 | 0.26 | 1500 | 1.3769 |
| 1.4051 | 0.35 | 2000 | 1.3746 |
| 1.3251 | 0.43 | 2500 | 1.5049 |
| 1.2855 | 0.52 | 3000 | 1.1733 |
| 1.2226 | 0.6 | 3500 | 1.1538 |
| 1.1907 | 0.69 | 4000 | 1.1470 |
| 1.1655 | 0.78 | 4500 | 1.0759 |
| 1.1411 | 0.86 | 5000 | 1.0676 |
| 1.0752 | 0.95 | 5500 | 0.9894 |
| 0.9389 | 1.04 | 6000 | 1.2020 |
| 0.8457 | 1.12 | 6500 | 1.1004 |
| 0.7977 | 1.21 | 7000 | 1.1397 |
| 0.818 | 1.29 | 7500 | 1.2960 |
| 0.8142 | 1.38 | 8000 | 1.2115 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
joniponi/facility-classifier | 4674ea804c20a6491b0d1e90cf8f29a3c679dee8 | 2022-04-18T05:00:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | joniponi | null | joniponi/facility-classifier | 2 | null | transformers | 25,568 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: facility-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facility-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4422
- Accuracy: 0.7872
- F1: 0.7854
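A minimal classification sketch; the label names returned come from the fine-tuned config and are not documented in this card, and the example sentence is illustrative only:

```python
from transformers import pipeline

# Score an example facility-related message with the fine-tuned classifier.
classifier = pipeline("text-classification", model="joniponi/facility-classifier")
print(classifier("The hallway lights have been flickering all week."))
```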
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.671 | 1.0 | 12 | 0.6529 | 0.6596 | 0.6441 |
| 0.5845 | 2.0 | 24 | 0.5722 | 0.7447 | 0.7461 |
| 0.4902 | 3.0 | 36 | 0.5091 | 0.7447 | 0.7461 |
| 0.378 | 4.0 | 48 | 0.4797 | 0.7660 | 0.7670 |
| 0.354 | 5.0 | 60 | 0.4487 | 0.8085 | 0.8029 |
| 0.2865 | 6.0 | 72 | 0.4422 | 0.7872 | 0.7854 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
rmihaylov/roberta-base-nli-stsb-theseus-bg | d60f80adfa444291a568d854c814918483d8fd8c | 2022-04-18T06:59:18.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:2004.09813",
"arxiv:2002.02925",
"transformers",
"torch",
"license:mit",
"sentence-similarity"
] | sentence-similarity | false | rmihaylov | null | rmihaylov/roberta-base-nli-stsb-theseus-bg | 2 | null | transformers | 25,569 | ---
inference: false
pipeline_tag: sentence-similarity
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# ROBERTA BASE (cased) trained on private Bulgarian-English parallel data
This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences.
Following [Sentence-BERT](https://arxiv.org/abs/2004.09813), training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence.
This model is cased: it does make a difference between bulgarian and Bulgarian.
It was trained on private Bulgarian-English parallel data.
Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
### How to use
Here is how to use this model in PyTorch:
```python
>>> import scipy
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-nli-stsb-theseus-bg')
>>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-nli-stsb-theseus-bg')
>>>
>>> def embed(text):
>>> inputs = tokenizer.encode_plus(text, return_tensors='pt')
>>> outputs = model(**inputs)
>>> sequence_output = outputs[0]
>>> input_mask_expanded = inputs['attention_mask'].unsqueeze(-1).expand(sequence_output.size()).float()
>>> embeddings = torch.sum(sequence_output * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
>>> return embeddings.detach().numpy()[0]
>>>
>>>
>>> query_embedding = embed("Какви са съставките на бисквитките?")
>>>
>>> questions = [
>>> "Какво е бисквитка?",
>>> "От какво са направени бисквитките?",
>>> "Използват ли в Англия думата бисквитки?",
>>> "Къде се правят бисквитките?",
>>> "Какви видове бисквитки има?",
>>> "Къде човек може да купи бисквитки?",
>>> "Откъде дойде думата бисквитка?",
>>> "Кое е чудовището на бисквитките?",
>>> "Как да си направите бисквитки у дома?",
>>> "Колко калории има типичната бисквитка?",
>>> "Какви напитки вървят добре с бисквитките?",
>>> "Бисквитките наричат ли се също сладки?"
>>> ]
>>>
>>> corpus, corpus_embeddings = [], []
>>> for question in questions:
>>> embedding = embed(question)
>>> corpus.append(question)
>>> corpus_embeddings.append(embedding)
>>>
>>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]
>>>
>>> results = zip(range(len(distances)), distances)
>>> results = sorted(results, key=lambda x: x[1])
>>>
>>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results])
[['От какво са направени бисквитките?', 0.9855158537034977],
['Къде се правят бисквитките?', 0.9774093134195002],
['Какви видове бисквитки има?', 0.9766014240577192],
['Използват ли в Англия думата бисквитки?', 0.9446492058523037],
['Кое е чудовището на бисквитките?', 0.9269786184641834],
['Къде човек може да купи бисквитки?', 0.9268900421152592],
['Какво е бисквитка?', 0.9188155080718263],
['Бисквитките наричат ли се също сладки?', 0.9060368627614406],
['Откъде дойде думата бисквитка?', 0.9048309659657036],
['Какви напитки вървят добре с бисквитките?', 0.890836765118977],
['Как да си направите бисквитки у дома?', 0.8878968487540497],
['Колко калории има типичната бисквитка?', 0.8652821650136402]]
```
|
rmihaylov/roberta-base-nli-stsb-bg | 632772f3791fa750a719810d8785dcc565f6f731 | 2022-04-18T07:19:42.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:2004.09813",
"transformers",
"torch",
"license:mit",
"sentence-similarity"
] | sentence-similarity | false | rmihaylov | null | rmihaylov/roberta-base-nli-stsb-bg | 2 | null | transformers | 25,570 | ---
inference: false
pipeline_tag: sentence-similarity
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# ROBERTA BASE (cased) trained on private Bulgarian-English parallel data
This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences.
Following [Sentence-BERT](https://arxiv.org/abs/2004.09813), training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence.
This model is cased: it does make a difference between bulgarian and Bulgarian.
It was trained on private Bulgarian-English parallel data.
### How to use
Here is how to use this model in PyTorch:
```python
>>> import scipy
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-nli-stsb-bg')
>>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-nli-stsb-bg')
>>>
>>> def embed(text):
>>> inputs = tokenizer.encode_plus(text, return_tensors='pt')
>>> outputs = model(**inputs)
>>> sequence_output = outputs[0]
>>> input_mask_expanded = inputs['attention_mask'].unsqueeze(-1).expand(sequence_output.size()).float()
>>> embeddings = torch.sum(sequence_output * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
>>> return embeddings.detach().numpy()[0]
>>>
>>>
>>> query_embedding = embed("Какви са съставките на бисквитките?")
>>>
>>> questions = [
>>> "Какво е бисквитка?",
>>> "От какво са направени бисквитките?",
>>> "Използват ли в Англия думата бисквитки?",
>>> "Къде се правят бисквитките?",
>>> "Какви видове бисквитки има?",
>>> "Къде човек може да купи бисквитки?",
>>> "Откъде дойде думата бисквитка?",
>>> "Кое е чудовището на бисквитките?",
>>> "Как да си направите бисквитки у дома?",
>>> "Колко калории има типичната бисквитка?",
>>> "Какви напитки вървят добре с бисквитките?",
>>> "Бисквитките наричат ли се също сладки?"
>>> ]
>>>
>>> corpus, corpus_embeddings = [], []
>>> for question in questions:
>>> embedding = embed(question)
>>> corpus.append(question)
>>> corpus_embeddings.append(embedding)
>>>
>>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]
>>>
>>> results = zip(range(len(distances)), distances)
>>> results = sorted(results, key=lambda x: x[1])
>>>
>>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results])
[['Какви видове бисквитки има?', 0.9749538412820795],
['От какво са направени бисквитките?', 0.9720467855849998],
['Къде се правят бисквитките?', 0.9622582076645853],
['Какво е бисквитка?', 0.9352896865855094],
['Използват ли в Англия думата бисквитки?', 0.8981422328370646],
['Откъде дойде думата бисквитка?', 0.8955433698658758],
['Кое е чудовището на бисквитките?', 0.8902666858687854],
['Бисквитките наричат ли се също сладки?', 0.8839303534407483],
['Какви напитки вървят добре с бисквитките?', 0.8582087653310524],
['Къде човек може да купи бисквитки?', 0.8570532540073935],
['Колко калории има типичната бисквитка?', 0.8387529949080176],
['Как да си направите бисквитки у дома?', 0.8243675958097614]]
```
|
PSW/2nd-ut-pred-pre-train | b3f03e3ee530d428630fef4e6ae6fb2115cce6e5 | 2022-04-18T07:15:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/2nd-ut-pred-pre-train | 2 | null | transformers | 25,571 | Entry not found |
csikasote/xls-r-300m-bemba-5hrs | 6bdf5cf21323f9ef9dc98ad4ca731393d0c03fa5 | 2022-04-18T14:52:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xls-r-300m-bemba-5hrs | 2 | null | transformers | 25,572 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-300m-bemba-5hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-bemba-5hrs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3129
- Wer: 0.4430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4473 | 2.16 | 400 | 0.4687 | 0.6798 |
| 0.5882 | 4.32 | 800 | 0.3235 | 0.5089 |
| 0.3508 | 6.49 | 1200 | 0.3190 | 0.4695 |
| 0.21 | 8.65 | 1600 | 0.3129 | 0.4430 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MeshalAlamr/wav2vec2-xls-r-300m-ar-2 | a5d3216f61274d1cad8e79a4b8c43b4058034d4e | 2022-04-21T06:53:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MeshalAlamr | null | MeshalAlamr/wav2vec2-xls-r-300m-ar-2 | 2 | null | transformers | 25,573 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ar-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ar-2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4764
- Wer: 0.3073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0851 | 1.18 | 400 | 0.5614 | 0.4888 |
| 0.691 | 2.35 | 800 | 0.6557 | 0.5558 |
| 0.6128 | 3.53 | 1200 | 0.5852 | 0.5070 |
| 0.543 | 4.71 | 1600 | 0.5591 | 0.4838 |
| 0.5185 | 5.88 | 2000 | 0.6649 | 0.5514 |
| 0.4816 | 7.06 | 2400 | 0.5598 | 0.4689 |
| 0.4336 | 8.24 | 2800 | 0.5384 | 0.4515 |
| 0.405 | 9.41 | 3200 | 0.4987 | 0.4138 |
| 0.3811 | 10.59 | 3600 | 0.5427 | 0.4644 |
| 0.3539 | 11.76 | 4000 | 0.4881 | 0.4159 |
| 0.3299 | 12.94 | 4400 | 0.5160 | 0.4198 |
| 0.3096 | 14.12 | 4800 | 0.5019 | 0.4077 |
| 0.2881 | 15.29 | 5200 | 0.5146 | 0.4140 |
| 0.2894 | 16.47 | 5600 | 0.4861 | 0.4026 |
| 0.2461 | 17.65 | 6000 | 0.4765 | 0.3742 |
| 0.2371 | 18.82 | 6400 | 0.4679 | 0.3672 |
| 0.2182 | 20.0 | 6800 | 0.4699 | 0.3603 |
| 0.1942 | 21.18 | 7200 | 0.4769 | 0.3519 |
| 0.1823 | 22.35 | 7600 | 0.4719 | 0.3497 |
| 0.1682 | 23.53 | 8000 | 0.4876 | 0.3456 |
| 0.1526 | 24.71 | 8400 | 0.4591 | 0.3300 |
| 0.137 | 25.88 | 8800 | 0.4819 | 0.3314 |
| 0.1283 | 27.06 | 9200 | 0.4823 | 0.3213 |
| 0.1174 | 28.24 | 9600 | 0.4879 | 0.3174 |
| 0.1104 | 29.41 | 10000 | 0.4764 | 0.3073 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
DongHyoungLee/dummy-model | 025016c6c24ca35cd7be916e5a92e7e1763237f7 | 2022-04-19T02:23:10.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | DongHyoungLee | null | DongHyoungLee/dummy-model | 2 | null | transformers | 25,574 | Entry not found |
csikasote/xls-r-300m-bemba-20hrs | e5496adfd02a62a60728bab085fbbcd80256a3d4 | 2022-04-18T18:43:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xls-r-300m-bemba-20hrs | 2 | null | transformers | 25,575 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-300m-bemba-20hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-bemba-20hrs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2815
- Wer: 0.3435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3301 | 0.54 | 400 | 0.5177 | 0.7570 |
| 0.6437 | 1.08 | 800 | 0.3580 | 0.5658 |
| 0.5149 | 1.61 | 1200 | 0.2953 | 0.5004 |
| 0.4547 | 2.15 | 1600 | 0.2701 | 0.4464 |
| 0.4084 | 2.69 | 2000 | 0.2743 | 0.4383 |
| 0.3606 | 3.23 | 2400 | 0.2482 | 0.3952 |
| 0.3227 | 3.76 | 2800 | 0.2461 | 0.3965 |
| 0.3025 | 4.3 | 3200 | 0.2484 | 0.4015 |
| 0.2697 | 4.84 | 3600 | 0.2357 | 0.3838 |
| 0.2443 | 5.38 | 4000 | 0.2385 | 0.3822 |
| 0.2287 | 5.91 | 4400 | 0.2353 | 0.3747 |
| 0.1977 | 6.45 | 4800 | 0.2337 | 0.3624 |
| 0.1895 | 6.99 | 5200 | 0.2319 | 0.3568 |
| 0.1561 | 7.53 | 5600 | 0.2540 | 0.3561 |
| 0.1448 | 8.06 | 6000 | 0.2772 | 0.3612 |
| 0.1221 | 8.6 | 6400 | 0.2755 | 0.3596 |
| 0.1133 | 9.14 | 6800 | 0.2733 | 0.3495 |
| 0.0969 | 9.68 | 7200 | 0.2815 | 0.3435 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
wangmiaobeng/distilbert-base-uncased-finetuned-imdb-accelerate | 56361a3197cb9b1ad7406da15a288a245c05f89e | 2022-04-18T12:25:15.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | wangmiaobeng | null | wangmiaobeng/distilbert-base-uncased-finetuned-imdb-accelerate | 2 | null | transformers | 25,576 | Entry not found |
surajp/sanbert-from-indicbert | 425301de0ee2a78833432482eca4cde10f33393d | 2022-04-18T13:58:02.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | surajp | null | surajp/sanbert-from-indicbert | 2 | null | transformers | 25,577 | Entry not found |
csikasote/xls-r-1b-bemba-5hrs | a5cd5ac847e65ff9f19aeb136a20573947b564b1 | 2022-04-20T06:59:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xls-r-1b-bemba-5hrs | 2 | null | transformers | 25,578 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-1b-bemba-5hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-1b-bemba-5hrs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2659
- Wer: 0.3884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1067 | 1.08 | 400 | 0.4681 | 0.8206 |
| 0.5003 | 2.16 | 800 | 0.3052 | 0.5253 |
| 0.3641 | 3.24 | 1200 | 0.2665 | 0.4437 |
| 0.2847 | 4.32 | 1600 | 0.2526 | 0.4267 |
| 0.2324 | 5.41 | 2000 | 0.2579 | 0.4211 |
| 0.1789 | 6.49 | 2400 | 0.2593 | 0.3958 |
| 0.1302 | 7.57 | 2800 | 0.2659 | 0.3884 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lgodwangl/sent | e4128d79df7ebbcf3311fceb49a6f0075021563e | 2022-04-18T23:52:21.000Z | [
"pytorch",
"perceiver",
"text-classification",
"transformers"
] | text-classification | false | lgodwangl | null | lgodwangl/sent | 2 | null | transformers | 25,579 | Entry not found |
younggns/mf_distilbert | ad5ef03396a4ccb01d98371af10c5dd824230543 | 2022-04-19T04:41:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | younggns | null | younggns/mf_distilbert | 2 | null | transformers | 25,580 | Entry not found |
fuck/distilbert-base-uncased-finetuned-cola | 70e11e4741f9838adbfa09bb40c03b376252cdf6 | 2022-04-19T04:31:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | fuck | null | fuck/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 25,581 | Entry not found |
AlirezaBaneshi/autotrain-test2-756523213 | 0331c820360f213235c25f3df97190f5f003ebd4 | 2022-04-19T07:34:55.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AlirezaBaneshi | null | AlirezaBaneshi/autotrain-test2-756523213 | 2 | null | transformers | 25,582 | Entry not found |
AlirezaBaneshi/autotrain-test2-756523214 | 1589ef64082d130ae36f6e2de1a2816dbdfbd2d8 | 2022-04-19T07:40:58.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AlirezaBaneshi | null | AlirezaBaneshi/autotrain-test2-756523214 | 2 | null | transformers | 25,583 | Entry not found |
csikasote/xlsr-53-bemba-15hrs | e9c5e4d98a0203b382bd7ce19e24c5459c3536ed | 2022-04-19T13:30:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xlsr-53-bemba-15hrs | 2 | null | transformers | 25,584 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xlsr-53-bemba-15hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-53-bemba-15hrs
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2789
- Wer: 0.3751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4138 | 0.71 | 400 | 0.4965 | 0.7239 |
| 0.5685 | 1.43 | 800 | 0.2939 | 0.4839 |
| 0.4471 | 2.15 | 1200 | 0.2728 | 0.4467 |
| 0.3579 | 2.86 | 1600 | 0.2397 | 0.3965 |
| 0.3087 | 3.58 | 2000 | 0.2427 | 0.4015 |
| 0.2702 | 4.29 | 2400 | 0.2539 | 0.4112 |
| 0.2406 | 5.01 | 2800 | 0.2376 | 0.3885 |
| 0.2015 | 5.72 | 3200 | 0.2492 | 0.3844 |
| 0.1759 | 6.44 | 3600 | 0.2562 | 0.3768 |
| 0.1572 | 7.16 | 4000 | 0.2789 | 0.3751 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
csikasote/xlsr-53-bemba-10hrs | d5dddf1680a345818d5768aa150002300764426a | 2022-04-19T13:09:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xlsr-53-bemba-10hrs | 2 | null | transformers | 25,585 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xlsr-53-bemba-10hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-53-bemba-10hrs
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- Wer: 0.4032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3207 | 1.07 | 400 | 0.3720 | 0.5923 |
| 0.5688 | 2.14 | 800 | 0.3073 | 0.5002 |
| 0.3927 | 3.22 | 1200 | 0.2678 | 0.4521 |
| 0.316 | 4.29 | 1600 | 0.2703 | 0.4261 |
| 0.2531 | 5.36 | 2000 | 0.2663 | 0.4198 |
| 0.2051 | 6.43 | 2400 | 0.2614 | 0.4037 |
| 0.1584 | 7.51 | 2800 | 0.2853 | 0.4046 |
| 0.1343 | 8.58 | 3200 | 0.3072 | 0.4121 |
| 0.1031 | 9.65 | 3600 | 0.3190 | 0.4032 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
creynier/wav2vec2-base-swbd-turn-eos-long_short_utt_removed | e53b0b7b902482e6e53bbdd96b8963d924249074 | 2022-04-19T09:56:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short_utt_removed | 2 | null | transformers | 25,586 | Entry not found |
s50227harry/TCFD-BERT | cc410dcba48ab41de854ca10da4f5736500529fc | 2022-07-21T14:48:39.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | s50227harry | null | s50227harry/TCFD-BERT | 2 | null | transformers | 25,587 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: TCFD-BERT
results: []
---
Using the ClimateBERT-f model as its starting point, the TCFD-BERT language model is additionally pre-trained on paragraphs specifically related to climate change.
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TCFD-BERT
It achieves the following results on the evaluation set:
- Loss: 1.1325
## Model description
More information needed
## Intended uses & limitations
More information needed
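The original card leaves this section empty; as a hedged sketch of one plausible use, the checkpoint can presumably be queried as an ordinary RoBERTa-style masked language model (the example sentence is invented for illustration):
```python
# Illustrative fill-mask query; RoBERTa-style tokenizers use the "<mask>" token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="s50227harry/TCFD-BERT")
for prediction in fill_mask("The company discloses its climate-related <mask> and opportunities."):
    print(prediction["token_str"], round(prediction["score"], 3))
```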
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.865 | 0.37 | 500 | 1.4460 |
| 1.6601 | 0.73 | 1000 | 1.3491 |
| 1.593 | 1.1 | 1500 | 1.3190 |
| 1.5336 | 1.46 | 2000 | 1.2801 |
| 1.5081 | 1.83 | 2500 | 1.2446 |
| 1.4547 | 2.19 | 3000 | 1.2281 |
| 1.4358 | 2.56 | 3500 | 1.2065 |
| 1.4121 | 2.92 | 4000 | 1.1874 |
| 1.396 | 3.29 | 4500 | 1.1817 |
| 1.383 | 3.65 | 5000 | 1.1747 |
| 1.3662 | 4.02 | 5500 | 1.1717 |
| 1.3545 | 4.38 | 6000 | 1.1567 |
| 1.3441 | 4.75 | 6500 | 1.1325 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1 |
frozenwalker/SciFive_pubmedqa_question_generation_nmconcept_modifies | eb49c53afc288e386add674643b3e320db035532 | 2022-04-19T12:25:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | frozenwalker | null | frozenwalker/SciFive_pubmedqa_question_generation_nmconcept_modifies | 2 | null | transformers | 25,588 | Entry not found |
csikasote/xls-r-1b-bemba-10hrs | d3ca685683f75b5db70c893e961bee1743ad1f91 | 2022-04-19T22:51:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | csikasote | null | csikasote/xls-r-1b-bemba-10hrs | 2 | null | transformers | 25,589 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xls-r-1b-bemba-10hrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-1b-bemba-10hrs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2350
- Wer: 0.3524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2547 | 0.54 | 400 | 0.4199 | 0.5888 |
| 0.5422 | 1.07 | 800 | 0.2689 | 0.4360 |
| 0.4154 | 1.61 | 1200 | 0.2342 | 0.4008 |
| 0.4075 | 2.15 | 1600 | 0.2172 | 0.3579 |
| 0.3326 | 2.68 | 2000 | 0.2151 | 0.3603 |
| 0.2837 | 3.22 | 2400 | 0.2117 | 0.3505 |
| 0.2688 | 3.76 | 2800 | 0.2040 | 0.3559 |
| 0.2401 | 4.3 | 3200 | 0.2099 | 0.3445 |
| 0.2176 | 4.83 | 3600 | 0.1973 | 0.3299 |
| 0.1913 | 5.37 | 4000 | 0.2123 | 0.3432 |
| 0.1683 | 5.91 | 4400 | 0.2032 | 0.3358 |
| 0.1445 | 6.44 | 4800 | 0.2350 | 0.3524 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
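The WER values above come from the training run itself; as a hedged, stand-alone sketch of how word error rate can be computed for a single utterance, the `jiwer` package compares a reference transcript against a hypothesis (both strings below are invented placeholders):
```python
# Word error rate between a reference transcript and a model hypothesis.
# Real evaluation would aggregate over the whole test set instead of one pair.
from jiwer import wer

reference = "umwana alelila"    # hypothetical ground-truth transcript
hypothesis = "umwana alelela"   # hypothetical model output

print(f"WER: {wer(reference, hypothesis):.3f}")
```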
|
PSW/min_sim_del_seed27 | 1db0ada4fb4e077405ee5dc4e0ee8c4ba475a792 | 2022-04-19T15:00:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_sim_del_seed27 | 2 | null | transformers | 25,590 | Entry not found |
GPL/arguana-msmarco-distilbert-gpl | d7e2844f4f37b1d59bbfc944e065a87ec3948eba | 2022-04-19T15:04:16.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/arguana-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,591 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/arguana-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/arguana-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
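As a short, hedged extension of the snippet above (not part of the original template), the resulting embeddings can be compared with cosine similarity for a toy semantic-search setup; the query and passages are invented examples:
```python
# Rank candidate passages against a query by cosine similarity of embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('GPL/arguana-msmarco-distilbert-gpl')
query_emb = model.encode("Should school uniforms be mandatory?", convert_to_tensor=True)
passage_embs = model.encode(
    ["Uniforms reduce peer pressure about clothing.",
     "The stadium opens at nine in the morning."],
    convert_to_tensor=True,
)

print(util.cos_sim(query_emb, passage_embs))  # shape: (1, num_passages)
```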
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/arguana-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/arguana-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/climate-fever-msmarco-distilbert-gpl | cbf25cf4908c2fdc8d096a5322fffd4c072d7737 | 2022-04-19T15:13:11.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/climate-fever-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,592 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/climate-fever-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/climate-fever-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/climate-fever-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/climate-fever-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
irmgnrtop/roberta-finetuned-error-detection-accelerate | 74f24c7ce67f0a7a0f66a30ba16f25d86d038793 | 2022-04-19T20:26:08.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | irmgnrtop | null | irmgnrtop/roberta-finetuned-error-detection-accelerate | 2 | null | transformers | 25,593 | Entry not found |
apkbala107/electrabasetamilpos | 94057b9641971d6767254228a32b84d70d5e8dbc | 2022-04-19T15:46:24.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"license:cc",
"autotrain_compatible"
] | token-classification | false | apkbala107 | null | apkbala107/electrabasetamilpos | 2 | null | transformers | 25,594 | ---
license: cc
---
|
GPL/fever-msmarco-distilbert-gpl | 799f359e1f788489b6e392fc38405f7548f17847 | 2022-04-19T15:13:47.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/fever-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,595 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/fever-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/fever-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/fever-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/fever-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/hotpotqa-msmarco-distilbert-gpl | 77dfa85fc8b646ec5bac71fe8e910354453777a3 | 2022-04-19T15:14:05.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/hotpotqa-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,596 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/hotpotqa-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/hotpotqa-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/hotpotqa-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/hotpotqa-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/nfcorpus-msmarco-distilbert-gpl | 1885c28512386d8a61866ff17b5dba6334223e97 | 2022-04-19T15:14:42.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/nfcorpus-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,597 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/nfcorpus-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/nfcorpus-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/nfcorpus-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/nfcorpus-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/trec-news-msmarco-distilbert-gpl | 267c50a22487551bc54191d729dbeec891ff399e | 2022-04-19T15:15:55.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/trec-news-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,598 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/trec-news-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/trec-news-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/trec-news-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/trec-news-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/climate-fever-tsdae-msmarco-distilbert-gpl | f3ce30a2f7abd9636d12ac23b4fb8ad05e7cab6b | 2022-04-19T15:46:28.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/climate-fever-tsdae-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,599 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/climate-fever-tsdae-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/climate-fever-tsdae-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/climate-fever-tsdae-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/climate-fever-tsdae-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |