modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
openclimatefix/dgmr-latent-conditioning-stack | 81151f1561c1334263ee65c8eb3f38856c776f78 | 2022-06-20T08:24:16.000Z | [
"pytorch",
"transformers"
] | null | false | openclimatefix | null | openclimatefix/dgmr-latent-conditioning-stack | 95 | null | transformers | 4,700 | Entry not found |
othrif/wav2vec2-large-xlsr-egyptian | 4cfa2d83da399280eaba031bdcbec7b73613442e | 2021-03-29T02:46:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"arz",
"dataset:https://arabicspeech.org/",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | othrif | null | othrif/wav2vec2-large-xlsr-egyptian | 95 | null | transformers | 4,701 | ---
language: arz
datasets:
- https://arabicspeech.org/
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Egyptian Arabic by Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: arabicspeech.org MGB-3
type: arabicspeech.org MGB-3
args: ar
metrics:
- name: Test WER
type: wer
value: 55.2
---
# Wav2Vec2-Large-XLSR-53-Egyptian-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian")
model.to("cuda")
chars_to_ignore_regex = '[\؛\—\_get\«\»\ـ\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭,\؟]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 55.2
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/othrif/xlsr-wav2vec2) |
tartuNLP/EstBERT_NER | fc6f195676c5ae365aac5d12d820dd9bb107a3e2 | 2022-05-06T06:29:01.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"et",
"arxiv:2011.04784",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | tartuNLP | null | tartuNLP/EstBERT_NER | 95 | null | transformers | 4,702 | ---
language: et
license: cc-by-4.0
widget:
- text: "Eesti President on Alar Karis."
---
# EstBERT_NER
## Model description
EstBERT_NER is a fine-tuned EstBERT model that can be used for Named Entity Recognition. This model was trained on the Estonian NER dataset created by [Tkachenko et al](https://www.aclweb.org/anthology/W13-2412.pdf). It can recognize three types of entities: locations (LOC), organizations (ORG) and persons (PER).
## How to use
You can use this model with the Transformers NER pipeline. Post-processing of the results may be necessary, as the model occasionally tags subword tokens as entities.
```
from transformers import BertTokenizer, BertForTokenClassification
from transformers import pipeline
tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER')
bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER')
nlp = pipeline("ner", model=bertner, tokenizer=tokenizer)
sentence = 'Eesti Ekspressi teada on Eesti Pank uurinud Hansapanga tehinguid , mis toimusid kaks aastat tagasi suvel ja mille käigus voolas panka ligi miljardi krooni ulatuses kahtlast raha .'
ner_results = nlp(sentence)
print(ner_results)
```
```
[{'word': 'Eesti', 'score': 0.9964128136634827, 'entity': 'B-ORG', 'index': 1}, {'word': 'Ekspressi', 'score': 0.9978809356689453, 'entity': 'I-ORG', 'index': 2}, {'word': 'Eesti', 'score': 0.9988121390342712, 'entity': 'B-ORG', 'index': 5}, {'word': 'Pank', 'score': 0.9985784292221069, 'entity': 'I-ORG', 'index': 6}, {'word': 'Hansapanga', 'score': 0.9979034662246704, 'entity': 'B-ORG', 'index': 8}]
```
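If you prefer entity-level spans instead of per-token tags, one option (a sketch, not part of the original card, assuming a reasonably recent `transformers` version that supports `aggregation_strategy`) is to let the pipeline merge subword tokens:
```
from transformers import BertTokenizer, BertForTokenClassification, pipeline

tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER')
bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER')

# "simple" aggregation merges consecutive subword tokens that share an entity label
nlp = pipeline("ner", model=bertner, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp('Eesti Ekspressi teada on Eesti Pank uurinud Hansapanga tehinguid.'))
```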
## BibTeX entry and citation info
```
@misc{tanvir2020estbert,
title={EstBERT: A Pretrained Language-Specific BERT for Estonian},
author={Hasan Tanvir and Claudia Kittask and Kairit Sirts},
year={2020},
eprint={2011.04784},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
yoshitomo-matsubara/bert-base-uncased-stsb | 8fc0be283c7af38c97c6b7151d921a1f84f647b4 | 2021-05-29T21:58:50.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:stsb",
"transformers",
"stsb",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-stsb | 95 | null | transformers | 4,703 | ---
language: en
tags:
- bert
- stsb
- glue
- torchdistill
license: apache-2.0
datasets:
- stsb
metrics:
- pearson correlation
- spearman correlation
---
`bert-base-uncased` fine-tuned on STS-B dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/mse/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
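The card itself does not include an inference snippet; the following is a minimal, hedged sketch (standard Transformers sequence-regression loading, not taken from the original repository) for scoring a sentence pair on the STS-B similarity scale:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yoshitomo-matsubara/bert-base-uncased-stsb")
model = AutoModelForSequenceClassification.from_pretrained("yoshitomo-matsubara/bert-base-uncased-stsb")

inputs = tokenizer("A man is playing a guitar.", "A person plays the guitar.", return_tensors="pt")
with torch.no_grad():
    # single regression logit, roughly on the STS-B 0-5 similarity scale
    score = model(**inputs).logits.squeeze().item()
print(score)
```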
|
mt-empty/english-assyrian | 27fec608a66ff550100bfc7d56001a8a71db94d5 | 2022-03-14T11:01:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"as",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | mt-empty | null | mt-empty/english-assyrian | 95 | null | transformers | 4,704 | ---
language:
- en
- as
tags:
- translation
license: apache-2.0
metrics:
- sacrebleu
---
https://github.com/mt-empty/assyrian-translation-model
This is an English to Assyrian/Eastern Syriac machine translation model, it uses [English to Arabic](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) model as the base model.
Although the project's aim is to build an English to Assyrian model covering the dialects that fall under [Northeastern Neo-Aramaic](https://en.wikipedia.org/wiki/Northeastern_Neo-Aramaic), the current model mostly provides translation for Classical Syriac. This model is a good initial step, but I hope future work will bring it more in line with Assyrian dialects.
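The card does not include a usage example; since the model is Marian-based, a hedged sketch (not from the original card) using the standard translation pipeline would look like this:
```python
from transformers import pipeline

translator = pipeline("translation", model="mt-empty/english-assyrian")
print(translator("Good morning, how are you?"))
```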
|
Helsinki-NLP/opus-mt-tc-big-en-ar | e2140a8272b3ea1e147084a35117649263b4408d | 2022-06-01T13:02:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"en",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-ar | 95 | null | transformers | 4,705 | ---
language:
- ar
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-ar
results:
- task:
name: Translation eng-ara
type: translation
args: eng-ara
dataset:
name: flores101-devtest
type: flores_101
args: eng ara devtest
metrics:
- name: BLEU
type: bleu
value: 29.4
- task:
name: Translation eng-ara
type: translation
args: eng-ara
dataset:
name: tatoeba-test-v2020-07-28
type: tatoeba_mt
args: eng-ara
metrics:
- name: BLEU
type: bleu
value: 20.0
- task:
name: Translation eng-ara
type: translation
args: eng-ara
dataset:
name: tico19-test
type: tico19-test
args: eng-ara
metrics:
- name: BLEU
type: bleu
value: 30.0
---
# opus-mt-tc-big-en-ar
Neural machine translation model for translating from English (en) to Arabic (ar).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): afb ara
* valid target language labels: >>afb<< >>ara<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-ara README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>afb<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ara<< I can't help you because I'm busy.",
">>ara<< I have to write a letter. Do you have some paper?"
]
model_name = "pytorch-models/opus-mt-tc-big-en-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# لا أستطيع مساعدتك لأنني مشغول.
# يجب أن أكتب رسالة هل لديك بعض الأوراق؟
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ar")
print(pipe(">>ara<< I can't help you because I'm busy."))
# expected output: لا أستطيع مساعدتك لأنني مشغول.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-ara | tatoeba-test-v2021-08-07 | 0.48813 | 19.8 | 10305 | 61356 |
| eng-ara | flores101-devtest | 0.61154 | 29.4 | 1012 | 21357 |
| eng-ara | tico19-test | 0.60075 | 30.0 | 2100 | 51339 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 16:37:31 EEST 2022
* port machine: LM0-400-22516.local
|
north/t5_xl_NCC_lm | b445c8c4dd5958f859de7f9a6587758a49b575db | 2022-06-01T19:41:43.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | north | null | north/t5_xl_NCC_lm | 95 | null | transformers | 4,706 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5 models are a set of Norwegian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) frameworks and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|✔|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xl/norwegian_NCC_plus_English_pluss100k_lm_t5x_xl/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluation. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10,000 steps, while the rest were trained for 5,000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500,000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation or NLI, it is well documented that there is a clear benefit to a step of unsupervised LM training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or on a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format.
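As a quick smoke test, a minimal loading sketch is shown below (not from the original card; the prompt is just the widget example). The pretrained-only checkpoint fills `<extra_id_N>` spans; for real tasks you would finetune it first, as described above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("north/t5_xl_NCC_lm")
model = AutoModelForSeq2SeqLM.from_pretrained("north/t5_xl_NCC_lm")

text = "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=30)
# keep special tokens so the predicted <extra_id_N> spans remain visible
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```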
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
RohanJoshi28/twitter_sentiment_analysisv1 | cc7bd2e55a7f39dea523a6f50b75005ede4120ab | 2022-05-29T20:14:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | RohanJoshi28 | null | RohanJoshi28/twitter_sentiment_analysisv1 | 95 | null | transformers | 4,707 | Entry not found |
FigoMe/news-gpt-neo-1.3B-keywords-line-by-line-reverse | 4ef8e25b4d39ba58612e36e48c8becd9785e6fee | 2022-06-01T17:15:31.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | FigoMe | null | FigoMe/news-gpt-neo-1.3B-keywords-line-by-line-reverse | 95 | null | transformers | 4,708 | Entry not found |
Peltarion/dnabert-distilbert | eb65755814f5e0e934ecf74a573e7bac2b661ef3 | 2022-07-02T11:28:16.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"DNA",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | Peltarion | null | Peltarion/dnabert-distilbert | 95 | null | transformers | 4,709 | ---
tags:
- DNA
license: mit
---
## DistilDNA model
This is a distilled version of [DNABERT](https://github.com/jerryji1993/DNABERT) by using DistilBERT technique. It has a BERT architecture with 6 layers and 768 hidden units, pre-trained on 6-mer DNA sequences. For more details on the pre-training scheme and methods, please check the original [thesis report](http://www.diva-portal.org/smash/record.jsf?dswid=846&pid=diva2%3A1676068&c=1&searchType=SIMPLE&language=en&query=joana+palés&af=%5B%5D&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all).
## How to Use
The model can be used to fine-tune on a downstream genomic task, e.g. promoter identification.
```python
import torch
from transformers import DistilBertForSequenceClassification
model = DistilBertForSequenceClassification.from_pretrained('Peltarion/dnabert-distilbert')
```
More details on how to fine-tune the model, dataset and additional source codes are available on [github.com/joanaapa/Distillation-DNABERT-Promoter](https://github.com/joanaapa/Distillation-DNABERT-Promoter).
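To make the fine-tuning setup concrete, here is a hedged sketch (not from the original card). It assumes the repository ships a tokenizer whose vocabulary is built from space-separated 6-mers, as in the original DNABERT; the classification head is randomly initialized until you fine-tune it, e.g. on promoter identification:
```python
import torch
from transformers import AutoTokenizer, DistilBertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Peltarion/dnabert-distilbert")
model = DistilBertForSequenceClassification.from_pretrained("Peltarion/dnabert-distilbert", num_labels=2)

sequence = "ATGGCGTACGATCGTAGCTA"
# turn the raw sequence into overlapping 6-mers separated by spaces (assumed input format)
kmers = " ".join(sequence[i:i + 6] for i in range(len(sequence) - 5))
inputs = tokenizer(kmers, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # e.g. promoter vs. non-promoter scores after fine-tuning
print(logits)
```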
|
ckiplab/bert-base-han-chinese | 274f25f098e41e7fe2d1a8f032cde0460c7dc8c8 | 2022-07-04T08:04:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"transformers",
"lm-head",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | ckiplab | null | ckiplab/bert-base-han-chinese | 95 | null | transformers | 4,710 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Base Han Chinese
Pretrained model on Ancient Chinese language using a masked language modeling (MLM) objective.
## Homepage
* [ckiplab/han-transformers](https://github.com/ckiplab/han-transformers)
## Training Datasets
The copyright of the datasets belongs to the Institute of Linguistics, Academia Sinica.
* [中央研究院上古漢語標記語料庫](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/akiwi/kiwi.sh)
* [中央研究院中古漢語語料庫](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/dkiwi/kiwi.sh)
* [中央研究院近代漢語語料庫](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/pkiwi/kiwi.sh)
* [中央研究院現代漢語語料庫](http://asbc.iis.sinica.edu.tw)
## Contributors
* Chin-Tung Lin at [CKIP](https://ckip.iis.sinica.edu.tw)
## Usage
* Using our model in your script
```python
from transformers import (
AutoTokenizer,
AutoModel,
)
tokenizer = AutoTokenizer.from_pretrained("ckiplab/bert-base-han-chinese")
model = AutoModel.from_pretrained("ckiplab/bert-base-han-chinese")
```
* Using our model for inference
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ckiplab/bert-base-han-chinese')
>>> unmasker("黎[MASK]於變時雍。")
[{'sequence': '黎 民 於 變 時 雍 。',
'score': 0.14885780215263367,
'token': 3696,
'token_str': '民'},
{'sequence': '黎 庶 於 變 時 雍 。',
'score': 0.0859643816947937,
'token': 2433,
'token_str': '庶'},
{'sequence': '黎 氏 於 變 時 雍 。',
'score': 0.027848130092024803,
'token': 3694,
'token_str': '氏'},
{'sequence': '黎 人 於 變 時 雍 。',
'score': 0.023678112775087357,
'token': 782,
'token_str': '人'},
{'sequence': '黎 生 於 變 時 雍 。',
'score': 0.018718384206295013,
'token': 4495,
'token_str': '生'}]
``` |
furrutiav/beto_coherence | c0f23f38c50776b194ada97ca3784077b6b0a402 | 2022-07-12T00:29:30.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:gpl-2.0"
] | feature-extraction | false | furrutiav | null | furrutiav/beto_coherence | 95 | null | transformers | 4,711 | ---
license: gpl-2.0
---
|
neulab/gpt2-large-finetuned-wikitext103 | 8ad278e42033da88bd34b5e810390c88bef565c3 | 2022-07-14T15:38:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2201.12431",
"transformers"
] | text-generation | false | neulab | null | neulab/gpt2-large-finetuned-wikitext103 | 95 | null | transformers | 4,712 | This is a `gpt2-large` model, finetuned on the Wikitext-103 dataset.
It achieves a perplexity of **10.56** using a "sliding window" context, using the `run_clm.py` script at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers).
| Base LM: | `distilgpt2` | `gpt2` |
| :--- | ----: | ---: |
| base perplexity | 18.25 | 14.84 |
| + kNN-LM | 15.03 | 12.57 |
| + RetoMaton | **14.70** | **12.46** |
This model was released as part of the paper ["Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval"](https://arxiv.org/pdf/2201.12431.pdf) (ICML'2022).
For more information, see: [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers)
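The card does not include evaluation code; below is a hedged approximation of the stride-based "sliding window" perplexity evaluation, following the standard Hugging Face perplexity recipe rather than the repository's `run_clm.py`, so exact numbers may differ slightly. The `text` placeholder stands in for the concatenated Wikitext-103 validation split.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("neulab/gpt2-large-finetuned-wikitext103")
model = AutoModelForCausalLM.from_pretrained("neulab/gpt2-large-finetuned-wikitext103").to(device).eval()

text = "..."  # placeholder: e.g. the concatenated Wikitext-103 validation texts
encodings = tokenizer(text, return_tensors="pt")
seq_len = encodings.input_ids.size(1)
max_length, stride = model.config.n_positions, 512

nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # tokens scored for the first time in this window
    input_ids = encodings.input_ids[:, begin:end].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask overlapping context so it is not scored twice
    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss * trg_len)
    prev_end = end
    if end == seq_len:
        break

print("perplexity:", torch.exp(torch.stack(nlls).sum() / prev_end).item())
```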
If you use this model, please cite:
```
@inproceedings{alon2022neuro,
title={Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval},
author={Alon, Uri and Xu, Frank and He, Junxian and Sengupta, Sudipta and Roth, Dan and Neubig, Graham},
booktitle={International Conference on Machine Learning},
pages={468--485},
year={2022},
organization={PMLR}
}
``` |
Amalq/schizophrenia-roberta-large | 5914d6c8121f0339694317d3d9f321d29e649d16 | 2022-07-26T21:38:32.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:SMHD",
"dataset:Schizophrenia Reddit",
"arxiv:1806.05258",
"transformers",
"Transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Amalq | null | Amalq/schizophrenia-roberta-large | 95 | null | transformers | 4,713 | ---
language: en
tags:
- Transformers
license: apache-2.0
datasets:
- SMHD
- Schizophrenia Reddit
---
# SchizophreniaRoberta model
is a model initialized with [roberta-large](https://huggingface.co/roberta-large) and trained with Schizophrenia Reddit, a subset of the [Self-Reported Mental Health Diagnoses (SMHD) dataset](https://arxiv.org/pdf/1806.05258.pdf), which consists of Reddit posts by patients with schizophrenia only or schizophrenia with other mental disorders, plus matched controls. We follow the standard pretraining protocols of RoBERTa with [Huggingface’s Transformers library](https://github.com/huggingface/transformers).
## Usage

Load the model via [Huggingface’s Transformers library](https://github.com/huggingface/transformers):

    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("Amalq/schizophrenia-roberta-large")
    model = AutoModel.from_pretrained("Amalq/schizophrenia-roberta-large")
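As a quick sanity check (a hedged sketch, not part of the original card), the checkpoint can also be exercised through the fill-mask pipeline:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Amalq/schizophrenia-roberta-large")
print(unmasker("I was diagnosed with <mask> last year."))
```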
Perplexity of this model is: 4.43 |
Helsinki-NLP/opus-mt-cs-de | f5a1b1443dc5381df3a0a83d790b3c2eb16cf811 | 2021-09-09T21:29:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cs-de | 94 | null | transformers | 4,714 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-de
* source languages: cs
* target languages: de
* OPUS readme: [cs-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.cs.de | 22.0 | 0.525 |
| news-test2008.cs.de | 21.1 | 0.520 |
| newstest2009.cs.de | 22.2 | 0.525 |
| newstest2010.cs.de | 22.1 | 0.527 |
| newstest2011.cs.de | 21.6 | 0.515 |
| newstest2012.cs.de | 22.2 | 0.516 |
| newstest2013.cs.de | 24.8 | 0.538 |
| newstest2019-csde.cs.de | 23.6 | 0.530 |
| Tatoeba.cs.de | 51.6 | 0.687 |
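The card lists benchmarks but no usage snippet; a generic MarianMT sketch (not part of the original card) is:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cs-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dobrý den, jak se máte?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```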
|
Helsinki-NLP/opus-mt-ko-es | 6a5a499d1635016abfe1c289a26dd039b55cf5ae | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ko",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ko-es | 94 | null | transformers | 4,715 | ---
language:
- ko
- es
tags:
- translation
license: apache-2.0
---
### kor-spa
* source group: Korean
* target group: Spanish
* OPUS readme: [kor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-spa/README.md)
* model: transformer-align
* source language(s): kor kor_Hang kor_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kor.spa | 31.3 | 0.521 |
### System Info:
- hf_name: kor-spa
- source_languages: kor
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ko', 'es']
- src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.test.txt
- src_alpha3: kor
- tgt_alpha3: spa
- short_pair: ko-es
- chrF2_score: 0.521
- bleu: 31.3
- brevity_penalty: 0.95
- ref_len: 6805.0
- src_name: Korean
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: ko
- tgt_alpha2: es
- prefer_old: False
- long_pair: kor-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-nl-es | 2b106c525d9a4b17769f562fde0aac3997aad530 | 2021-09-10T13:59:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nl-es | 94 | null | transformers | 4,716 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-es
* source languages: nl
* target languages: es
* OPUS readme: [nl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.es | 51.6 | 0.698 |
|
funnel-transformer/intermediate-base | 40356a7e0969916d0b958333c61ba21f611bcab8 | 2020-12-11T21:40:21.000Z | [
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | funnel-transformer | null | funnel-transformer/intermediate-base | 94 | null | transformers | 4,717 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer intermediate model (B6-6-6 without decoder)
Pretrained model on English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `intermediate` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad | 35d84905f0e8a5f6ee25104ed20fbed73c299103 | 2021-05-19T20:06:17.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | huggingface | null | huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad | 94 | 2 | transformers | 4,718 | Entry not found |
junnyu/roformer_chinese_sim_char_ft_small | c2f6b597902a58686723b5bee929f150e51fa011 | 2022-04-15T03:51:50.000Z | [
"pytorch",
"roformer",
"text-generation",
"zh",
"transformers",
"tf2.0"
] | text-generation | false | junnyu | null | junnyu/roformer_chinese_sim_char_ft_small | 94 | 2 | transformers | 4,719 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
inference: False
---
# Installation
- pip install roformer==0.4.3
# Usage
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
'''Generate n sentences similar to the input text, then return the k most similar ones.
Approach: generate candidates with seq2seq sampling, then score similarity with the encoder and sort.
'''
# generate candidate sentences similar to the input
r = []
inputs1 = tokenizer(text, return_tensors="pt")
for _ in range(n):
inputs1.to(device)
output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "") # strip spaces and remove the original input text
r.append(output)
# rank the candidate sentences by encoder similarity
r = [i for i in set(r) if i != text and len(i) > 0]
r = [text] + r
inputs2 = tokenizer(r, padding=True, return_tensors="pt")
with torch.no_grad():
inputs2.to(device)
outputs = model(**inputs2)
Z = outputs.pooler_output.cpu().numpy()
Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
argsort = np.dot(Z[1:], -Z[0]).argsort()
return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
``` |
madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1 | 1c1e994ef2a74026daeb86cb7a562bbf9475f645 | 2021-06-16T17:12:46.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1 | 94 | null | transformers | 4,720 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
-
-
datasets:
- squad_v2
metrics:
- squad_v2
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## bert-large-uncased-whole-word-masking model fine-tuned on SQuAD v2
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 16.0%** of the original weights.
The model contains **24.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **2.63x as fast as bert-large-uncased-whole-word-masking** on the evaluation.
This is possible because the pruning method lead to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1/raw/main/model_card/density_info.js" id="0e65059e-a61d-4561-947e-b8f47b818bb8"></script></div>
In terms of accuracy, its **F1 is 82.57**, compared with 85.85 for bert-large-uncased-whole-word-masking, an **F1 drop of 3.28**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-large-uncased-whole-word-masking) checkpoint on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 190 heads were removed out of a total of 384 (49.5%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1/raw/main/model_card/pruning_info.js" id="f7ae9ec9-d050-46d0-b237-3025165e9504"></script></div>
## Details of the SQuAD 2.0 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD 2.0 | train | 130.0K |
| SQuAD 2.0 | eval | 11.9k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `1084MB` (original BERT: `1228.0MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **79.70** | **82.83** | **-4.13**|
| **F1** | **82.57** | **85.85** | **-3.28**|
```
{
"HasAns_exact": 74.8144399460189,
"HasAns_f1": 80.555306012496,
"HasAns_total": 5928,
"NoAns_exact": 84.57527333894029,
"NoAns_f1": 84.57527333894029,
"NoAns_total": 5945,
"best_exact": 79.70184452118251,
"best_exact_thresh": 0.0,
"best_f1": 82.56816761071966,
"best_f1_thresh": 0.0,
"exact": 79.70184452118251,
"f1": 82.56816761071981,
"total": 11873
}
```
## Example Usage
Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1",
tokenizer="madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1"
)
print("bert-large-uncased-whole-word-masking parameters: 445.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` |
microsoft/trocr-large-stage1 | 263390badaa806a561702715213cae4a5f059267 | 2022-07-01T07:39:08.000Z | [
"pytorch",
"vision-encoder-decoder",
"arxiv:2109.10282",
"transformers",
"trocr",
"image-to-text"
] | image-to-text | false | microsoft | null | microsoft/trocr-large-stage1 | 94 | 2 | transformers | 4,721 | ---
tags:
- trocr
- image-to-text
---
# TrOCR (large-sized model, pre-trained only)
TrOCR pre-trained only model. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
Disclaimer: The team releasing TrOCR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
import torch
# load image from the IAM database
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-stage1')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-stage1')
# training
pixel_values = processor(image, return_tensors="pt").pixel_values # Batch size 1
decoder_input_ids = torch.tensor([[model.config.decoder.decoder_start_token_id]])
outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
```
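The snippet above only shows the training-style forward pass. For completeness, a hedged inference sketch follows (not from the original card, and assumption-laden: the stage1 checkpoint is pre-trained only, so generated text may be rough, and the decoder start/pad token ids may need to be set explicitly as shown):
```python
# continues from the variables defined above (processor, model, pixel_values)
model.config.decoder_start_token_id = model.config.decoder.decoder_start_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
generated_ids = model.generate(pixel_values, max_length=32)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```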
### BibTeX entry and citation info
```bibtex
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mrm8488/t5-base-finetuned-qasc | 7c26f8e64578318f9e0c3223880a1cc68739ddc7 | 2020-12-11T21:55:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:qasc",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-qasc | 94 | 1 | transformers | 4,722 | ---
language: en
datasets:
- qasc
---
# T5-base fine-tuned on QASC
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QASC](https://allenai.org/data/qasc) for **QA** (via *sentence composition*) downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the dataset 📚
**Question Answering via Sentence Composition** (QASC) is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The **context** passed to the *encoder* is the combination of the 2 *facts* (`fact1` and `fact2`). The **question** is just the `formatted_question` field. The **answer** passed to the *decoder* is the `text` of the right answer instead of the `label` (A, B, C... see the `choices` field). More details about the dataset format/fields [here](https://huggingface.co/nlp/viewer/?dataset=qasc)
## Metrics on validation set 📋
| Metric | Score |
|--------|-------|
|Accuracy (EM) | **97.73**|
## Model in Action 🚀
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-qasc")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-qasc")
def get_response(question, context, max_length=64):
input_text = 'question: %s context: %s' % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
return tokenizer.decode(output[0])
fact_1 = 'a watch is used for measuring time'
fact_2 = 'Times are measured in seconds.'
context = fact_1 + ' ' + fact_2
question = 'What can be used to measure seconds? (A) Watch (B) seconds (C) fluid (D) Ruler (E) goggles (F) glasses (G) Drill (H) Scale'
get_response(question, context)
# output: 'Watch'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
speechbrain/sepformer-whamr16k | 0b73df532deb3215baf372ca5a90512ba0c75c2a | 2021-11-30T00:53:21.000Z | [
"en",
"dataset:WHAMR!",
"arxiv:2010.13154",
"arxiv:2106.04624",
"speechbrain",
"audio-to-audio",
"audio-source-separation",
"Source Separation",
"Speech Separation",
"WHAM!",
"SepFormer",
"Transformer",
"pytorch",
"license:apache-2.0"
] | audio-to-audio | false | speechbrain | null | speechbrain/sepformer-whamr16k | 94 | 1 | speechbrain | 4,723 | ---
language: "en"
thumbnail:
tags:
- audio-to-audio
- audio-source-separation
- Source Separation
- Speech Separation
- WHAM!
- SepFormer
- Transformer
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- WHAMR!
metrics:
- SI-SNRi
- SDRi
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on WHAMR! (16k sampling frequency)
This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain and pretrained on the [WHAMR!](http://wham.whisper.ai/) dataset at a 16 kHz sampling frequency. WHAMR! is essentially a version of the WSJ0-Mix dataset with environmental noise and reverberation added. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model achieves 13.5 dB SI-SNRi on the test set of the WHAMR! dataset.
| Release | Test-Set SI-SNRi | Test-Set SDRi |
|:-------------:|:--------------:|:--------------:|
| 30-03-21 | 13.5 dB | 13.0 dB |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Perform source separation on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-whamr16k", savedir='pretrained_models/sepformer-whamr16k')
# for custom file, change path
est_sources = model.separate_file(path='speechbrain/sepformer-whamr16k/test_mixture16k.wav')
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 16000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 16000)
```
The system expects input recordings sampled at 16kHz (single channel).
If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
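For example, reusing the call from the snippet above:

```python
# Same separation model, loaded onto the GPU.
model = separator.from_hparams(
    source="speechbrain/sepformer-whamr16k",
    savedir="pretrained_models/sepformer-whamr16k",
    run_opts={"device": "cuda"},
)
```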
### Training
The model was trained with SpeechBrain (fc2eabb7).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/WHAMandWHAMR/separation/
python train.py hparams/sepformer-whamr.yaml --data_folder=your_data_folder --sample_rate=16000
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1QiQhp1vi5t4UfNpNETA48_OmPiXnUy8O?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ |
uclanlp/plbart-java-clone-detection | f05211c8f2ef087522c5ac571c69b2e377b39371 | 2021-11-09T17:18:43.000Z | [
"pytorch",
"plbart",
"text-classification",
"transformers"
] | text-classification | false | uclanlp | null | uclanlp/plbart-java-clone-detection | 94 | null | transformers | 4,724 | Entry not found |
yoshitomo-matsubara/bert-base-uncased-cola | 106940f94f17e702ae37d740922d679677667c3c | 2021-05-29T21:40:15.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:cola",
"transformers",
"cola",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-cola | 94 | null | transformers | 4,725 | ---
language: en
tags:
- bert
- cola
- glue
- torchdistill
license: apache-2.0
datasets:
- cola
metrics:
- matthews correlation
---
`bert-base-uncased` fine-tuned on CoLA dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
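A minimal way to try the checkpoint for acceptability classification is shown below; the exact label names returned come from the model config and are not documented in this card, so treat the output as illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yoshitomo-matsubara/bert-base-uncased-cola",
)
# CoLA is a binary acceptability task: the pipeline returns one label with a score.
print(classifier("The book was written by the author."))
```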
|
nickmuchi/vit-base-xray-pneumonia | 7e99827336046d85c0f85884405034df80b08ebd | 2022-03-09T05:43:35.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:chest xrays",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nickmuchi | null | nickmuchi/vit-base-xray-pneumonia | 94 | null | transformers | 4,726 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- chest xrays
widget:
- src: https://drive.google.com/uc?id=1yqnhD4Wjt4Y_NGLtijTGGaaw9GL497kQ
example_title: PNEUMONIA
- src: https://drive.google.com/uc?id=1xjcIEDb8kuSd4wF44gCEgsc0PfRvs53m
example_title: NORMAL
metrics:
- accuracy
model-index:
- name: vit-base-xray-pneumonia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-xray-pneumonia
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [chest-xray-pneumonia](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3387
- Accuracy: 0.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1233 | 0.31 | 100 | 1.1662 | 0.6651 |
| 0.0868 | 0.61 | 200 | 0.3387 | 0.9006 |
| 0.1387 | 0.92 | 300 | 0.5297 | 0.8237 |
| 0.1264 | 1.23 | 400 | 0.4566 | 0.8590 |
| 0.0829 | 1.53 | 500 | 0.6832 | 0.8285 |
| 0.0734 | 1.84 | 600 | 0.4886 | 0.8157 |
| 0.0132 | 2.15 | 700 | 1.3639 | 0.7292 |
| 0.0877 | 2.45 | 800 | 0.5258 | 0.8846 |
| 0.0516 | 2.76 | 900 | 0.8772 | 0.8013 |
| 0.0637 | 3.07 | 1000 | 0.4947 | 0.8558 |
| 0.0022 | 3.37 | 1100 | 1.0062 | 0.8045 |
| 0.0555 | 3.68 | 1200 | 0.7822 | 0.8285 |
| 0.0405 | 3.99 | 1300 | 1.9288 | 0.6779 |
| 0.0012 | 4.29 | 1400 | 1.2153 | 0.7981 |
| 0.0034 | 4.6 | 1500 | 1.8931 | 0.7308 |
| 0.0339 | 4.91 | 1600 | 0.9071 | 0.8590 |
| 0.0013 | 5.21 | 1700 | 1.6266 | 0.7580 |
| 0.0373 | 5.52 | 1800 | 1.5252 | 0.7676 |
| 0.001 | 5.83 | 1900 | 1.2748 | 0.7869 |
| 0.0005 | 6.13 | 2000 | 1.2103 | 0.8061 |
| 0.0004 | 6.44 | 2100 | 1.3133 | 0.7981 |
| 0.0004 | 6.75 | 2200 | 1.2200 | 0.8045 |
| 0.0004 | 7.06 | 2300 | 1.2834 | 0.7933 |
| 0.0004 | 7.36 | 2400 | 1.3080 | 0.7949 |
| 0.0003 | 7.67 | 2500 | 1.3814 | 0.7917 |
| 0.0004 | 7.98 | 2600 | 1.2853 | 0.7965 |
| 0.0003 | 8.28 | 2700 | 1.3644 | 0.7933 |
| 0.0003 | 8.59 | 2800 | 1.3137 | 0.8013 |
| 0.0003 | 8.9 | 2900 | 1.3507 | 0.7997 |
| 0.0003 | 9.2 | 3000 | 1.3751 | 0.7997 |
| 0.0003 | 9.51 | 3100 | 1.3884 | 0.7981 |
| 0.0003 | 9.82 | 3200 | 1.3831 | 0.7997 |
## Example Images
#### Pneumonia Chest X-Ray

#### Normal Chest X-Ray

### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
UrukHan/wav2vec2-russian | c74e309d12ca9c2be2d69d51f3adb744603a00ff | 2022-04-18T10:33:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | UrukHan | null | UrukHan/wav2vec2-russian | 94 | 2 | transformers | 4,727 | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-russian
results: []
widget:
- src: https://cdn-media.huggingface.co/speech_samples/common_voice_ru_18849022.mp3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-russian
#
A model for audio speech recognition. The model's output can then be run through my text-correction network, UrukHan/t5-russian-spell.
<table border="0">
<tr>
<td><b style="font-size:30px">Output wav2vec2</b></td>
<td><b style="font-size:30px">Output spell correcor</b></td>
</tr>
<tr>
<td>ывсем привет выныканалетоп армии и это двадцать пятый день спец операций на украине ет самый главной новости российские военные ракетами кинжалы калибр уничтожили крупную военную топливную базу украины ракетным ударом по населенному пункту под жетамиром уничтжены более стаукраинских военных в две тысячи двадцать втором году</td>
<td>Всем привет! Вы в курсе новостей от армии. И это 25 день спецопераций на Украине. Есть самые главные новости. Российские военные ракетами «Кинжалы» и «Кинжалы» калибра уничтожили крупную военную топливную базу Украины. Ракетным ударом по населенному пункту под Жетамиром уничтожены более ста украинских военных в 2022г.</td>
</tr>
</table>
---
Upload an audio file in wav format for recognition. The result can then be corrected with my other network, which fixes errors, restores punctuation and normalizes numbers: https://huggingface.co/UrukHan/t5-russian-spell
#
---
# Example of running the model in Colab: https://colab.research.google.com/drive/1dVZvccYJq02hmEsapWgmuJ-pLdezFnn1?usp=sharing
#
```python
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor

model = AutoModelForCTC.from_pretrained("UrukHan/wav2vec2-russian")
processor = Wav2Vec2Processor.from_pretrained("UrukHan/wav2vec2-russian")

def map_to_result(batch):
    # batch["input_values"] is the raw 16 kHz waveform prepared by the processor
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"]).unsqueeze(0)  # add device="cuda" for GPU inference
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
```
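A sketch of how a wav file could be turned into the `input_values` expected by `map_to_result`; the file name and the resampling step are assumptions, not part of the original card.

```python
import torch
import torchaudio

# Load a recording and resample it to the 16 kHz rate Wav2Vec2 expects.
speech, sample_rate = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
batch = {"input_values": inputs.input_values[0].tolist()}
print(map_to_result(batch))
```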
#
---
# Model training, including data processing and dataset creation, can be followed in this Colab:
# https://colab.research.google.com/drive/1zkCA2PtKxD2acqLr55USh35OomoOwOhm?usp=sharing |
voidism/diffcse-bert-base-uncased-trans | 77046440b79536bb8d37842cf86034f69a8577bd | 2022-05-01T19:24:20.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2204.10298",
"arxiv:2104.08821",
"arxiv:2111.00899",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | voidism | null | voidism/diffcse-bert-base-uncased-trans | 94 | 1 | transformers | 4,728 | ---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[](https://github.com/voidism/DiffCSE/)
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for MLM generator to randomly replace tokens.
* `--generator_name`: the model name of generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`RoBERTa-base`, `RoBERTa-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use an NVIDIA 2080Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
## Citations
[](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
|
rmihaylov/roberta-base-sentiment-bg | 47106fae3b98b8ae395c661ecd83e90eba51999f | 2022-04-19T15:58:12.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"transformers",
"torch",
"license:mit"
] | text-classification | false | rmihaylov | null | rmihaylov/roberta-base-sentiment-bg | 94 | null | transformers | 4,729 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# ROBERTA BASE (cased) trained on private Bulgarian sentiment-analysis dataset
This is a multilingual RoBERTa model fine-tuned for Bulgarian sentiment analysis.
This model is cased: it makes a difference between "bulgarian" and "Bulgarian".
### How to use
Here is how to use this model in PyTorch:
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/roberta-base-sentiment-bg"
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> inputs = tokenizer.batch_encode_plus(['Това е умно.', 'Това е тъпо.'], return_tensors='pt')
>>> outputs = model(**inputs)
>>> torch.softmax(outputs, dim=1).tolist()
[[0.0004746630438603461, 0.9995253086090088],
[0.9986956715583801, 0.0013043134240433574]]
```
|
qanastek/51-languages-classifier | 966ca1a15a30f218ad48561943f046d809d4ed26 | 2022-05-19T12:56:56.000Z | [
"pytorch",
"dataset:qanastek/MASSIVE",
"arxiv:1911.02116",
"Transformers",
"text-classification",
"multi-class-classification",
"license:cc-by-4.0"
] | text-classification | false | qanastek | null | qanastek/51-languages-classifier | 94 | 1 | null | 4,730 | ---
tags:
- Transformers
- text-classification
- multi-class-classification
languages:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
datasets:
- qanastek/MASSIVE
widget:
- text: "wake me up at five am this week"
- text: "je veux écouter la chanson de jacques brel encore une fois"
- text: "quiero escuchar la canción de arijit singh una vez más"
- text: "olly onde é que á um parque por perto onde eu possa correr"
- text: "פרק הבא בפודקאסט בבקשה"
- text: "亚马逊股价"
- text: "найди билет на поезд в санкт-петербург"
license: cc-by-4.0
---
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
## Model
XLM-Roberta : [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
Paper : [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/pdf/1911.02116.pdf)
## Demo: How to use in HuggingFace Transformers Pipeline
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
model_name = 'qanastek/51-languages-classifier'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
res = classifier("פרק הבא בפודקאסט בבקשה")
print(res)
```
Outputs:
```python
[{'label': 'he-IL', 'score': 0.9998375177383423}]
```
## Training data
[MASSIVE](https://huggingface.co/datasets/qanastek/MASSIVE) is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
### Languages
The model is capable of distinguishing 51 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Evaluation results
```plain
precision recall f1-score support
af-ZA 0.9821 0.9805 0.9813 2974
am-ET 1.0000 1.0000 1.0000 2974
ar-SA 0.9809 0.9822 0.9815 2974
az-AZ 0.9946 0.9845 0.9895 2974
bn-BD 0.9997 0.9990 0.9993 2974
cy-GB 0.9970 0.9929 0.9949 2974
da-DK 0.9575 0.9617 0.9596 2974
de-DE 0.9906 0.9909 0.9908 2974
el-GR 0.9997 0.9973 0.9985 2974
en-US 0.9712 0.9866 0.9788 2974
es-ES 0.9825 0.9842 0.9834 2974
fa-IR 0.9940 0.9973 0.9956 2974
fi-FI 0.9943 0.9946 0.9945 2974
fr-FR 0.9963 0.9923 0.9943 2974
he-IL 1.0000 0.9997 0.9998 2974
hi-IN 1.0000 0.9980 0.9990 2974
hu-HU 0.9983 0.9950 0.9966 2974
hy-AM 1.0000 0.9993 0.9997 2974
id-ID 0.9319 0.9291 0.9305 2974
is-IS 0.9966 0.9943 0.9955 2974
it-IT 0.9698 0.9926 0.9811 2974
ja-JP 0.9987 0.9963 0.9975 2974
jv-ID 0.9628 0.9744 0.9686 2974
ka-GE 0.9993 0.9997 0.9995 2974
km-KH 0.9867 0.9963 0.9915 2974
kn-IN 1.0000 0.9993 0.9997 2974
ko-KR 0.9917 0.9997 0.9956 2974
lv-LV 0.9990 0.9950 0.9970 2974
ml-IN 0.9997 0.9997 0.9997 2974
mn-MN 0.9987 0.9966 0.9976 2974
ms-MY 0.9359 0.9418 0.9388 2974
my-MM 1.0000 0.9993 0.9997 2974
nb-NO 0.9600 0.9533 0.9566 2974
nl-NL 0.9850 0.9748 0.9799 2974
pl-PL 0.9946 0.9923 0.9934 2974
pt-PT 0.9885 0.9798 0.9841 2974
ro-RO 0.9919 0.9916 0.9918 2974
ru-RU 0.9976 0.9983 0.9980 2974
sl-SL 0.9956 0.9939 0.9948 2974
sq-AL 0.9936 0.9896 0.9916 2974
sv-SE 0.9902 0.9842 0.9872 2974
sw-KE 0.9867 0.9953 0.9910 2974
ta-IN 1.0000 1.0000 1.0000 2974
te-IN 1.0000 0.9997 0.9998 2974
th-TH 1.0000 0.9983 0.9992 2974
tl-PH 0.9929 0.9899 0.9914 2974
tr-TR 0.9869 0.9872 0.9871 2974
ur-PK 0.9983 0.9929 0.9956 2974
vi-VN 0.9993 0.9973 0.9983 2974
zh-CN 0.9812 0.9832 0.9822 2974
zh-TW 0.9832 0.9815 0.9823 2974
accuracy 0.9889 151674
macro avg 0.9889 0.9889 0.9889 151674
weighted avg 0.9889 0.9889 0.9889 151674
```
Keywords: language identification; multilingual; classification |
josh-oo/german-gpt2-easy-new-padding | 5d8da72b39d8dbe27496005c8d8aa1800cf51adc | 2022-06-29T09:45:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | josh-oo | null | josh-oo/german-gpt2-easy-new-padding | 94 | null | transformers | 4,731 | Entry not found |
Evelyn18/distilbert-base-uncased-becas-7 | 90be144d188753450444f9cdd08d4388ccd85428 | 2022-07-01T20:39:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becas-7 | 94 | null | transformers | 4,732 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becas-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becas-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3059
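A minimal question-answering sketch with this checkpoint; the question and context below are invented placeholders, not taken from the becasv2 dataset.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/distilbert-base-uncased-becas-7")
result = qa(
    question="¿Quién puede solicitar la beca?",  # hypothetical example
    context="La beca está dirigida a estudiantes de licenciatura con un promedio mínimo de 8.",
)
print(result)
```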
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.4980 |
| No log | 2.0 | 10 | 5.0383 |
| No log | 3.0 | 15 | 4.6244 |
| No log | 4.0 | 20 | 4.2090 |
| No log | 5.0 | 25 | 4.0156 |
| No log | 6.0 | 30 | 3.8638 |
| No log | 7.0 | 35 | 4.0836 |
| No log | 8.0 | 40 | 4.1302 |
| No log | 9.0 | 45 | 4.2543 |
| No log | 10.0 | 50 | 4.3059 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pineappleSoup/DialoGPT-medium-707 | 80f8be57a6d8fd8a58d21a5db2f6fc463668ffe0 | 2022-07-28T10:24:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pineappleSoup | null | pineappleSoup/DialoGPT-medium-707 | 94 | null | transformers | 4,733 | ---
tags:
- conversational
---
# 707 DialoGPT Model
Chatbot for the character 707 from Mystic Messenger. |
autoevaluate/distilbert-base-cased-distilled-squad | d62c5ac3e62b8d308d894fab57e5ec5b88a44040 | 2022-07-20T13:17:25.000Z | [
"pytorch",
"tf",
"rust",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | autoevaluate | null | autoevaluate/distilbert-base-cased-distilled-squad | 94 | null | transformers | 4,734 | ---
language: "en"
datasets:
- squad
metrics:
- squad
license: apache-2.0
---
# DistilBERT base cased distilled SQuAD
> Note: This model is a clone of [`distilbert-base-cased-distilled-squad`](https://huggingface.co/distilbert-base-cased-distilled-squad) for internal testing.
This model is a fine-tune checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
This model reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
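The sketch below shows how such an evaluation could be run with the `evaluate` library's question-answering evaluator; the split and arguments are assumptions rather than the exact settings used for the numbers reported next.

```python
from datasets import load_dataset
from evaluate import evaluator

task_evaluator = evaluator("question-answering")
data = load_dataset("squad", split="validation")
results = task_evaluator.compute(
    model_or_pipeline="autoevaluate/distilbert-base-cased-distilled-squad",
    data=data,
    metric="squad",
)
print(results)
```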
Using the question answering `Evaluator` from evaluate gives:
```
{'exact_match': 79.54588457899716,
'f1': 86.81181300991533,
'latency_in_seconds': 0.008683730778997168,
'samples_per_second': 115.15787689073015,
'total_time_in_seconds': 91.78703433400005}
```
which is roughly consistent with the official score. |
BSC-TeMU/RoBERTalex | 2a1c89fd468a362368463b4126751fdd51c4d847 | 2021-10-26T10:10:38.000Z | [
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:legal_ES",
"dataset:temu_legal",
"arxiv:2110.12201",
"transformers",
"legal",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | BSC-TeMU | null | BSC-TeMU/RoBERTalex | 93 | 4 | transformers | 4,735 | ---
language:
- es
license: apache-2.0
tags:
- legal
- spanish
datasets:
- legal_ES
- temu_legal
metrics:
- ppl
widget:
- text: "La ley fue <mask> finalmente."
- text: "El Tribunal <mask> desestimó el recurso de amparo."
- text: "Hay base legal dentro del marco <mask> actual."
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/RoBERTalex
# Spanish Legal-domain RoBERTa
There are few models trained for the Spanish language. Some of them have been trained on low-resource, unclean corpora. The ones derived from the Spanish National Plan for Language Technologies are proficient at several tasks and have been trained on large-scale clean corpora. However, Spanish legal-domain language can be thought of as an independent language on its own. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora.
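A minimal fill-mask example with this checkpoint; the sentence mirrors the widget examples above, and the top predictions are not documented in this card, so treat the output as illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="BSC-TeMU/RoBERTalex")
# RoBERTa-style models use <mask> as the mask token.
print(fill_mask("La ley fue <mask> finalmente."))
```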
## Citing
```
@misc{gutierrezfandino2021legal,
title={Spanish Legalese Language Model and Corpora},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Aitor Gonzalez-Agirre and Marta Villegas},
year={2021},
eprint={2110.12201},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
For more information visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-legal-es)
## Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. |
Hate-speech-CNERG/dehatebert-mono-french | 7c0e8c45e9176581e57d4ae7e52327258116f969 | 2021-09-25T13:51:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"fr",
"arxiv:2004.06465",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-french | 93 | 2 | transformers | 4,736 | ---
language: fr
license: apache-2.0
---
This model is used for detecting **hate speech** in the **French language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only French-language data. It is fine-tuned on a multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.692094 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
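A minimal classification sketch with this checkpoint; the example sentence is an invented placeholder and the label names returned come from the model config rather than this card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-french",
)
print(classifier("Ceci est une phrase d'exemple."))  # placeholder French input
```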
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Helsinki-NLP/opus-mt-tr-ar | 9883a63af0aef0043dfce9a04a231ea9b6f3d722 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tr-ar | 93 | null | transformers | 4,737 | ---
language:
- tr
- ar
tags:
- translation
license: apache-2.0
---
### tur-ara
* source group: Turkish
* target group: Arabic
* OPUS readme: [tur-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md)
* model: transformer
* source language(s): tur
* target language(s): apc_Latn ara ara_Latn arq_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.eval.txt)
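A minimal translation sketch with this Marian checkpoint. As noted above, a sentence-initial target-language token is required; `>>ara<<` below is one of the listed target IDs, and the example sentence is just a placeholder.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tr-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = ">>ara<< Merhaba, nasılsın?"  # Turkish input prefixed with the target-language token
batch = tokenizer([text], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```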
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.ara | 14.9 | 0.455 |
### System Info:
- hf_name: tur-ara
- source_languages: tur
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'ar']
- src_constituents: {'tur'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt
- src_alpha3: tur
- tgt_alpha3: ara
- short_pair: tr-ar
- chrF2_score: 0.455
- bleu: 14.9
- brevity_penalty: 0.988
- ref_len: 6944.0
- src_name: Turkish
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: tr
- tgt_alpha2: ar
- prefer_old: False
- long_pair: tur-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
QianWeiTech/GPT2-News | ebcc0d7d17deb00af1f256d05d1b61228819062a | 2021-05-21T11:02:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | QianWeiTech | null | QianWeiTech/GPT2-News | 93 | null | transformers | 4,738 | Entry not found |
allegro/plt5-large | b446ea4115bc01549ab832e01d138203cfe1324a | 2021-08-19T17:01:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pl",
"dataset:ccnet",
"dataset:nkjp",
"dataset:wikipedia",
"dataset:open subtitles",
"dataset:free readings",
"transformers",
"T5",
"translation",
"summarization",
"question answering",
"reading comprehension",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | allegro | null | allegro/plt5-large | 93 | 2 | transformers | 4,739 | ---
language: pl
tags:
- T5
- translation
- summarization
- question answering
- reading comprehension
datasets:
- ccnet
- nkjp
- wikipedia
- open subtitles
- free readings
license: cc-by-4.0
---
# plT5 Large
**plT5** models are T5-based language models trained on Polish corpora. The models were optimized for the original T5 denoising target.
## Corpus
plT5 was trained on six different corpora available for the Polish language:
| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1)| 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
## Tokenizer
The training dataset was tokenized into subwords using a sentencepiece unigram model with
vocabulary size of 50k tokens.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("allegro/plt5-large")
model = AutoModel.from_pretrained("allegro/plt5-large")
```
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
```
## Authors
The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/).
You can contact us at: <a href="mailto:[email protected]">[email protected]</a> |
edixo/road_good_damaged_condition | 70171ea85efe8c9105d793440b8aa62be857e8e6 | 2021-07-05T14:43:15.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | edixo | null | edixo/road_good_damaged_condition | 93 | null | transformers | 4,740 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: road_good_damaged_condition
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9583333134651184
---
# road_good_damaged_condition
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### damaged road

#### good road
 |
frgfm/rexnet1_0x | 9f2b2d7e23dcf64a13a4df8abe4b8ca2afa973cb | 2022-07-20T00:53:57.000Z | [
"pytorch",
"onnx",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | frgfm | null | frgfm/rexnet1_0x | 93 | null | transformers | 4,741 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ReXNet-1.0x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The author's core idea is to add a customized Squeeze-and-Excitation layer in the residual blocks to prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode

from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/rexnet1_0x").eval()

# path_to_an_image: path to your input image
img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])

input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
huggingface-course/mt5-small-finetuned-amazon-en-es | 7e7155d1e44ced6b274adcd33223f698af16d185 | 2021-11-11T17:26:47.000Z | [
"pytorch",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | huggingface-course | null | huggingface-course/mt5-small-finetuned-amazon-en-es | 93 | 1 | transformers | 4,742 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0285
- Rouge1: 16.9728
- Rouge2: 8.2969
- Rougel: 16.8366
- Rougelsum: 16.8510
- Gen Len: 10.1597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 6.4205 | 1.0 | 1209 | 3.3904 | 7.3124 | 2.1083 | 7.0649 | 7.0966 | 4.7269 |
| 3.7818 | 2.0 | 2418 | 3.1762 | 10.5437 | 3.0706 | 10.4618 | 10.4713 | 5.3697 |
| 3.4672 | 3.0 | 3627 | 3.1304 | 10.4674 | 3.0531 | 10.2156 | 10.2549 | 5.9748 |
| 3.3179 | 4.0 | 4836 | 3.1170 | 11.2847 | 3.3152 | 11.1387 | 11.146 | 6.1723 |
| 3.2048 | 5.0 | 6045 | 3.1069 | 11.5212 | 3.1957 | 11.2117 | 11.2044 | 6.042 |
| 3.1211 | 6.0 | 7254 | 3.1028 | 11.8104 | 3.6482 | 11.5535 | 11.5259 | 6.0462 |
| 3.0724 | 7.0 | 8463 | 3.1001 | 11.7336 | 3.6575 | 11.4403 | 11.4738 | 5.9454 |
| 3.0476 | 8.0 | 9672 | 3.0983 | 11.8061 | 3.6575 | 11.4999 | 11.5414 | 5.9286 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
munggok/mt5-translate-en-id | 2b56f07d3f29fde43ff8cf1e09ca0976c4039965 | 2021-01-25T12:40:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"id",
"dataset:OPUS",
"dataset:CC-aligned",
"transformers",
"translation",
"license:mit",
"autotrain_compatible"
] | translation | false | munggok | null | munggok/mt5-translate-en-id | 93 | null | transformers | 4,743 | ---
tags:
- translation
language: "id"
license: "mit"
datasets:
- OPUS
- CC-aligned
widget:
- text: "I love you"
---
## MT5-Large-Translate-en-id
## Prefix use
Use prefix "translate:" before input to generate the translation
e.g
"translate: i love you"
## Training data
OPUS (Open Subtitles and WikiMatrix)
CCaligned (en-id sentence pair)
|
ncduy/phobert-large-finetuned-vietnamese_students_feedback | b99e157e2beae2c3b8bcaa6175102b647ca320ba | 2022-01-06T05:55:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:vietnamese_students_feedback",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | ncduy | null | ncduy/phobert-large-finetuned-vietnamese_students_feedback | 93 | null | transformers | 4,744 | ---
tags:
- generated_from_trainer
datasets:
- vietnamese_students_feedback
metrics:
- accuracy
model-index:
- name: phobert-large-finetuned-vietnamese_students_feedback
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: vietnamese_students_feedback
type: vietnamese_students_feedback
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9463044851547694
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-large-finetuned-vietnamese_students_feedback
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on the vietnamese_students_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9463
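A minimal classification sketch with this checkpoint; note that PhoBERT models normally expect word-segmented Vietnamese input, and the label names returned come from the model config, so treat the example and its output as illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ncduy/phobert-large-finetuned-vietnamese_students_feedback",
)
print(classifier("giảng_viên dạy rất hay và nhiệt_tình"))  # placeholder, word-segmented input
```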
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 477 | 0.2088 | 0.9375 |
| 0.3231 | 2.0 | 954 | 0.2463 | 0.9444 |
| 0.1805 | 3.0 | 1431 | 0.2285 | 0.9463 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nguyenvulebinh/vi-mrc-large | 732c3096bbc2b9c7360e46ffb93c4f89692dafdb | 2022-03-13T20:53:44.000Z | [
"pytorch",
"roberta",
"question-answering",
"vi",
"vn",
"en",
"dataset:squad",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | question-answering | false | nguyenvulebinh | null | nguyenvulebinh/vi-mrc-large | 93 | null | transformers | 4,745 | ---
language:
- vi
- vn
- en
tags:
- question-answering
- pytorch
datasets:
- squad
license: cc-by-nc-4.0
pipeline_tag: question-answering
metrics:
- squad
widget:
- text: "Bình là chuyên gia về gì ?"
context: "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
- text: "Bình được công nhận với danh hiệu gì ?"
context: "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
---
## Model Description
- Language model: [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)
- Fine-tune: [MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc)
- Language: Vietnamese, English
- Downstream-task: Extractive QA
- Dataset (combining English and Vietnamese):
- [Squad 2.0](https://rajpurkar.github.io/SQuAD-explorer/)
- [mailong25](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset)
- [VLSP MRC 2021](https://vlsp.org.vn/vlsp2021/eval/mrc)
- [MultiLingual Question Answering](https://github.com/facebookresearch/MLQA)
This model is intended to be used for QA in the Vietnamese language, so the validation set is Vietnamese only (but English works fine). The evaluation result below uses the VLSP MRC 2021 test set. This experiment achieved TOP 1 on the leaderboard.
| Model | EM | F1 |
| ------------- | ------------- | ------------- |
| [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) public_test_set | 85.847 | 83.826 |
| [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) private_test_set | 82.072 | 78.071 |
Public leaderboard | Private leaderboard
:-------------------------:|:-------------------------:
 | 
[MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc) uses [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html) as the pre-trained language model. By default, XLM-RoBERTa splits words into sub-words. But in my implementation, I re-combine the sub-word representations (after they are encoded by the BERT layer) into word representations using a sum strategy.
## Using pre-trained model
[](https://colab.research.google.com/drive/1Yqgdfaca7L94OyQVnq5iQq8wRTFvVZjv?usp=sharing)
- Hugging Face pipeline style (**NOT using sum features strategy**).
```python
from transformers import pipeline
# model_checkpoint = "nguyenvulebinh/vi-mrc-large"
model_checkpoint = "nguyenvulebinh/vi-mrc-base"
nlp = pipeline('question-answering', model=model_checkpoint,
tokenizer=model_checkpoint)
QA_input = {
'question': "Bình là chuyên gia về gì ?",
'context': "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
}
res = nlp(QA_input)
print('pipeline: {}'.format(res))
#{'score': 0.5782045125961304, 'start': 45, 'end': 68, 'answer': 'xử lý ngôn ngữ tự nhiên'}
```
- More accurate infer process ([**Using sum features strategy**](https://github.com/nguyenvulebinh/extractive-qa-mrc))
```python
from infer import tokenize_function, data_collator, extract_answer
from model.mrc_model import MRCQuestionAnswering
from transformers import AutoTokenizer
model_checkpoint = "nguyenvulebinh/vi-mrc-large"
#model_checkpoint = "nguyenvulebinh/vi-mrc-base"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = MRCQuestionAnswering.from_pretrained(model_checkpoint)
QA_input = {
'question': "Bình được công nhận với danh hiệu gì ?",
'context': "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020"
}
inputs = [tokenize_function(*QA_input)]
inputs_ids = data_collator(inputs)
outputs = model(**inputs_ids)
answer = extract_answer(inputs, outputs, tokenizer)
print(answer)
# answer: Google Developer Expert. Score start: 0.9926977753639221, Score end: 0.9909810423851013
```
## About
*Built by Binh Nguyen*
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
For more details, visit the project repository.
[](https://github.com/nguyenvulebinh/extractive-qa-mrc) |
openclimatefix/dgmr-discriminator | d0d6e85d81d3524b52295668788bccfa47ac9327 | 2022-06-20T08:19:22.000Z | [
"pytorch",
"transformers"
] | null | false | openclimatefix | null | openclimatefix/dgmr-discriminator | 93 | null | transformers | 4,746 | Entry not found |
sentence-transformers/roberta-large-nli-mean-tokens | 122c0aac5edb08467a65bd04bce837ea1208efd1 | 2021-08-05T08:30:37.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/roberta-large-nli-mean-tokens | 93 | null | sentence-transformers | 4,747 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/roberta-large-nli-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/roberta-large-nli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/roberta-large-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/roberta-large-nli-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/roberta-large-nli-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
chinhon/pegasus-newsroom-rewriter | 8f2eef17627f17af9100dd10a19e243722a88ecb | 2022-03-18T10:51:57.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/pegasus-newsroom-rewriter | 93 | 1 | transformers | 4,748 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-newsroom-rewriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-rewriter
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3424
- Rouge1: 46.6856
- Rouge2: 31.6377
- Rougel: 33.2741
- Rougelsum: 44.5003
- Gen Len: 126.58
## Model description
More information needed
## Intended uses & limitations
More information needed
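Although the card leaves usage unspecified, a minimal inference sketch (assuming the checkpoint follows the standard Pegasus seq2seq API in transformers; the input text and generation settings below are illustrative) could look like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "chinhon/pegasus-newsroom-rewriter"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Paste the news paragraphs you want rewritten here."
inputs = tokenizer(text, truncation=True, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```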
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 450 | 1.4020 | 47.0593 | 32.2065 | 33.9168 | 44.901 | 126.32 |
| 1.9944 | 2.0 | 900 | 1.3567 | 46.2635 | 30.9959 | 32.933 | 44.1659 | 126.48 |
| 1.6511 | 3.0 | 1350 | 1.3449 | 46.1544 | 30.7257 | 32.693 | 43.9977 | 126.4 |
| 1.5951 | 4.0 | 1800 | 1.3424 | 46.6856 | 31.6377 | 33.2741 | 44.5003 | 126.58 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Salesforce/codegen-6B-multi | ba33ebe5dd88700dfedd924a4417df39d7a75627 | 2022-06-28T17:44:08.000Z | [
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"transformers",
"license:bsd-3-clause"
] | text-generation | false | Salesforce | null | Salesforce/codegen-6B-multi | 93 | null | transformers | 4,749 | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Multi 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 6B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 6B* and further pre-trained on a dataset of multiple programming languages, and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 6B) was firstly initialized with *CodeGen-NL 6B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
patrickvonplaten/wav2vec2-base-timit-demo-google-colab | 38b281e77efc6a390bf8b4473d8f4d744ecd2f5c | 2022-05-10T12:33:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-timit-demo-google-colab | 93 | null | transformers | 4,750 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5185
- Wer: 0.3370
## Model description
More information needed
## Intended uses & limitations
More information needed
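For completeness, a minimal transcription sketch (assuming a 16 kHz mono audio file; the file name is a placeholder) might look like this:
```python
from transformers import pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-base-timit-demo-google-colab",
)
# expects 16 kHz mono audio, matching the Wav2Vec2 pretraining setup
print(asr("sample.wav")["text"])
```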
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5137 | 1.0 | 500 | 1.6719 | 0.9580 |
| 0.8324 | 2.01 | 1000 | 0.5546 | 0.5341 |
| 0.4365 | 3.01 | 1500 | 0.4567 | 0.4635 |
| 0.3058 | 4.02 | 2000 | 0.4429 | 0.4454 |
| 0.2284 | 5.02 | 2500 | 0.4734 | 0.4186 |
| 0.1892 | 6.02 | 3000 | 0.4191 | 0.4030 |
| 0.1542 | 7.03 | 3500 | 0.4522 | 0.3985 |
| 0.1364 | 8.03 | 4000 | 0.4749 | 0.3922 |
| 0.1239 | 9.04 | 4500 | 0.4950 | 0.3977 |
| 0.1092 | 10.04 | 5000 | 0.4468 | 0.3779 |
| 0.0956 | 11.04 | 5500 | 0.4897 | 0.3789 |
| 0.0897 | 12.05 | 6000 | 0.4927 | 0.3718 |
| 0.0792 | 13.05 | 6500 | 0.5242 | 0.3699 |
| 0.0731 | 14.06 | 7000 | 0.5202 | 0.3772 |
| 0.0681 | 15.06 | 7500 | 0.5046 | 0.3637 |
| 0.062 | 16.06 | 8000 | 0.5336 | 0.3664 |
| 0.0556 | 17.07 | 8500 | 0.5017 | 0.3633 |
| 0.0556 | 18.07 | 9000 | 0.5466 | 0.3736 |
| 0.0461 | 19.08 | 9500 | 0.5489 | 0.3566 |
| 0.0439 | 20.08 | 10000 | 0.5399 | 0.3559 |
| 0.0397 | 21.08 | 10500 | 0.5154 | 0.3539 |
| 0.0346 | 22.09 | 11000 | 0.5170 | 0.3513 |
| 0.0338 | 23.09 | 11500 | 0.5236 | 0.3492 |
| 0.0342 | 24.1 | 12000 | 0.5288 | 0.3493 |
| 0.0282 | 25.1 | 12500 | 0.5147 | 0.3449 |
| 0.0251 | 26.1 | 13000 | 0.5092 | 0.3442 |
| 0.0268 | 27.11 | 13500 | 0.5093 | 0.3413 |
| 0.021 | 28.11 | 14000 | 0.5310 | 0.3399 |
| 0.022 | 29.12 | 14500 | 0.5185 | 0.3370 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
ai4bharat/IndicBART-XLSum | 0d527c39c318282e7c8cbbcd479b6bad9f46a599 | 2022-05-14T15:09:17.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"dataset:csebuetnlp/xlsum",
"arxiv:2109.02903",
"transformers",
"multilingual",
"nlp",
"indicnlp",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/IndicBART-XLSum | 93 | null | transformers | 4,751 |
---
languages:
- bn
- gu
- hi
- mr
- pa
- ta
- te
datasets:
- csebuetnlp/xlsum
tags:
- multilingual
- nlp
- indicnlp
widget:
- टेसा जॉवल का कहना है कि मृतकों और लापता लोगों के परिजनों की मदद के लिए एक केंद्र स्थापित किया जा रहा है. उन्होंने इस हादसे के तीन के बाद भी मृतकों की सूची जारी करने में हो रही देरी के बारे में स्पष्टीकरण देते हुए कहा है शवों की ठीक पहचान होना ज़रूरी है. पुलिस के अनुसार धमाकों में मारे गए लोगों की संख्या अब 49 हो गई है और अब भी 20 से ज़्यादा लोग लापता हैं. पुलिस के अनुसार लंदन पर हमले योजनाबद्ध तरीके से हुए और भूमिगत रेलगाड़ियों में विस्फोट तो 50 सैकेंड के भीतर हुए. पहचान की प्रक्रिया किंग्स क्रॉस स्टेशन के पास सुरंग में धमाके से क्षतिग्रस्त रेल कोचों में अब भी पड़े शवों के बारे में स्थिति साफ नहीं है और पुलिस ने आगाह किया है कि हताहतों की संख्या बढ़ सकती है. पुलिस, न्यायिक अधिकारियों, चिकित्सकों और अन्य विशेषज्ञों का एक आयोग बनाया गया है जिसकी देख-रेख में शवों की पहचान की प्रक्रिया पूरी होगी. महत्वपूर्ण है कि गुरुवार को लंदन में मृतकों के सम्मान में सार्वजनिक समारोह होगा जिसमें उन्हें श्रद्धाँजलि दी जाएगी और दो मिनट का मौन रखा जाएगा. पुलिस का कहना है कि वह इस्लामी चरमपंथी संगठन अबू हफ़्स अल-मासरी ब्रिगेड्स का इन धमाकों के बारे में किए गए दावे को गंभीरता से ले रही है. 'धमाके पचास सेकेंड में हुए' पुलिस के अनुसार लंदन पर हुए हमले योजनाबद्ध तरीके से किए गए थे. पुलिस के अनुसार भूमिगत रेलों में तीन बम अलग-अलग जगहों लगभग अचानक फटे थे. इसलिए पुलिस को संदेह है कि धमाकों में टाइमिंग उपकरणों का उपयोग किया गया होगा. यह भी तथ्य सामने आया है कि धमाकों में आधुनिक किस्म के विस्फोटकों का उपयोग किया गया था. पहले माना जा रहा था कि हमलों में देसी विस्फोटकों का इस्तेमाल किया गया होगा. पुलिस मुख्यालय स्कॉटलैंड यार्ड में सहायक उपायुक्त ब्रायन पैडिक ने बताया कि भूमिगत रेलों में तीन धमाके 50 सेकेंड के अंतराल के भीतर हुए थे. धमाके गुरुवार सुबह आठ बजकर पचास मिनट पर हुए थे. लंदन अंडरग्राउंड से मिली विस्तृत तकनीकी सूचनाओं से यह तथ्य सामने आया है. इससे पहले बम धमाकों में अच्छे खासे अंतराल की बात की जा रही थी.</s> <2hi>
---
IndicBART-XLSum is a multilingual separate script [IndicBART](https://huggingface.co/ai4bharat/IndicBARTSS) based, sequence-to-sequence pre-trained model focusing on Indic languages. It currently supports 7 Indian languages and is based on the mBART architecture. Some salient features of the IndicBART-XLSum are:
<ul>
<li >Supported languages: Bengali, Gujarati, Hindi, Marathi, Punjabi, Tamil and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li>
<li> Trained on Indic portion of <a href="https://huggingface.co/datasets/csebuetnlp/xlsum">XLSum corpora</a>. </li>
<li> Each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
</ul>
You can read about IndicBARTSS in this <a href="https://arxiv.org/abs/2109.02903">paper</a>.
# Usage:
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART-XLSum", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART-XLSum")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART-XLSum")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2bn>', '<2gu>', '<2hi>', '<2mr>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBART-XLSum was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("टेसा जॉवल का कहना है कि मृतकों और लापता लोगों के परिजनों की मदद के लिए एक केंद्र स्थापित किया जा रहा है. उन्होंने इस हादसे के तीन के बाद भी मृतकों की सूची जारी करने में हो रही देरी के बारे में स्पष्टीकरण देते हुए कहा है शवों की ठीक पहचान होना ज़रूरी है. पुलिस के अनुसार धमाकों में मारे गए लोगों की संख्या अब 49 हो गई है और अब भी 20 से ज़्यादा लोग लापता हैं. पुलिस के अनुसार लंदन पर हमले योजनाबद्ध तरीके से हुए और भूमिगत रेलगाड़ियों में विस्फोट तो 50 सैकेंड के भीतर हुए. पहचान की प्रक्रिया किंग्स क्रॉस स्टेशन के पास सुरंग में धमाके से क्षतिग्रस्त रेल कोचों में अब भी पड़े शवों के बारे में स्थिति साफ नहीं है और पुलिस ने आगाह किया है कि हताहतों की संख्या बढ़ सकती है. पुलिस, न्यायिक अधिकारियों, चिकित्सकों और अन्य विशेषज्ञों का एक आयोग बनाया गया है जिसकी देख-रेख में शवों की पहचान की प्रक्रिया पूरी होगी. महत्वपूर्ण है कि गुरुवार को लंदन में मृतकों के सम्मान में सार्वजनिक समारोह होगा जिसमें उन्हें श्रद्धाँजलि दी जाएगी और दो मिनट का मौन रखा जाएगा. पुलिस का कहना है कि वह इस्लामी चरमपंथी संगठन अबू हफ़्स अल-मासरी ब्रिगेड्स का इन धमाकों के बारे में किए गए दावे को गंभीरता से ले रही है. 'धमाके पचास सेकेंड में हुए' पुलिस के अनुसार लंदन पर हुए हमले योजनाबद्ध तरीके से किए गए थे. पुलिस के अनुसार भूमिगत रेलों में तीन बम अलग-अलग जगहों लगभग अचानक फटे थे. इसलिए पुलिस को संदेह है कि धमाकों में टाइमिंग उपकरणों का उपयोग किया गया होगा. यह भी तथ्य सामने आया है कि धमाकों में आधुनिक किस्म के विस्फोटकों का उपयोग किया गया था. पहले माना जा रहा था कि हमलों में देसी विस्फोटकों का इस्तेमाल किया गया होगा. पुलिस मुख्यालय स्कॉटलैंड यार्ड में सहायक उपायुक्त ब्रायन पैडिक ने बताया कि भूमिगत रेलों में तीन धमाके 50 सेकेंड के अंतराल के भीतर हुए थे. धमाके गुरुवार सुबह आठ बजकर पचास मिनट पर हुए थे. लंदन अंडरग्राउंड से मिली विस्तृत तकनीकी सूचनाओं से यह तथ्य सामने आया है. इससे पहले बम धमाकों में अच्छे खासे अंतराल की बात की जा रही थी.</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi>परिजनों की मदद की ज़िम्मेदारी मंत्री पर </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # लंदन धमाकों में मारे गए लोगों की सूची जारी
```
# Benchmarks
Scores on the `IndicBART-XLSum` test sets are as follows:
Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
bn | 0.172331 / 0.051777 / 0.160245
gu | 0.143240 / 0.039993 / 0.133981
hi | 0.220394 / 0.065464 / 0.198816
mr | 0.172568 / 0.062591 / 0.160403
pa | 0.218274 / 0.066087 / 0.192010
ta | 0.177317 / 0.058636 / 0.166324
te | 0.156386 / 0.041042 / 0.144179
average | 0.180073 / 0.055084 / 0.165137
# Notes:
1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
|
ncfrey/ChemGPT-4.7M | 7438a282460b3038e17a27e25b85b1376e9a23e2 | 2022-06-15T15:17:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"chemistry"
] | text-generation | false | ncfrey | null | ncfrey/ChemGPT-4.7M | 93 | null | transformers | 4,752 | ---
tags:
- chemistry
---
# ChemGPT 4.7M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
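A minimal generation sketch is shown below; the prompt string is only an illustrative placeholder — inspect the tokenizer's vocabulary for the exact SELFIES token format the model expects.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("ncfrey/ChemGPT-4.7M")
model = AutoModelForCausalLM.from_pretrained("ncfrey/ChemGPT-4.7M")
# hypothetical SELFIES-style prompt; replace with tokens from the model's vocabulary
prompt = "[C][C][O]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```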
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
|
LooksLikeIveLost/DialoGPT-medium-me | a01a8a16183cc43e250c676aab0540c54e4ab1fa | 2022-05-20T02:16:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | LooksLikeIveLost | null | LooksLikeIveLost/DialoGPT-medium-me | 93 | null | transformers | 4,753 | ---
tags:
- conversational
---
# Me Bot |
bigscience-catalogue-lm-data/sgpt-nli-bloom-1b3 | 24a4ede287f478b4217b79b434528ba35b43316a | 2022-07-10T15:23:17.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | bigscience-catalogue-lm-data | null | bigscience-catalogue-lm-data/sgpt-nli-bloom-1b3 | 93 | 2 | sentence-transformers | 4,754 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# sgpt-nli-bloom-1b3
## Usage
For usage instructions, refer to: https://github.com/Muennighoff/sgpt#symmetric-semantic-search
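If the repository is loaded directly with sentence-transformers (an assumption based on the architecture listed under "Full Model Architecture" below), a minimal sketch looks like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("bigscience-catalogue-lm-data/sgpt-nli-bloom-1b3")
sentences = ["How do I bake bread?", "What is the recipe for bread?"]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 2048), given the 2048-dim weighted-mean pooling described below
```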
The model was trained with the command
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch examples/training/nli/training_nli_v2.py --model_name bigscience/bloom-1b3 --freezenonbias --train_batch_size 128 --lr 32e-5 --pooling weightedmean --wandb --wandbwatchlog gradients --gradcache --chunksize 4
```
## Evaluation Results
`{'askubuntu': 57.44, 'cqadupstack': 14.18, 'twitterpara': 73.99, 'scidocs': 74.74, 'avg': 55.087500000000006}`
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4403 with parameters:
```
{'batch_size': 128}
```
The model uses BitFit, weighted-mean pooling & GradCache, for details see: https://arxiv.org/abs/2202.08904
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MNRLGradCache`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 440,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.00032
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 441,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BloomModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
bloom-testing/test-bloomd-350m-fix-master-ci | 4d0c88371a597294e0d01c76b754c2659acb3f6c | 2022-07-16T00:58:37.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-fix-master-ci | 93 | null | transformers | 4,755 | Entry not found |
Cinnamon/electra-small-japanese-discriminator | 556f337383b3421fa3276a6787e88c5cc2e3a0cd | 2020-12-11T21:26:13.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"transformers",
"license:apache-2.0"
] | null | false | Cinnamon | null | Cinnamon/electra-small-japanese-discriminator | 92 | 1 | transformers | 4,756 | ---
language: ja
license: apache-2.0
---
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
## How to use the discriminator in `transformers`
```
from transformers import BertJapaneseTokenizer, ElectraForPreTraining
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-discriminator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForPreTraining.from_pretrained('Cinnamon/electra-small-japanese-discriminator')
```
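To actually inspect the discriminator outputs, a short continuation of the snippet above (the example sentence is illustrative and a working MeCab/NEologd setup is assumed):
```
import torch
# reuse tokenizer and model from the snippet above
inputs = tokenizer("東京は日本の首都です。", return_tensors="pt")
logits = model(**inputs).logits  # one score per token
predictions = torch.sigmoid(logits) > 0.5  # True = token flagged as replaced
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions[0].tolist())))
```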
|
PaulLerner/dpr_question_encoder_triviaqa_without_viquae | 0e4feb8b7a09fee824989b911f7f83cc8b5fa6b7 | 2022-02-18T13:55:05.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers"
] | feature-extraction | false | PaulLerner | null | PaulLerner/dpr_question_encoder_triviaqa_without_viquae | 92 | null | transformers | 4,757 | Entry not found |
assemblyai/distilbert-base-uncased-qqp | c6a84d6432d9eccfa1850c252322fb1e0e77fcb4 | 2021-06-14T22:13:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.01108",
"transformers"
] | text-classification | false | assemblyai | null | assemblyai/distilbert-base-uncased-qqp | 92 | null | transformers | 4,758 | # DistilBERT-Base-Uncased for Duplicate Question Detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) dataset; part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().
## Usage
To download and utilize this model for duplicate question detection please execute the following:
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-qqp")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-qqp")
tokenized_segments = tokenizer(["How many hours does it take to fly from California to New York?"], ["What is the flight time from New York to Seattle?"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)
print("Duplicate probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Non-duplicate probability: "+str(model_predictions[0][0].item()*100)+"%")
```
For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)! |
facebook/wav2vec2-large-it-voxpopuli | 06983d0205d75ad2b6ff6b31ef0cff420091ec85 | 2021-07-06T02:18:35.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"it",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-it-voxpopuli | 92 | null | transformers | 4,759 | ---
language: it
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the it unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
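As a concrete illustration, the model-loading step from that blog with this checkpoint substituted in might look roughly like the sketch below (the dropout/masking/CTC settings are the blog's illustrative defaults, not values prescribed by this card):
```python
from transformers import Wav2Vec2ForCTC
# load this checkpoint in place of "facebook/wav2vec2-large-xlsr-53" from the blog
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-it-voxpopuli",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    mask_time_prob=0.05,
    ctc_loss_reduction="mean",
)
```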
|
huggingtweets/girlmeat5557 | e8bbda0e4e4ff17aa2acaaae5caa91d1d3f0a424 | 2021-05-22T05:31:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/girlmeat5557 | 92 | null | transformers | 4,760 | ---
language: en
thumbnail: https://www.huggingtweets.com/girlmeat5557/1617790352329/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1373592959380242432/Vw_88RqG_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">a scared animal bites 🧷 vtuber 🤖 AI Bot </div>
<div style="font-size: 15px">@girlmeat5557 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@girlmeat5557's tweets](https://twitter.com/girlmeat5557).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 871 |
| Short tweets | 489 |
| Tweets kept | 1882 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wthiey09/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @girlmeat5557's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/io5hvymh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/io5hvymh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/girlmeat5557')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/huxijin_gt | 5875c1a5dac528529f57cd66c6cb788b38bf69f9 | 2021-05-22T07:20:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/huxijin_gt | 92 | null | transformers | 4,761 | ---
language: en
thumbnail: https://www.huggingtweets.com/huxijin_gt/1603826688877/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/504912656256352256/swrUCKHO_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Hu Xijin 胡锡进 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@huxijin_gt bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@huxijin_gt's tweets](https://twitter.com/huxijin_gt).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>2167</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>4</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2149</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3s1czwb3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @huxijin_gt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3h7d51hp) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3h7d51hp/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/huxijin_gt'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
ml6team/gpt-2-medium-conditional-quote-generator | d24ccfffa33b49ff47ed4474622231f26dc66f73 | 2021-05-23T09:38:59.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ml6team | null | ml6team/gpt-2-medium-conditional-quote-generator | 92 | 6 | transformers | 4,762 | This model has been finetuned on the [`Quotes-500K`](https://github.com/ShivaliGoel/Quotes-500K) dataset to generate quotes based on given topics. To generate a quote, use the following input prompt:
`Given Topics: topic 1 | topic 2 | ... | topic n. Related Quote: ` |
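A minimal generation sketch using that prompt format (the topics and decoding settings are illustrative):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="ml6team/gpt-2-medium-conditional-quote-generator")
prompt = "Given Topics: life | happiness. Related Quote: "
result = generator(prompt, max_length=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```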
mrm8488/gpt2-finetuned-recipes-cooking_v2 | c7b08a08939ef24841e4b3d756b3bc75a81faffa | 2021-05-23T10:25:08.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/gpt2-finetuned-recipes-cooking_v2 | 92 | null | transformers | 4,763 | ---
language: en
thumbnail:
widget:
- text: "HuggingFace Cake:"
---
|
navteca/quora-roberta-base | 0919bfda01351c5074b84550e96cbe4207234b60 | 2021-03-25T16:10:08.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:quora",
"transformers",
"license:mit"
] | text-classification | false | navteca | null | navteca/quora-roberta-base | 92 | null | transformers | 4,764 | ---
datasets:
- quora
language: en
license: mit
pipeline_tag: text-classification
tags:
- roberta
- text-classification
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [roberta-base](https://huggingface.co/roberta-base).
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset.
The model will predict a score between 0 and 1: How likely the two given questions are duplicates.
Note: The model is not suitable to estimate the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('navteca/quora-roberta-base')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
print(scores)
```
|
nielsr/coref-roberta-large | b228bea5218e3575342d440385dcd2d7bd809738 | 2021-01-21T10:07:15.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
] | null | false | nielsr | null | nielsr/coref-roberta-large | 92 | null | transformers | 4,765 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefRoBERTa large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefRoBERTa did not write a model card for this model so this model card has been written by me.
## Model description
CorefRoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): a novel training task proposed to enhance coreferential reasoning ability. MRP utilizes the
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefRoBERTa model as inputs.
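As a sketch of that feature-extraction use (assuming the checkpoint exposes a standard RoBERTa config so the Auto classes can load it; the sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nielsr/coref-roberta-large")
model = AutoModel.from_pretrained("nielsr/coref-roberta-large")
inputs = tokenizer("Alice said she would bring her laptop.", return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # token-level features for a downstream classifier
print(features.shape)
```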
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
persiannlp/wikibert-base-parsinlu-multiple-choice | b52a5055ac9977d7fef340cabc743ceddf54b574 | 2021-09-23T16:20:58.000Z | [
"pytorch",
"jax",
"bert",
"multiple-choice",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"wikibert",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"text-classification"
] | text-classification | false | persiannlp | null | persiannlp/wikibert-base-parsinlu-multiple-choice | 92 | null | transformers | 4,766 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- wikibert
- persian
- farsi
pipeline_tag: text-classification
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a wikibert-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from typing import List
import torch
from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer
model_name = "persiannlp/wikibert-base-parsinlu-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config)
def run_model(question: str, candicates: List[str]):
assert len(candicates) == 4, "you need four candidates"
choices_inputs = []
for c in candicates:
text_a = "" # empty context
text_b = question + " " + c
inputs = tokenizer(
text_a,
text_b,
add_special_tokens=True,
max_length=128,
padding="max_length",
truncation=True,
return_overflowing_tokens=True,
)
choices_inputs.append(inputs)
input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs])
output = model(input_ids=input_ids)
print(output)
return output
run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"])
run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"])
run_model(
question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ",
candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"])
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/ |
trituenhantaoio/bert-base-vietnamese-diacritics-uncased | c9451f959df56b17cfce1cc14ee80951577874f5 | 2021-05-20T08:05:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | null | false | trituenhantaoio | null | trituenhantaoio/bert-base-vietnamese-diacritics-uncased | 92 | null | transformers | 4,767 | ## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
model = BertForSequenceClassification.from_pretrained("trituenhantaoio/bert-base-vietnamese-diacritics-uncased")
tokenizer = BertTokenizer.from_pretrained("trituenhantaoio/bert-base-vietnamese-diacritics-uncased")
```
### References
```
@article{ttnt2020bertdiacritics,
title={Vietnamese BERT Diacritics: Pretrained on News and Wiki},
author={trituenhantao.io},
year = {2020},
publisher = {Hugging Face},
journal = {Hugging Face repository}
}
```
[trituenhantao.io](https://trituenhantao.io) |
pysentimiento/robertuito-ner | e3c0367de51f0f91899a94c43a20de9a0913c7c0 | 2022-07-21T11:22:07.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"arxiv:2106.09462",
"arxiv:2111.09453",
"transformers",
"twitter",
"sentiment-analysis",
"autotrain_compatible"
] | token-classification | false | pysentimiento | null | pysentimiento/robertuito-ner | 92 | null | transformers | 4,768 | ---
language:
- es
tags:
- twitter
- sentiment-analysis
---
# Named Entity Recognition model for Spanish/English
## robertuito-ner
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with the Spanish/English split of the [LinCE NER corpus](https://ritual.uh.edu/lince/), a code-switched benchmark. The base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained on Spanish tweets.
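A minimal tagging sketch (assuming the checkpoint works with the standard token-classification pipeline; the code-switched example sentence is illustrative):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="pysentimiento/robertuito-ner",
    aggregation_strategy="simple",
)
print(ner("mañana voy a New York a ver a mi hermana"))
```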
## Results
Results are taken from the LinCE leaderboard
| Model | Sentiment | NER | POS |
|:-----------------------|:----------------|:-------------------|:--------|
| RoBERTuito | **60.6** | 68.5 | 97.2 |
| XLM Large | -- | **69.5** | **97.2** |
| XLM Base | -- | 64.9 | 97.0 |
| C2S mBERT | 59.1 | 64.6 | 96.9 |
| mBERT | 56.4 | 64.0 | 97.1 |
| BERT | 58.4 | 61.1 | 96.9 |
| BETO | 56.5 | -- | -- |
## Citation
If you use this model in your research, please cite pysentimiento, RoBERTuito and LinCE papers:
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{perez2021robertuito,
title={RoBERTuito: a pre-trained language model for social media text in Spanish},
author={Juan Manuel Pérez and Damián A. Furman and Laura Alonso Alemany and Franco Luque},
year={2021},
eprint={2111.09453},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{aguilar2020lince,
title={LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation},
author={Aguilar, Gustavo and Kar, Sudipta and Solorio, Thamar},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1803--1813},
year={2020}
}
``` |
xlm-mlm-xnli15-1024 | c86c766c25685d110275169e45babb27636d89c2 | 2022-07-22T08:10:39.000Z | [
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"arxiv:1901.07291",
"arxiv:1910.09700",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | null | null | xlm-mlm-xnli15-1024 | 91 | null | transformers | 4,769 | ---
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
license: cc-by-nc-4.0
---
# xlm-mlm-xnli15-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the [XNLI data card](https://huggingface.co/datasets/xnli) for further information on XNLI).
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English; evaluated in 15 languages (see [XNLI data card](https://huggingface.co/datasets/xnli))
- **License:** CC-BY-NC-4.0
- **Related Models:** [XLM models](https://huggingface.co/models?sort=downloads&search=xlm)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo for XLM](https://github.com/facebookresearch/XLM)
- [GitHub Repo for XNLI](https://github.com/facebookresearch/XNLI)
- [XNLI data card](https://huggingface.co/datasets/xnli)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see [Evaluation](#evaluation)).
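As a quick illustration of the language-embedding mechanism this checkpoint relies on (a minimal sketch adapted from the multilingual-inference docs referenced in this card):
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-xnli15-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-xnli15-1024")
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1
language_id = tokenizer.lang2id["en"]  # select the English language embedding
langs = torch.full_like(input_ids, language_id)  # shape [1, sequence_length]
outputs = model(input_ids, langs=langs)
```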
## Downstream Use
This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training Details
Training details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Training Data
The model developers write:
> We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b).
> - Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Arabic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi.
> - We extract the following corpora from the OPUS 3 website Tiedemann (2012): the EUbookshop corpus for German, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili.
> - For Chinese, Japanese and Thai we use the tokenizer of Chang et al. (2008), the Kytea4 tokenizer, and the PyThaiNLP5 tokenizer respectively.
> - For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary.
For fine-tuning, the developers used the English NLI dataset (see the [XNLI data card](https://huggingface.co/datasets/xnli)).
## Training Procedure
### Preprocessing
The model developers write:
> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.
### Speeds, Sizes, Times
The model developers write:
> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
>
> For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
>
> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10^−4 to 2.10^−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work better than using the first hidden state.
>
> We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
# Evaluation
## Testing Data, Factors & Metrics
After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
## Results
|Language| en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|
|Accuracy|83.2|76.5 |76.3|74.2|73.1|74.0|73.1 |67.8|68.5|71.2|69.2|71.9 |65.7|64.6|63.4|
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 64 Volta GPUs
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
Details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Model Architecture and Objective
xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write:
> We also consider the masked language modeling (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Following Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences between our approach and the MLM of Devlin et al. (2018) include the use of text streams of an arbitrary number of sentences (truncated at 256 tokens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1.
## Compute Infrastructure
### Hardware and Software
The developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. |
Helsinki-NLP/opus-mt-cs-fr | 3040852ec5404c1da928602fa1ec636b6ddf9a2e | 2021-09-09T21:29:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cs-fr | 91 | null | transformers | 4,770 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-fr
* source languages: cs
* target languages: fr
* OPUS readme: [cs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.cs.fr | 21.0 | 0.488 |
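A minimal usage sketch with the converted checkpoint (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

# Czech -> French translation with the converted Marian checkpoint
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-cs-fr")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-cs-fr")

batch = tokenizer(["Dobrý den, jak se máte?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```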
|
Helsinki-NLP/opus-mt-en-et | f696ce2db3f802cf4dd723ea97b2af1eda90c7e9 | 2021-09-09T21:35:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"et",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-et | 91 | null | transformers | 4,771 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-et
* source languages: en
* target languages: et
* OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2018-enet.en.et | 21.8 | 0.540 |
| newstest2018-enet.en.et | 23.3 | 0.556 |
| Tatoeba.en.et | 54.0 | 0.717 |
|
Helsinki-NLP/opus-mt-sl-ru | f537bd30d780082fffad4b80036fca19c87a67a8 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sl",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sl-ru | 91 | null | transformers | 4,772 | ---
language:
- sl
- ru
tags:
- translation
license: apache-2.0
---
### slv-rus
* source group: Slovenian
* target group: Russian
* OPUS readme: [slv-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-rus/README.md)
* source language(s): slv
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.slv.rus | 37.3 | 0.504 |
### System Info:
- hf_name: slv-rus
- source_languages: slv
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sl', 'ru']
- src_constituents: {'slv'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.test.txt
- src_alpha3: slv
- tgt_alpha3: rus
- short_pair: sl-ru
- chrF2_score: 0.504
- bleu: 37.3
- brevity_penalty: 0.988
- ref_len: 2101.0
- src_name: Slovenian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: sl
- tgt_alpha2: ru
- prefer_old: False
- long_pair: slv-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary | 5f8fd46cd438d48ce4ff9fb9a01024b857f6204c | 2021-05-18T20:56:29.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | HooshvareLab | null | HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary | 91 | 1 | transformers | 4,773 | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers`, which comes in both binary and multi-class forms.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
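For a quick start without the notebook, here is a minimal sketch using the `transformers` pipeline; the exact label names returned depend on the model's config and are not guaranteed here:

```python
from transformers import pipeline

# Binary sentiment classifier fine-tuned on DeepSentiPers
classifier = pipeline(
    "sentiment-analysis",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary",
)

# Example Persian comment (illustrative); label/score come from the model config
print(classifier("این محصول واقعا عالی بود"))
```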
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
KoichiYasuoka/bert-base-japanese-char-extended | ec39844667602ffc6fc2fa1958ee683b667421f8 | 2022-06-20T22:21:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-japanese-char-extended | 91 | null | transformers | 4,774 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "wikipedia"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "酸素ボンベを充[MASK]する。"
---
# bert-base-japanese-char-extended
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-base-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-base-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-base-japanese-wikipedia-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
```
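As a quick check, the widget sentence above can be reproduced with the fill-mask pipeline (a minimal sketch; the predicted tokens depend on the model):

```py
from transformers import pipeline

# Fill-mask sketch using the widget sentence from this card
unmasker = pipeline("fill-mask", model="KoichiYasuoka/bert-base-japanese-char-extended")
for candidate in unmasker("酸素ボンベを充[MASK]する。"):
    print(candidate["token_str"], candidate["score"])
```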
|
allenai/unifiedqa-v2-t5-base-1363200 | 48d92192cfceb184fc6593c1e60b9752a5877cc3 | 2022-02-22T00:26:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/unifiedqa-v2-t5-base-1363200 | 91 | 1 | transformers | 4,775 | # Further details: https://github.com/allenai/unifiedqa
|
cointegrated/rut5-small-chitchat | 6a8dd478cfecbb26a4637be2c101c131dd931fde | 2021-07-18T21:50:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"dialogue",
"russian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | cointegrated | null | cointegrated/rut5-small-chitchat | 91 | 3 | transformers | 4,776 | ---
language: "ru"
tags:
- dialogue
- russian
license: mit
---
This is a version of the [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small) model fine-tuned on some Russian dialogue data. It is not very smart and creative, but it is small and fast, and can serve as a fallback response generator for some chatbot or can be fine-tuned to imitate the style of someone.
The input of the model is the previous dialogue utterances separated by `'\n\n'`, and the output is the next utterance.
The model can be used as follows:
```python
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat")
text = 'Привет! Расскажи, как твои дела?'
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, top_p=0.5, num_return_sequences=3,
repetition_penalty=2.5,
max_length=32,
)
for h in hypotheses:
print(tokenizer.decode(h, skip_special_tokens=True))
# Как обычно.
# Сейчас - в порядке.
# Хорошо.
# Wall time: 363 ms
```
|
dtomas/roberta-base-bne-irony | 772b696b754dc0c279b8ae569b3604907034268c | 2021-12-22T13:55:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"irony",
"sarcasm",
"spanish"
] | text-classification | false | dtomas | null | dtomas/roberta-base-bne-irony | 91 | null | transformers | 4,777 | ---
language:
- es
tags:
- irony
- sarcasm
- spanish
widget:
- text: "¡Cómo disfruto peleándome con los Transformers!"
example_title: "Ironic"
- text: "Madrid es la capital de España"
example_title: "Non ironic"
---
# RoBERTa base finetuned for Spanish irony detection
## Model description
Model to perform irony detection in Spanish. This is a fine-tuned version of the [RoBERTa-base-bne model](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the [IroSvA](https://www.autoritas.net/IroSvA2019/) corpus. Only the Spanish-from-Spain variant was used in the training process; it comprises 2,400 tweets labeled as ironic/non-ironic.
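A usage sketch with the text-classification pipeline, reusing the widget examples above; the label names returned (e.g. `LABEL_0`/`LABEL_1` vs. ironic/non-ironic) depend on the model config and are an assumption here:

```python
from transformers import pipeline

# Binary irony classifier for Spanish
classifier = pipeline("text-classification", model="dtomas/roberta-base-bne-irony")
print(classifier("¡Cómo disfruto peleándome con los Transformers!"))  # expected: ironic
print(classifier("Madrid es la capital de España"))                   # expected: non-ironic
```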
|
flax-community/wav2vec2-spanish | bd4d4e898c994eecd8df48e21a6c3abd316a26d6 | 2021-07-19T05:02:39.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"es",
"dataset:common_voice",
"arxiv:2006.11477",
"transformers",
"audio",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | flax-community | null | flax-community/wav2vec2-spanish | 91 | null | transformers | 4,778 | ---
language: es
tags:
- audio
- automatic-speech-recognition
datasets:
- common_voice
---
# Wav2Vec2 Spanish
Wav2Vec2 model pre-trained using the Spanish portion of the Common Voice dataset. The model was trained with Flax on TPUs sponsored by Google, as part of the [Flax/JAX Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organised by HuggingFace.
## Model description
The model used for training is [Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by FacebookAI. It was introduced in the paper
"wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations" by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli (https://arxiv.org/abs/2006.11477).
This model is available in the 🤗 [Model Hub](https://huggingface.co/facebook/wav2vec2-base-960h).
## Training data
Spanish portion of [Common Voice](https://commonvoice.mozilla.org/en/datasets). Common Voice is an open source, multi-language dataset of voices part of Mozilla's initiative to help teach machines how real people speak.
The dataset is also available in the 🤗 [Datasets](https://huggingface.co/datasets/common_voice) library.
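Since this checkpoint is pretrained only (no CTC head), it is typically loaded for feature extraction or as a starting point for fine-tuning. A sketch, assuming the repository does not ship its own preprocessor config and borrowing the standard one from `facebook/wav2vec2-base`:

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Borrow a standard feature-extractor config (assumption: this repo ships none)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("flax-community/wav2vec2-spanish")

# One second of silent 16 kHz audio stands in for a real Spanish recording
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```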
## Team members
- María Grandury ([@mariagrandury](https://github.com/mariagrandury))
- Manuel Romero ([@mrm8488](https://huggingface.co/mrm8488))
- Eduardo González Ponferrada ([@edugp](https://huggingface.co/edugp))
- pcuenq ([@pcuenq](https://huggingface.co/pcuenq)) |
llange/xlm-roberta-large-spanish-clinical | 3ef89eb7322ba99a1125efb69eea06c94def53e4 | 2021-12-17T10:27:39.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2112.08754",
"transformers",
"autotrain_compatible"
] | fill-mask | false | llange | null | llange/xlm-roberta-large-spanish-clinical | 91 | null | transformers | 4,779 | # CLIN-X-ES: a pre-trained language model for the Spanish clinical domain
Details on the model, the pre-training corpus and the downstream task performance are given in the paper: "CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain" by Lukas Lange, Heike Adel, Jannik Strötgen and Dietrich Klakow.
The paper can be found [here](https://arxiv.org/abs/2112.08754).
In case of questions, please contact the authors as listed on the paper.
Please cite the above paper when reporting, reproducing or extending the results.
```bibtex
@misc{lange-etal-2021-clin-x,
  author        = {Lukas Lange and Heike Adel and Jannik Str{\"{o}}tgen and Dietrich Klakow},
  title         = {CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain},
  year          = {2021},
  eprint        = {2112.08754},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2112.08754}
}
```
## Training details
The model is based on the multilingual XLM-R transformer `(xlm-roberta-large)`, which was trained on 100 languages and showed superior performance in many different tasks across languages and can even outperform monolingual models in certain settings (Conneau et al. 2020).
Even though XLM-R was pre-trained on 53GB of Spanish documents, this was only 2% of the overall training data. To steer this model towards the Spanish clinical domain, we sample documents from the Scielo archive (https://scielo.org/)
and the MeSpEn resources (Villegas et al. 2018). The resulting corpus has a size of 790MB and is highly specific for the clinical domain.
We initialize CLIN-X using the pre-trained XLM-R weights and train masked language modeling (MLM) on the Spanish clinical corpus for 3 epochs which roughly corresponds to 32k steps. This allows researchers and practitioners to address
the Spanish clinical domain with an out-of-the-box tailored model.
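Loading the checkpoint for feature extraction or further fine-tuning is a standard `transformers` call; a minimal sketch (the example sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModel

# Spanish clinical XLM-R checkpoint
tokenizer = AutoTokenizer.from_pretrained("llange/xlm-roberta-large-spanish-clinical")
model = AutoModel.from_pretrained("llange/xlm-roberta-large-spanish-clinical")

# Encode a clinical sentence and inspect the contextual embeddings
inputs = tokenizer("El paciente presenta fiebre y tos persistente.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```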
## Results for Spanish concept extraction
We apply CLIN-X-ES to five Spanish concept extraction tasks from the clinical domain in a standard sequence labeling architecture similar to Devlin et al. 2019 and compare to a Spanish BERT model called BETO. In addition, we perform experiments with an improved architecture `(+ OurArchitecture)` as described in the paper linked above. The code for our model architecture can be found [here](https://github.com/boschresearch/clin_x).
| | Cantemist | Meddocan | Meddoprof (NER) | Meddoprof (CLASS) | Pharmaconer |
|------------------------------------------|-----------|----------|-----------------|-------------------|-------------|
| BETO (Spanish BERT) | 81.30 | 96.81 | 79.19 | 74.59 | 87.70 |
| CLIN-X (ES) | 83.22 | 97.08 | 79.54 | 76.95 | 90.05 |
| CLIN-X (ES) + OurArchitecture | **88.24** | **98.00** | **81.68** | **80.54** | **92.27** |
### Results for English concept extraction
As the CLIN-X-ES model is based on XLM-R, the model is still multilingual and we demonstrate the positive impact of cross-language domain adaptation by applying this model to five different English sequence labeling tasks from i2b2.
We found that further transfer from related concept extraction is particularly helpful in this cross-language setting. For a detailed description of the transfer process and all other models, we refer to our paper.
| | i2b2 2006 | i2b2 2010 | i2b2 2012 (Concept) | i2b2 2012 (Time) | i2b2 2014 |
|------------------------------------------|-----------|-----------|---------------|---------------|-----------|
| BERT | 94.80 | 85.25 | 76.51 | 75.28 | 94.86 |
| ClinicalBERT | 94.8 | 87.8 | 78.9 | 76.6 | 93.0 |
| CLIN-X (ES) | 95.49 | 87.94 | 79.58 | 77.57 | 96.80 |
| CLIN-X (ES) + OurArchitecture | 98.30 | 89.10 | 80.42 | 78.48 | **97.62** |
| CLIN-X (ES) + OurArchitecture + Transfer | **89.50** | **89.74** | **80.93** | **79.60** | 97.46 |
## Purpose of the project
This software is a research prototype, solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.
## License
The CLIN-X models are open-sourced under the CC-BY 4.0 license.
See the [LICENSE](LICENSE) file for details. |
philschmid/tiny-distilbert-classification | 2ec87b1f823ed23236b016ad3f7c767222021877 | 2021-09-02T07:43:52.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | philschmid | null | philschmid/tiny-distilbert-classification | 91 | null | transformers | 4,780 | # Test model
> ## This model is used to run tests for the Hugging Face DLCs |
reshinthadith/BashGPTNeo | 2088f79bd44b8a2c2e77cf98d08d91798eb3d05e | 2021-09-01T15:22:29.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"English",
"Bash",
"dataset:nlc2cmd",
"transformers",
"code-representation-learning",
"program-synthesis"
] | text-generation | false | reshinthadith | null | reshinthadith/BashGPTNeo | 91 | null | transformers | 4,781 | ---
language:
- English
- Bash
thumbnail: "Neural Program Synthesis for Bash"
tags:
- code-representation-learning
- program-synthesis
datasets:
- nlc2cmd
---
# BashGPT-Neo
## What is it ?
BashGPT-Neo is a [Neural Program Synthesis](https://www.microsoft.com/en-us/research/project/neural-program-synthesis/) model for Bash commands and shell scripts, trained on the data provided by [NLC2CMD](https://nlc2cmd.us-east.mybluemix.net/). It is a fine-tuned version of GPT-Neo-125M by EleutherAI.
## Usage
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("reshinthadith/BashGPTNeo")
model = AutoModelForCausalLM.from_pretrained("reshinthadith/BashGPTNeo")
```
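A generation sketch for turning a natural-language request into a command, continuing from the snippet above; the prompt format below is an assumption and should be checked against the training data in the repo:

```py
# Hypothetical prompt format -- verify against the NLC2CMD training data
prompt = "english: list all files in the current directory bash:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,
    num_beams=4,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```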
## Core Contributors 👥
- [Reshinth Adithyan](https://github.com/reshinthadithyan)
- [Aditya Thuruvas](https://github.com/dhuruvasaditya) |
rexoscare/string_instrument_detector | 5054f6cf64bdeabdd60001bd4c87d35842cde21e | 2021-07-03T17:54:43.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | rexoscare | null | rexoscare/string_instrument_detector | 91 | null | transformers | 4,782 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: string_instrument_detector
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7395833134651184
---
# string_instrument_detector
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Banjo

#### Guitar

#### Mandolin

#### Ukulele
 |
chitanda/merit-roberta-large-v1 | ff498af4c27005fbfdecba7cabefc9e64eb3e5a8 | 2022-02-26T12:26:41.000Z | [
"pytorch",
"roberta",
"transformers",
"license:mit"
] | null | false | chitanda | null | chitanda/merit-roberta-large-v1 | 91 | null | transformers | 4,783 | ---
license: mit
---
|
KES/T5-TTParser | 88a9c1f519008c4ca705a9ad861e79ae66c7bb07 | 2022-06-04T20:11:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:Custom dataset",
"dataset:Creolised JFLEG",
"arxiv:1702.04066",
"transformers",
"Trinidad and Tobago English Parser",
"Caribe",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | KES | null | KES/T5-TTParser | 91 | 1 | transformers | 4,784 | ---
language: en
tags:
- Trinidad and Tobago English Parser
- text2text-generation
- Caribe
license: cc-by-nc-sa-4.0
datasets:
- Custom dataset
- Creolised JFLEG
---
# Trinidad English Creole Parser
This model was trained as a parser for Trinidad English Creole.
---
# Model
This model utilises the T5-base pre-trained model. It was fine-tuned on a combination of a custom dataset and a creolised version of the [JFLEG](https://arxiv.org/abs/1702.04066) dataset, produced with the file encoding feature of the Caribe library. For more on Caribbean Creole, check out the [Caribe](https://pypi.org/project/Caribe/) library.
___
# Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KES/T5-TTParser")
model = AutoModelForSeq2SeqLM.from_pretrained("KES/T5-TTParser")
txt = "Ah have live with mi paremnts en London"
inputs = tokenizer("grammar:"+txt, truncation=True, return_tensors='pt')
output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
correction=tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(correction)) #Correction: Ah live with meh parents in London.
``` |
IDEA-CCNL/Taiyi-vit-87M-D | 967326d6b96e1b1cb65f7e1e4377a367b309699a | 2022-05-12T02:52:56.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"license:apache-2.0"
] | image-classification | false | IDEA-CCNL | null | IDEA-CCNL/Taiyi-vit-87M-D | 91 | null | transformers | 4,785 | ---
license: apache-2.0
---
# Taiyi-vit-87M-D (base-sized model)
Based on pre-trained clip-vit-base **(patch 16, resolution 224x224)**, we introduce multimodal information. For multimodal pre-training tasks, we design several special training objectives in our paper. Our code and details of pre-training tasks will be made publicly available upon paper acceptance.
The pre-training datasets are MSCOCO and VG. "D" implies a special training method.
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies.
# Usage
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('IDEA-CCNL/Taiyi-vit-87M-D')
model = ViTForImageClassification.from_pretrained('IDEA-CCNL/Taiyi-vit-87M-D')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
# Predicted class: Egyptian cat
```
# Benchmark
| | CIFAR10 | ImageNet1k |
|--------------------------------------|:-------:|:----------:|
| clip-vit-base-patch16-224 (official) | 96.2 | 80.2 |
| Taiyi-vit-87M-D (local) | 98.7 | 82.4 |
The local test settings are:
- learning_rate: 2e-5
- batch_size: 128
- num_train_epochs: 5
- weight_decay: 0.01
# Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
ckiplab/bert-tiny-chinese-ws | f4a11d4b00b06502c260d8134a375a4000b09d7b | 2022-05-10T03:28:12.000Z | [
"pytorch",
"bert",
"token-classification",
"zh",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | token-classification | false | ckiplab | null | ckiplab/bert-tiny-chinese-ws | 91 | null | transformers | 4,786 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributers
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ws')
```
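A segmentation sketch on top of the snippet above; note that the word-segmentation checkpoint is a token-classification model, so it is loaded here with `AutoModelForTokenClassification` (the character-level tag scheme in the output is an assumption to verify against the project docs):

```
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

# The WS model tags each character; decode the tags into word boundaries downstream
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/bert-tiny-chinese-ws')
ws = pipeline('token-classification', model=model, tokenizer=tokenizer)
print(ws('傅達仁今將執行安樂死'))
```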
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
bongsoo/sentencebert_v1.0 | 25a2079e0331317c8bc9059c2427ef234a66d851 | 2022-07-28T03:04:28.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"en",
"ko"
] | sentence-similarity | false | bongsoo | null | bongsoo/sentencebert_v1.0 | 91 | 2 | sentence-transformers | 4,787 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- en
- ko
---
# sentencebert_v1.0
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
- This model was built by further training **bert-base-multilingual-cased** on the **kowiki_20200920** corpus to extend its Korean vocabulary, distilling the result into a DistilBERT, turning that into a Sentence-BERT, and then applying additional NLI/STS teacher-student distillation.
- For details on how the model was built, see [here](https://github.com/kobongsoo/BERT/tree/master).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["오늘은 비가 올것 같다", "내일은 춥고 눈이 올거다"]
model = SentenceTransformer('bongsoo/sentencebert_v1.0')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/sentencebert_v1.0')
model = AutoModel.from_pretrained('bongsoo/sentencebert_v1.0')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
- Performance is measured on the **korsts** (1,379 sentence pairs) and **klue-sts** (519 sentence pairs) corpora.
|모델 |korsts|klue-sts|korsts+klue-sts|
|:--------|------:|--------:|--------------:|
|bongsoo/sentencebert_v1.0|0.743|0.799|0.638|
|bongsoo/sentencebert_v1.1|0.806|0.749|0.633|
|distiluse-base-multilingual-cased-v2|0.747|0.785|0.644|
|paraphrase-multilingual-mpnet-base-v2|0.820|0.799|0.721|
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bongsoo/sentencebert_v1.0)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 18432 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 9216,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "warmupconstant",
"steps_per_epoch": null,
"warmup_steps": 9216,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
bongsoo |
ismail-lucifer011/autotrain-name_all-904029577 | 7a0ea3a6b0a2785686b27a4154e26236e7548d5f | 2022-05-24T15:43:22.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:ismail-lucifer011/autotrain-data-name_all",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | ismail-lucifer011 | null | ismail-lucifer011/autotrain-name_all-904029577 | 91 | null | transformers | 4,788 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ismail-lucifer011/autotrain-data-name_all
co2_eq_emissions: 0.8375653425894861
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 904029577
- CO2 Emissions (in grams): 0.8375653425894861
## Validation Metrics
- Loss: 0.0035200684797018766
- Accuracy: 0.9989316041363876
- Precision: 0.9877899024589919
- Recall: 0.9933336010601984
- F1: 0.9905539954046464
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ismail-lucifer011/autotrain-name_all-904029577
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ismail-lucifer011/autotrain-name_all-904029577", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ismail-lucifer011/autotrain-name_all-904029577", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
alibaba-pai/pai-dkplm-financial-base-zh | 1ce4f42ff8d5073d61886c0e3c6df501e694c815 | 2022-06-10T06:49:32.000Z | [
"pytorch",
"bert",
"pretraining",
"zh",
"arxiv:2205.00258",
"arxiv:2112.01047",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | alibaba-pai | null | alibaba-pai/pai-dkplm-financial-base-zh | 91 | 1 | transformers | 4,789 | ---
language: zh
pipeline_tag: fill-mask
widget:
- text: "根据新闻报道,三大[MASK]数午后集体涨超1%。"
- text: "用各种途径支持中小[MASK]企业融资。"
tags:
- bert
license: apache-2.0
---
## Chinese DKPLM (Decomposable Knowledge-enhanced Pre-trained Language Model) for the financial domain
For Chinese natural language processing in specific domains, we provide **Chinese DKPLM (Decomposable Knowledge-enhanced Pre-trained Language Model)** for the financial domain named **pai-dkplm-financial-base-zh**, from our AAAI 2021 paper named **DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding**.
This repository is developed based on the EasyNLP framework: [https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP ) developed by the Alibaba PAI team.
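A fill-mask sketch matching the widget examples above (a minimal example, assuming the checkpoint exposes a standard BERT masked-LM head):

```python
from transformers import pipeline

# Financial-domain Chinese fill-mask
unmasker = pipeline("fill-mask", model="alibaba-pai/pai-dkplm-financial-base-zh")
print(unmasker("根据新闻报道,三大[MASK]数午后集体涨超1%。"))
```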
## Citation
If you find the resource is useful, please cite the following papers in your work.
- For the EasyNLP framework:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
  publisher = {arXiv},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
```
- For DKPLM:
```
@article{dkplm,
title = {DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding},
author = {Zhang, Taolin and Wang, Chengyu and Hu, Nan and Qiu, Minghui and Tang, Chengguang and He, Xiaofeng and Huang, Jun},
url = {https://arxiv.org/abs/2112.01047},
publisher = {arXiv},
year = {2021}
}
``` |
docketanalyzer/distilroberta-base-ddcl | 09d99f735e5a3bdde0d74411a6c6c95ff9788f57 | 2021-05-20T16:12:23.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | docketanalyzer | null | docketanalyzer/distilroberta-base-ddcl | 90 | null | transformers | 4,790 | Entry not found |
facebook/wav2vec2-large-fr-voxpopuli | 3a2f030a11d4d1cabf1a62e3c3c55239c6b59b96 | 2021-07-06T02:11:48.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-fr-voxpopuli | 90 | null | transformers | 4,791 | ---
language: fr
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the fr unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
federicopascual/finetuned-sentiment-analysis-model | 7190a2735c09fd9b62e9da30466cb7e382ff2645 | 2021-12-28T15:57:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | federicopascual | null | federicopascual/finetuned-sentiment-analysis-model | 90 | null | transformers | 4,792 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- precision
- recall
model-index:
- name: finetuned-sentiment-analysis-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.909
- name: Precision
type: precision
value: 0.8899803536345776
- name: Recall
type: recall
value: 0.9282786885245902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-sentiment-analysis-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2868
- Accuracy: 0.909
- Precision: 0.8900
- Recall: 0.9283
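A quick inference sketch for this checkpoint (the mapping of `LABEL_0`/`LABEL_1` to negative/positive is an assumption; check the model config):

```python
from transformers import pipeline

# IMDB-style sentiment inference with the fine-tuned checkpoint
classifier = pipeline("text-classification", model="federicopascual/finetuned-sentiment-analysis-model")
print(classifier("I absolutely loved this movie!"))
```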
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
google/tapas-tiny-finetuned-wtq | 7bc868b0c8c0ff220769a2e78b6306870fb80d8b | 2021-11-29T10:45:11.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wtq",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-tiny-finetuned-wtq | 90 | null | transformers | 4,793 | ---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- wtq
---
# TAPAS tiny model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_tiny` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
**TINY** | **noreset** | **0.0823** | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
**TINY** | **reset** | **0.1039** | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
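As a concrete starting point, here is a minimal sketch with the table-question-answering pipeline (the table contents are made up for illustration; `pandas` must be installed):

```python
from transformers import pipeline

# TAPAS takes the table as a dict mapping column names to lists of cell strings
qa = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-wtq")
table = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
print(qa(table=table, query="How many movies does Leonardo Di Caprio have?"))
```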
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors did first convert the WTQ dataset into the format of SQA using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
julien-c/flair-ner | 9b28741e755f4f34f588d50e812ae590a3b6e511 | 2020-11-26T22:01:14.000Z | [
"pytorch",
"en",
"dataset:conll2003",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | julien-c | null | julien-c/flair-ner | 90 | null | flair | 4,794 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
inference: false
---
## Flair NER model `en-ner-conll03-v0.4.pt`
Imported from https://nlp.informatik.hu-berlin.de/resources/models/ner/
### Demo: How to use in Flair
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence(
"My name is Julien, I currently live in Paris, I work at Hugging Face, Inc."
)
tagger = SequenceTagger.load("julien-c/flair-ner")
# predict NER tags
tagger.predict(sentence)
# print sentence with predicted tags
print(sentence.to_tagged_string())
```
yields the following output:
> `My name is Julien <S-PER> , I currently live in Paris <S-LOC> , I work at Hugging <B-LOC> Face <E-LOC> .`
### Thanks [@stefan-it](https://huggingface.co/stefan-it) for the Flair integration ❤️ 🔥
|
liam168/chat-DialoGPT-small-zh | cf12fe8e8d5a7f4ca6c26ba47c249751597c34f8 | 2021-08-04T09:01:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"zh",
"transformers",
"license:apache-2.0"
] | text-generation | false | liam168 | null | liam168/chat-DialoGPT-small-zh | 90 | 1 | transformers | 4,795 | ---
language: zh
widget:
- text: "你们宿舍都是这么厉害的人吗"
license: apache-2.0
---
# liam168/chat-DialoGPT-small-zh
## Model description
用中文聊天数据训练的模型;
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
mode_name = 'liam168/chat-DialoGPT-small-zh'
tokenizer = AutoTokenizer.from_pretrained(mode_name)
model = AutoModelForCausalLM.from_pretrained(mode_name)
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in PyTorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last output tokens from bot
print("Answer: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
orzhan/rugpt3-simplify-large | 96bbd89cc7bb5eb4646216fa869ae7f49a7f3432 | 2021-05-31T14:31:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | orzhan | null | orzhan/rugpt3-simplify-large | 90 | null | transformers | 4,796 | ---
language: ru
---
Text simplification model for Russian; a fine-tuned version of ruGPT3-large.
https://github.com/orzhan/rusimscore
|
allenai/aspire-sentence-embedder | 0379fd17fb957625b414a392646cb2406a070424 | 2022-03-09T00:03:57.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | allenai | null | allenai/aspire-sentence-embedder | 90 | null | transformers | 4,797 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `cosentbert` and represents a baseline sentence encoder for scientific text. The paper trains two versions of `cosentbert`, one for biomedical scientific text and another one for computer science text. This released model is trained on a union of all available data across scientific domains in the Semantic Scholar Open Research Corpus (S2ORC) dataset. This difference in training data leads to different, though close, evaluation performance than in the paper.
## Model Card
**Model description:** This model represents a SciBERT based sentence encoder pre-trained for scientific text similarity. The model represents a sentence with a single vector obtained by reading the CLS token for the sentence.
**Training data:** The model is trained on sets of co-citation context sentences referencing the same set of papers in a contrastive learning setup. These sentences can often be considered as paraphrases since co-citation sentences citing the same papers often describe similar aspects of the co-cited papers. The model is trained on 4.3 million sentence pairs of this type. In training the model negative examples for the contrastive loss are obtained as random in-batch negatives. An example pair of sentences used for training is as follows:
> "The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base."
>
> "Distant supervision [31, 43, 21, 49] generates training data automatically by aligning texts and a knowledge base (KB) (see Fig. 1 )."
**Training procedure:** The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-citation context pairs. All the training data used was in English.
**Intended uses & limitations:** This model is trained for sentence similarity tasks in scientific text and is best used as a sentence encoder. However with appropriate fine-tuning the model can also be used for other tasks such as classification. Note that about 50% of the training data consists of text from biomedical text and performance may be superior on text from bio-medicine and similar domains.
**How to use:** This model can be used as a BERT model via the `transformers` library:
```
from transformers import AutoModel, AutoTokenizer
aspire_sent = AutoModel.from_pretrained('allenai/aspire-sentence-embedder')
aspire_tok = AutoTokenizer.from_pretrained('allenai/aspire-sentence-embedder')
s='We present a new scientific document similarity model based on matching fine-grained aspects of texts.'
inputs = aspire_tok(s, padding=True, truncation=True, return_tensors="pt", max_length=512)
result = aspire_sent(**inputs)
clsrep = result.last_hidden_state[:,0,:]
```
OR via the `sentence_transformers` library:
```
from sentence_transformers import SentenceTransformer, models
word_embedding_model = models.Transformer('allenai/aspire-sentence-embedder', max_seq_length=512)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='cls')
aspire_sb = SentenceTransformer(modules=[word_embedding_model, pooling_model])
clsrep_sb = aspire_sb.encode([s])
```
**Variable and metrics:**
Since the paper this model was trained for proposes methods for similarity of scientific abstracts, this model is evaluated on information retrieval datasets with document level queries. The datasets used for the paper include RELISH (biomedical/English), TRECCOVID (biomedical/English), and CSFCube (computer science/English). These are all detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). RELISH and TRECCOVID represent a abstract level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts. CSFCube presents a slightly different task and presents a set of finer-grained sentences in the abstract based on which a finer-grained retrieval must be made. This task represents the closest task to a sentence similarity task.
In using this sentence level model for abstract level retrieval we rank documents by the minimal L2 distance between the sentences in the query and candidate abstract.
**Evaluation results:**
The released model `aspire-sentence-embedder` is compared against 1) `all-mpnet-base-v2` a sentence-bert model trained on ~1 billion training examples, 2) `paraphrase-TinyBERT-L6-v2` a sentence-bert model trained on paraphrase pairs, and 3) the `cosentbert` models used in our paper.
| | CSFCube aggregated | CSFCube aggregated | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:------------------:|:-------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 | MAP | NDCG%20 |
| `all-mpnet-base-v2` | 34.64 | 54.94 | 17.35 | 43.87 | 52.92 | 69.69 |
| `paraphrase-TinyBERT-L6-v2` | 26.77 | 48.57 | 11.12 | 34.85 | 50.80 | 67.35 |
| `cosentbert` | 28.95 | 50.68 | 12.80 | 38.07 | 50.04 | 66.35 |
| `aspire-sentence-embedder` | 30.58 | 53.86 | 11.64 | 36.50 | 50.36 | 66.63 |
The released model sees similar performance across datasets to the per-domain `cosentbert` models used in our paper (and reported above). |
Helsinki-NLP/opus-mt-tc-big-zle-en | 2481ba3f65e32ebd1131528c2bc76ed6fe330c43 | 2022-06-01T13:09:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"en",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-en | 90 | null | transformers | 4,798 | ---
language:
- be
- en
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-en
results:
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: flores101-devtest
type: flores_101
args: rus eng devtest
metrics:
- name: BLEU
type: bleu
value: 35.2
- task:
name: Translation ukr-eng
type: translation
args: ukr-eng
dataset:
name: flores101-devtest
type: flores_101
args: ukr eng devtest
metrics:
- name: BLEU
type: bleu
value: 39.2
- task:
name: Translation bel-eng
type: translation
args: bel-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-eng
metrics:
- name: BLEU
type: bleu
value: 48.1
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 57.4
- task:
name: Translation ukr-eng
type: translation
args: ukr-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-eng
metrics:
- name: BLEU
type: bleu
value: 56.9
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: tico19-test
type: tico19-test
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 33.3
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2012
type: wmt-2012-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 39.2
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2013
type: wmt-2013-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 31.3
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2014
type: wmt-2014-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 40.5
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2015
type: wmt-2015-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 36.1
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2016
type: wmt-2016-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 35.7
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2017
type: wmt-2017-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 40.8
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2018
type: wmt-2018-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 35.2
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2019
type: wmt-2019-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 41.6
- task:
name: Translation rus-eng
type: translation
args: rus-eng
dataset:
name: newstest2020
type: wmt-2020-news
args: rus-eng
metrics:
- name: BLEU
type: bleu
value: 36.9
---
# opus-mt-tc-big-zle-en
Neural machine translation model for translating from East Slavic languages (zle) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these if you use this model.)
```bibtex
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): bel rus ukr
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opusTCv20210807+bt_transformer-big_2022-03-17.zip)
* more information on released models: [OPUS-MT zle-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Скільки мені слід купити пива?",
"Я клієнтка."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# How much beer should I buy?
# I'm a client.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-en")
print(pipe("Скільки мені слід купити пива?"))
# expected output: How much beer should I buy?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-eng | tatoeba-test-v2021-08-07 | 0.65221 | 48.1 | 2500 | 18571 |
| rus-eng | tatoeba-test-v2021-08-07 | 0.71452 | 57.4 | 19425 | 147872 |
| ukr-eng | tatoeba-test-v2021-08-07 | 0.71162 | 56.9 | 13127 | 88607 |
| bel-eng | flores101-devtest | 0.51689 | 18.1 | 1012 | 24721 |
| rus-eng | flores101-devtest | 0.62581 | 35.2 | 1012 | 24721 |
| ukr-eng | flores101-devtest | 0.65001 | 39.2 | 1012 | 24721 |
| rus-eng | newstest2012 | 0.63724 | 39.2 | 3003 | 72812 |
| rus-eng | newstest2013 | 0.57641 | 31.3 | 3000 | 64505 |
| rus-eng | newstest2014 | 0.65667 | 40.5 | 3003 | 69190 |
| rus-eng | newstest2015 | 0.61747 | 36.1 | 2818 | 64428 |
| rus-eng | newstest2016 | 0.61414 | 35.7 | 2998 | 69278 |
| rus-eng | newstest2017 | 0.65365 | 40.8 | 3001 | 69025 |
| rus-eng | newstest2018 | 0.61386 | 35.2 | 3000 | 71291 |
| rus-eng | newstest2019 | 0.65476 | 41.6 | 2000 | 42642 |
| rus-eng | newstest2020 | 0.64878 | 36.9 | 991 | 20217 |
| rus-eng | newstestB2020 | 0.65685 | 39.3 | 991 | 20423 |
| rus-eng | tico19-test | 0.63280 | 33.3 | 2100 | 56323 |
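As a hedged aside (not part of the original card), scores like those in the table above can presumably be reproduced from the linked test-set translation files with `sacrebleu`. A minimal sketch, with placeholder file names for the hypothesis and reference sides:
```python
import sacrebleu

# Placeholder paths: one system translation per line, aligned with references,
# e.g. extracted from the linked test-set translation file.
hyps = [line.rstrip("\n") for line in open("rus-eng.hyp.txt", encoding="utf-8")]
refs = [line.rstrip("\n") for line in open("rus-eng.ref.txt", encoding="utf-8")]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # BLEU on a 0-100 scale
chrf = sacrebleu.corpus_chrf(hyps, [refs])   # chrF on a 0-100 scale
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score / 100:.5f}")
```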
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 22:17:11 EET 2022
* port machine: LM0-400-22516.local
|
cambridgeltl/simctg_rocstories | 61b674aa71e858af99ff2fbd77f91d4a3c807cbb | 2022-06-25T19:33:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2202.06417",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/simctg_rocstories | 90 | null | transformers | 4,799 | This model provides a GPT-2 language model trained with SimCTG on the ROCStories benchmark [(Mostafazadeh et al., 2016)](https://aclanthology.org/N16-1098.pdf) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and contrastive search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we briefly illustrate how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```bash
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_rocstories'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
```
## 3. Prepare the Text Prefix:
```python
prompt = r"Accident in the Lab <|endoftext|>"
print ('Prefix is: {}'.format(prompt))
tokens = model.tokenizer.tokenize(prompt)
input_ids = model.tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 5, 0.65, 45
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output).split(model.tokenizer.eos_token)[1].strip())
'''
Prefix is: Accident in the Lab <|endoftext|>
Output:
----------------------------------------------------------------------------------------------------
Tom went to work one day. He noticed a lab accident in the lab. Tom was worried about his safety at work.
Unfortunately the accident didn't go well. Tom wound up leaving early to get back on the job.
'''
```
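To build intuition for what `beam_width` (k) and `alpha` control, the following is a conceptual sketch of the per-step scoring rule of contrastive search as described in the SimCTG paper; the function and tensor names are illustrative, not the library's actual implementation:
```python
import torch
import torch.nn.functional as F

def contrastive_step_scores(probs, cand_hidden, prev_hidden, alpha):
    """Score the top-k candidate tokens at one decoding step.
    probs: (k,) model probabilities of the k candidate tokens.
    cand_hidden: (k, dim) hidden state the model would reach after each candidate.
    prev_hidden: (t, dim) hidden states of the tokens generated so far.
    """
    # Degeneration penalty: each candidate's maximum cosine similarity
    # to any previously generated token representation.
    sims = F.cosine_similarity(
        cand_hidden.unsqueeze(1), prev_hidden.unsqueeze(0), dim=-1)  # (k, t)
    penalty = sims.max(dim=1).values
    # alpha trades off model confidence against the repetition penalty;
    # the next token is the argmax of these scores over the k candidates.
    return (1 - alpha) * probs - alpha * penalty
```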
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|