modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hyunwoongko/ctrlsum-cnndm | 3f8f0a6caf964a79f13ba9cbb28a25757b72b4cd | 2021-03-21T15:55:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hyunwoongko | null | hyunwoongko/ctrlsum-cnndm | 1,307 | 2 | transformers | 1,600 | Entry not found |
dennlinger/roberta-cls-consec | 26d06e22b97525aa959aaa5dfdaf4e3ab8bcd387 | 2021-06-14T13:07:40.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"arxiv:2012.03619",
"transformers"
] | text-classification | false | dennlinger | null | dennlinger/roberta-cls-consec | 1,304 | 1 | transformers | 1,601 | # About this model: Topical Change Detection in Documents
This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information on the GitHub page for the paper [here](https://github.com/dennlinger/TopicalChange), or read the [paper itself](https://arxiv.org/abs/2012.03619). The weights are based on RoBERTa-base.
# Load the model
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('dennlinger/roberta-cls-consec')
model = AutoModelForSequenceClassification.from_pretrained('dennlinger/roberta-cls-consec')
```
# Input Format
The model expects two segments that are separated with the `[SEP]` token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia.
# Training objective
The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments.
If you are experimenting via the Hugging Face Model API, the `LABEL`s are interpreted as follows (a minimal inference sketch follows the list):
* `LABEL_0`: Two input segments separated by `[SEP]` do *not* belong to the same topic.
* `LABEL_1`: Two input segments separated by `[SEP]` do belong to the same topic.
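Putting the input format and label interpretation together, a minimal inference sketch could look like the following (the two example paragraphs and the softmax post-processing are illustrative assumptions, not part of the original card; the segments are passed as a text pair so the tokenizer inserts its own separator token between them):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dennlinger/roberta-cls-consec')
model = AutoModelForSequenceClassification.from_pretrained('dennlinger/roberta-cls-consec')

# Hypothetical paragraphs from a Terms-of-Service-like document
segment_a = "This Agreement governs your use of the service."
segment_b = "By accessing the service, you accept these terms."

# Passing the segments as a text pair lets the tokenizer insert the separator token
inputs = tokenizer(segment_a, segment_b, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
# Index 0 -> LABEL_0 (different topics), index 1 -> LABEL_1 (same topic)
print({"LABEL_0": probs[0].item(), "LABEL_1": probs[1].item()})
```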
# Performance
The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.
Note that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs. |
pedropei/sentence-level-certainty | 57bb19e0804a77689ca02f2b1d408d162413cdc2 | 2021-09-29T05:35:19.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pedropei | null | pedropei/sentence-level-certainty | 1,303 | null | transformers | 1,602 | Entry not found |
johngiorgi/declutr-small | d899ea3e95e6a65499184647d080379e6c477208 | 2022-03-11T14:47:48.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:2006.03659",
"transformers",
"autotrain_compatible"
] | fill-mask | false | johngiorgi | null | johngiorgi/declutr-small | 1,302 | 2 | transformers | 1,603 | # DeCLUTR-small
## Model description
The "DeCLUTR-small" model from our paper: [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers).
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-small")
# Prepare some text to embed
texts = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small")
model = AutoModel.from_pretrained("johngiorgi/declutr-small")
# Prepare some text to embed
text = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
    sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
    sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
### BibTeX entry and citation info
```bibtex
@article{Giorgi2020DeCLUTRDC,
title={DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations},
author={John M Giorgi and Osvald Nitski and Gary D. Bader and Bo Wang},
journal={ArXiv},
year={2020},
volume={abs/2006.03659}
}
``` |
monologg/koelectra-base-discriminator | c7005c19e7e523a86c96ad67fbd49c888ebbf287 | 2021-10-20T16:55:57.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers",
"korean",
"license:apache-2.0"
] | null | false | monologg | null | monologg/koelectra-base-discriminator | 1,298 | null | transformers | 1,604 | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions[0].tolist()[1:-1])))  # drop the [CLS]/[SEP] positions
```
|
anton-l/wav2vec2-base-ft-keyword-spotting | 30629617f4408a39489bec210f6b5127b6fbaafc | 2021-10-27T22:16:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | anton-l | null | anton-l/wav2vec2-base-ft-keyword-spotting | 1,294 | 1 | transformers | 1,605 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ft-keyword-spotting
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
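The card does not yet document usage; as a minimal sketch (assuming the `audio-classification` pipeline and a 16 kHz mono recording; the file name below is a placeholder, not from the card):
```python
from transformers import pipeline

# Keyword-spotting inference via the audio-classification pipeline;
# "keyword.wav" is a placeholder for any 16 kHz mono recording of a short command.
classifier = pipeline(
    "audio-classification",
    model="anton-l/wav2vec2-base-ft-keyword-spotting",
)
for prediction in classifier("keyword.wav", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```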
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8972 | 1.0 | 399 | 0.7023 | 0.8174 |
| 0.3274 | 2.0 | 798 | 0.1634 | 0.9773 |
| 0.1993 | 3.0 | 1197 | 0.1048 | 0.9788 |
| 0.1777 | 4.0 | 1596 | 0.0824 | 0.9826 |
| 0.1527 | 5.0 | 1995 | 0.0812 | 0.9810 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
moussaKam/barthez-orangesum-abstract | 2f4969c2f16bf27aaddb87bf9b862ccead48135b | 2021-11-15T13:03:03.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"fr",
"arxiv:2010.12321",
"transformers",
"summarization",
"bart",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | moussaKam | null | moussaKam/barthez-orangesum-abstract | 1,294 | 1 | transformers | 1,606 | ---
tags:
- summarization
- bart
language:
- fr
license: apache-2.0
widget:
- text: Citant les préoccupations de ses clients dénonçant des cas de censure après la suppression du compte de Trump, un fournisseur d'accès Internet de l'État de l'Idaho a décidé de bloquer Facebook et Twitter. La mesure ne concernera cependant que les clients mécontents de la politique de ces réseaux sociaux.
---
### BARThez model fine-tuned on OrangeSum (abstract generation)
fine-tuning: examples/seq2seq (as of Feb 08 2021)
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
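The card includes no usage snippet; a minimal sketch with the `summarization` pipeline (the input text is the widget example from this card, and the generation length is an illustrative choice) might look like:
```python
from transformers import pipeline

# Abstract-style summarization in French with the fine-tuned BARThez checkpoint
summarizer = pipeline("summarization", model="moussaKam/barthez-orangesum-abstract")

text = (
    "Citant les préoccupations de ses clients dénonçant des cas de censure "
    "après la suppression du compte de Trump, un fournisseur d'accès Internet "
    "de l'État de l'Idaho a décidé de bloquer Facebook et Twitter. La mesure "
    "ne concernera cependant que les clients mécontents de la politique de "
    "ces réseaux sociaux."
)
print(summarizer(text, max_length=64)[0]["summary_text"])
```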
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
uclanlp/plbart-python-en_XX | 48bf6e4889bdb9bafd12381a4e9a9a1e0fe224eb | 2021-11-09T17:09:27.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-python-en_XX | 1,292 | 1 | transformers | 1,607 | Entry not found |
valhalla/gpt-neo-random-tiny | 6e358e9d007d3bf2f592832a2e1c4dce15fe409a | 2021-04-07T16:38:40.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"transformers"
] | feature-extraction | false | valhalla | null | valhalla/gpt-neo-random-tiny | 1,292 | null | transformers | 1,608 | **This model is uploaded for testing purpose. It's random model not trained on anything** |
Helsinki-NLP/opus-mt-ka-en | f6f4a42415aa81a926f6596654cfcbd37cefc214 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ka",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ka-en | 1,288 | null | transformers | 1,609 | ---
language:
- ka
- en
tags:
- translation
license: apache-2.0
---
### kat-eng
* source group: Georgian
* target group: English
* OPUS readme: [kat-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-eng/README.md)
* model: transformer-align
* source language(s): kat
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.eval.txt)
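The card lists no usage snippet; a minimal sketch with the `translation` pipeline (the Georgian input sentence is an illustrative assumption, not from the card) could be:
```python
from transformers import pipeline

# Georgian -> English translation with the Marian checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ka-en")
print(translator("გამარჯობა, როგორ ხარ?")[0]["translation_text"])
```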
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat.eng | 37.9 | 0.538 |
### System Info:
- hf_name: kat-eng
- source_languages: kat
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'en']
- src_constituents: {'kat'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.test.txt
- src_alpha3: kat
- tgt_alpha3: eng
- short_pair: ka-en
- chrF2_score: 0.5379999999999999
- bleu: 37.9
- brevity_penalty: 0.991
- ref_len: 5992.0
- src_name: Georgian
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: ka
- tgt_alpha2: en
- prefer_old: False
- long_pair: kat-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
allegro/plt5-small | 5c65ab3ab269dda279491e7e685f0adf1dadef61 | 2021-08-19T16:59:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"pl",
"dataset:ccnet",
"dataset:nkjp",
"dataset:wikipedia",
"dataset:open subtitles",
"dataset:free readings",
"transformers",
"T5",
"translation",
"summarization",
"question answering",
"reading comprehension",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | allegro | null | allegro/plt5-small | 1,281 | 2 | transformers | 1,610 | ---
language: pl
tags:
- T5
- translation
- summarization
- question answering
- reading comprehension
datasets:
- ccnet
- nkjp
- wikipedia
- open subtitles
- free readings
license: cc-by-4.0
---
# plT5 Small
**plT5** models are T5-based language models trained on Polish corpora. The models were optimized for the original T5 denoising target.
## Corpus
plT5 was trained on six different corpora available for the Polish language:
| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1)| 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
## Tokenizer
The training dataset was tokenized into subwords using a sentencepiece unigram model with
vocabulary size of 50k tokens.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("allegro/plt5-small")
model = AutoModel.from_pretrained("allegro/plt5-small")
```
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
```
## Authors
The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/).
You can contact us at: <a href="mailto:[email protected]">[email protected]</a> |
facebook/wav2vec2-lv-60-espeak-cv-ft | 7718bdd728dde297e1e69d61fc782d147bac21a6 | 2021-12-08T21:03:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"multi-lingual",
"dataset:common_voice",
"arxiv:2109.11680",
"transformers",
"speech",
"audio",
"phoneme-recognition",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-lv-60-espeak-cv-ft | 1,281 | 2 | transformers | 1,611 | ---
language: multi-lingual
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
- phoneme-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
license: apache-2.0
---
# Wav2Vec2-Large-LV60 finetuned on multi-lingual Common Voice
This checkpoint leverages the pretrained checkpoint [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60)
and is fine-tuned on [CommonVoice](https://huggingface.co/datasets/common_voice) to recognize phonetic labels in multiple languages.
When using the model make sure that your speech input is sampled at 16kHz.
Note that the model outputs a string of phonetic labels. A dictionary mapping phonetic labels to words
has to be used to map the phonetic output labels to output words.
[Paper: Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680)
Authors: Qiantong Xu, Alexei Baevski, Michael Auli
**Abstract**
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
# => should give ['m ɪ s t ɚ k w ɪ l t ɚ ɹ ɪ z ð ɪ ɐ p ɑː s əl ʌ v ð ə m ɪ d əl k l æ s ᵻ z æ n d w iː ɑːɹ ɡ l æ d t ə w ɛ l k ə m h ɪ z ɡ ɑː s p əl']
``` |
pdelobelle/robBERT-dutch-books | 04eab2e04d08d4f62df7f769135bcece4f907606 | 2021-05-20T19:17:17.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pdelobelle | null | pdelobelle/robBERT-dutch-books | 1,281 | null | transformers | 1,612 | Entry not found |
JamesStratford/Pidrow-bot-DialoGPT-Medium-v2 | 0fb0a99a49c249fdaf3335bf14ad62c71709b373 | 2022-06-29T07:02:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JamesStratford | null | JamesStratford/Pidrow-bot-DialoGPT-Medium-v2 | 1,280 | null | transformers | 1,613 | ---
tags:
- conversational
---
# Pidrow bot - medium |
dandelin/vilt-b32-mlm-itm | a94469664a838bf855b40144f638ba9b3e791c89 | 2021-11-27T10:13:10.000Z | [
"pytorch",
"vilt",
"arxiv:2102.03334",
"transformers",
"license:apache-2.0"
] | null | false | dandelin | null | dandelin/vilt-b32-mlm-itm | 1,279 | 1 | transformers | 1,614 | ---
license: apache-2.0
tags:
---
# Vision-and-Language Transformer (ViLT), pre-trained only
Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
(to do)
## Intended uses & limitations
You can use the raw model for visual question answering.
### How to use
(to do)
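The card leaves this section as a to-do. As a rough sketch only (it assumes the `ViltProcessor` and `ViltForMaskedLM` classes from 🤗 Transformers work with this checkpoint and that a compatible processor configuration is available in the repository; the image URL and caption are illustrative):
```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForMaskedLM

# Assumed usage: score masked-token predictions conditioned on an image
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm-itm")
model = ViltForMaskedLM.from_pretrained("dandelin/vilt-b32-mlm-itm")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a photo of two [MASK] lying on a couch"

encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```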
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
``` |
speechbrain/asr-crdnn-rnnlm-librispeech | d9760a0bef6c6718d30ad1271f7d05980d435677 | 2021-11-30T00:37:56.000Z | [
"en",
"dataset:librispeech",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-crdnn-rnnlm-librispeech | 1,276 | 7 | speechbrain | 1,615 | ---
language: "en"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- librispeech
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# CRDNN with CTC/Attention and RNNLM trained on LibriSpeech
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test WER | GPUs |
|:-------------:|:--------------:| :--------:|
| 20-05-22 | 3.09 | 1xV100 32GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on
the train transcriptions of LibriSpeech.
- Neural language model (RNNLM) trained on the full 10M words dataset.
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalisation and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in English)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-rnnlm-librispeech", savedir="pretrained_models/asr-crdnn-rnnlm-librispeech")
asr_model.transcribe_file('speechbrain/asr-crdnn-rnnlm-librispeech/example.wav')
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: '2abd9f01').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LibriSpeech/ASR/seq2seq/
python train.py hparams/train_BPE_1000.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1SAndjcThdkO-YQF8kvwPOXlQ6LMT71vt?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | 38a0fbdddcb26bedfc182590a24ebc9a843832c3 | 2022-02-02T21:30:47.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | DaisyMak | null | DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | 1,275 | null | transformers | 1,616 | Entry not found |
facebook/wav2vec2-base-100h | 9c1fef36b62a428a658e5b022ef9f21b38f47e0b | 2022-05-27T16:32:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-100h | 1,268 | 1 | transformers | 1,617 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
# Wav2Vec2-Base-100h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model pretrained and fine-tuned on 100 hours of LibriSpeech on 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
def map_to_pred(batch):
    # with batched=True, batch["audio"] is a list of decoded audio dicts
    input_values = processor([x["array"] for x in batch["audio"]], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 6.1 | 13.5 |
|
pritamdeka/S-BioBert-snli-multinli-stsb | 3ab11e57f285f37c31648373a5cb6bf0da5c7362 | 2022-03-11T12:35:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | pritamdeka | null | pritamdeka/S-BioBert-snli-multinli-stsb | 1,268 | 0 | sentence-transformers | 1,618 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# S-BioBert-snli-multinli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-BioBert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-BioBert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-BioBert-snli-multinli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
google/bert_uncased_L-8_H-768_A-12 | 3f3d093c8dd66e4776c0286f0b52b8dea5865ece | 2021-05-19T17:36:32.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-8_H-768_A-12 | 1,266 | null | transformers | 1,619 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
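As a minimal illustration (not part of the original release notes), this particular checkpoint can be loaded and prepared for fine-tuning like any other BERT model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the 8-layer / 768-hidden miniature as a starting point for fine-tuning;
# num_labels=2 is an illustrative choice for a binary GLUE-style task.
tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-8_H-768_A-12")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bert_uncased_L-8_H-768_A-12", num_labels=2
)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2): the classification head is untrained
```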
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
hf-internal-testing/tiny-random-imagegpt | 8291cd3a0461602decb3fa68263f4ca3b278c8f9 | 2021-12-24T10:48:44.000Z | [
"pytorch",
"imagegpt",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-imagegpt | 1,266 | null | transformers | 1,620 | Entry not found |
ainize/kobart-news | 4b95cf0288646bf92bcdf7429b6f462b71db5eeb | 2021-06-29T02:51:15.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ko",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | ainize | null | ainize/kobart-news | 1,265 | 2 | transformers | 1,621 | ---
language: ko
license: mit
tags:
- summarization
- bart
---
# kobart-news
- This model is a [kobart](https://huggingface.co/hyunwoongko/kobart) fine-tuned on the [문서요약 텍스트/신문기사](https://aihub.or.kr/aidata/8054) using [Ainize Teachable-NLP](https://ainize.ai/teachable-nlp).
## Usage
### Python Code
```python
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
# Load Model and Tokenize
tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news")
model = BartForConditionalGeneration.from_pretrained("ainize/kobart-news")
# Encode Input Text
input_text = '국내 전반적인 경기침체로 상가 건물주의 수익도 전국적인 감소세를 보이고 있는 것으로 나타났다. 수익형 부동산 연구개발기업 상가정보연구소는 한국감정원 통계를 분석한 결과 전국 중대형 상가 순영업소득(부동산에서 발생하는 임대수입, 기타수입에서 제반 경비를 공제한 순소득)이 1분기 ㎡당 3만4200원에서 3분기 2만5800원으로 감소했다고 17일 밝혔다. 수도권, 세종시, 지방광역시에서 순영업소득이 가장 많이 감소한 지역은 3분기 1만3100원을 기록한 울산으로, 1분기 1만9100원 대비 31.4% 감소했다. 이어 대구(-27.7%), 서울(-26.9%), 광주(-24.9%), 부산(-23.5%), 세종(-23.4%), 대전(-21%), 경기(-19.2%), 인천(-18.5%) 순으로 감소했다. 지방 도시의 경우도 비슷했다. 경남의 3분기 순영업소득은 1만2800원으로 1분기 1만7400원 대비 26.4% 감소했으며 제주(-25.1%), 경북(-24.1%), 충남(-20.9%), 강원(-20.9%), 전남(-20.1%), 전북(-17%), 충북(-15.3%) 등도 감소세를 보였다. 조현택 상가정보연구소 연구원은 "올해 내수 경기의 침체된 분위기가 유지되며 상가, 오피스 등을 비롯한 수익형 부동산 시장의 분위기도 경직된 모습을 보였고 오피스텔, 지식산업센터 등의 수익형 부동산 공급도 증가해 공실의 위험도 늘었다"며 "실제 올 3분기 전국 중대형 상가 공실률은 11.5%를 기록하며 1분기 11.3% 대비 0.2% 포인트 증가했다"고 말했다. 그는 "최근 소셜커머스(SNS를 통한 전자상거래), 음식 배달 중개 애플리케이션, 중고 물품 거래 애플리케이션 등의 사용 증가로 오프라인 매장에 영향을 미쳤다"며 "향후 지역, 콘텐츠에 따른 상권 양극화 현상은 심화될 것으로 보인다"고 덧붙였다.'
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate Summary Text Ids
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=142,
min_length=56,
num_beams=4,
)
# Decoding Text
print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True))
```
### API and Demo
You can experience this model through [ainize-api](https://ainize.ai/gkswjdzz/summarize-torchserve?branch=main) and [ainize-demo](https://main-summarize-torchserve-gkswjdzz.endpoint.ainize.ai/).
|
hf-internal-testing/tiny-random-speech_to_text | 0edd349ecdb54044ad27ca4cde3136252e3503c1 | 2021-09-17T19:26:03.000Z | [
"pytorch",
"speech_to_text",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-speech_to_text | 1,260 | null | transformers | 1,622 | Entry not found |
hf-internal-testing/tiny-random-vision-encoder-decoder | 2b34c3c71aa6c25134e293c502f172ee7368eb67 | 2021-12-15T17:14:55.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-vision-encoder-decoder | 1,258 | null | transformers | 1,623 | Entry not found |
lgrobol/roberta-minuscule | 3ec7286af3b51b67bef74c29a8b9195205b532c4 | 2021-08-17T13:38:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lgrobol | null | lgrobol/roberta-minuscule | 1,256 | 1 | transformers | 1,624 | RoBERTa-minuscule
==================
A ridiculously small model for testing purposes. |
izumi-lab/bert-small-japanese | 7472b8975446df577a1820d559197075ab05f2e1 | 2022-03-19T09:37:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | izumi-lab | null | izumi-lab/bert-small-japanese | 1,252 | null | transformers | 1,625 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# BERT small Japanese
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as BERT small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as BERT small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1.45M training steps.
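The card gives no usage snippet; a minimal fill-mask sketch (assuming the tokenizer's MeCab dependencies, e.g. `fugashi` plus an IPA dictionary package, are installed; the input sentence is the widget example from this card) might look like:
```python
from transformers import pipeline

# Fill-mask inference with the widget example from this model card
fill_mask = pipeline("fill-mask", model="izumi-lab/bert-small-japanese")
for candidate in fill_mask("東京大学で[MASK]の研究をしています。"):
    print(candidate["token_str"], round(candidate["score"], 3))
```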
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
castorini/afriberta_large | e74edb9488208f8a2aeb69be4c16d179ab385564 | 2022-06-10T12:05:16.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"om",
"am",
"rw",
"rn",
"ha",
"ig",
"pcm",
"so",
"sw",
"ti",
"yo",
"multilingual",
"transformers",
"autotrain_compatible"
] | fill-mask | false | castorini | null | castorini/afriberta_large | 1,251 | 2 | transformers | 1,626 | ---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_large
## Model description
AfriBERTa large is a pretrained multilingual language model with around 126 million parameters.
The model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages, namely Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_large")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_large")
# we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
#### Limitations and bias
- This model is possibly limited by its training dataset, which is mostly obtained from news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper](https://aclanthology.org/2021.mrl-1.11) or [repository](https://github.com/keleog/afriberta)
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
KETI-AIR/ke-t5-base-ko | fda98d3a8ddad618a447c2e3043cccca5878e986 | 2021-06-23T02:46:59.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-base-ko | 1,241 | 1 | transformers | 1,627 | Entry not found |
mdhugol/indonesia-bert-sentiment-classification | 80ccb4c2817cf976534ac491020a9572e5dae54f | 2021-09-14T08:24:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mdhugol | null | mdhugol/indonesia-bert-sentiment-classification | 1,241 | 1 | transformers | 1,628 | Indonesian BERT Base Sentiment Classifier is a sentiment-text-classification model. It was fine-tuned from the pre-trained [IndoBERT Base Model (phase1 - uncased)](https://huggingface.co/indobenchmark/indobert-base-p1) using the [Prosa sentiment dataset](https://github.com/indobenchmark/indonlu/tree/master/dataset/smsa_doc-sentiment-prosa)
## How to Use
### As Text Classifier
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained= "mdhugol/indonesia-bert-sentiment-classification"
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
tokenizer = AutoTokenizer.from_pretrained(pretrained)
sentiment_analysis = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
label_index = {'LABEL_0': 'positive', 'LABEL_1': 'neutral', 'LABEL_2': 'negative'}
pos_text = "Sangat bahagia hari ini"
neg_text = "Dasar anak sialan!! Kurang ajar!!"
result = sentiment_analysis(pos_text)
status = label_index[result[0]['label']]
score = result[0]['score']
print(f'Text: {pos_text} | Label : {status} ({score * 100:.3f}%)')
result = sentiment_analysis(neg_text)
status = label_index[result[0]['label']]
score = result[0]['score']
print(f'Text: {neg_text} | Label : {status} ({score * 100:.3f}%)')
``` |
staka/fugumt-ja-en | 8cb8ff81a8625a626c6f0f19cc5082c6181f223a | 2022-05-29T08:28:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ja",
"transformers",
"translation",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | translation | false | staka | null | staka/fugumt-ja-en | 1,239 | 2 | transformers | 1,629 | ---
license: cc-by-sa-4.0
language:
- en
- ja
tags:
- translation
widget:
- text: "猫はかわいいです。"
---
# FuguMT
This is a translation model using Marian-NMT.
For more details, please see [my repository](https://github.com/s-taka/fugumt).
* source language: ja
* target language: en
### How to use
This model uses transformers and sentencepiece.
```python
!pip install transformers sentencepiece
```
You can use this model directly with a pipeline:
```python
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-ja-en')
fugu_translator('猫はかわいいです。')
```
### Eval results
The results of the evaluation using [tatoeba](https://tatoeba.org/ja) (randomly selected 500 sentences) are as follows:
|source |target |BLEU(*1)|
|-------|-------|--------|
|ja |en |39.1 |
(*1) sacrebleu |
cmarkea/distilcamembert-base-qa | ea9c62f924a2464890c04979fa67ef28bb49d2ff | 2022-06-15T15:09:29.000Z | [
"pytorch",
"tf",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"dataset:piaf",
"transformers",
"license:cc-by-nc-sa-3.0",
"autotrain_compatible"
] | question-answering | false | cmarkea | null | cmarkea/distilcamembert-base-qa | 1,235 | 3 | transformers | 1,630 | ---
language: fr
license: cc-by-nc-sa-3.0
datasets:
- fquad
- piaf
widget:
- text: "Quand et où est sorti Toy Story ?"
context: "Pixar Animation Studios, ou simplement Pixar dans le langage courant, est une société américaine de production de films en images tridimensionnelles de synthèse. Elle a acquis sa notoriété grâce à Toy Story, premier long métrage de ce type, sorti aux États-Unis en 1995. À ce jour, le studio d'animation a remporté dix-neuf Oscars, quatre Golden Globes et trois Grammy Awards ainsi que de nombreuses autres récompenses. Le studio travaille avec PhotoRealistic RenderMan, sa propre version de l'interface de programmation de rendu RenderMan utilisée pour créer des images de haute qualité. Ses studios de production et son siège social se trouvent au Pixar Campus situé à Emeryville près de San Francisco en Californie."
- text: "Quel est le premier long métrage du studio ?"
context: "Pixar Animation Studios, ou simplement Pixar dans le langage courant, est une société américaine de production de films en images tridimensionnelles de synthèse. Elle a acquis sa notoriété grâce à Toy Story, premier long métrage de ce type, sorti aux États-Unis en 1995. À ce jour, le studio d'animation a remporté dix-neuf Oscars, quatre Golden Globes et trois Grammy Awards ainsi que de nombreuses autres récompenses. Le studio travaille avec PhotoRealistic RenderMan, sa propre version de l'interface de programmation de rendu RenderMan utilisée pour créer des images de haute qualité. Ses studios de production et son siège social se trouvent au Pixar Campus situé à Emeryville près de San Francisco en Californie."
---
DistilCamemBERT-QA
==================
We present DistilCamemBERT-QA, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the question-answering task in French. This model is built on two datasets, FQuAD v1.0 and Piaf, which are composed of contexts and questions with their answers inside the context.
This model is close to [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem with CamemBERT-based models arises at scaling time, for example in a production setting: inference cost can become a technological issue, especially in a cross-encoding context such as this task. To counteract this effect, we propose this model, which divides the inference time by 2 at the same power consumption thanks to DistilCamemBERT.
Dataset
-------
The dataset is composed of FQuAD v1.0 and Piaf with 24'566 questions and answers for the training set and 3'188 for the evaluation set.
Evaluation results and benchmark
--------------------------------
We compare [DistilCamemBERT-QA](https://huggingface.co/cmarkea/distilcamembert-base-qa) to two other models working on the French language. The first one, [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf), is based on the well-named [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model, and the second one, [fmikaelian/flaubert-base-uncased-squad](https://huggingface.co/fmikaelian/flaubert-base-uncased-squad), is based on [FlauBERT](https://huggingface.co/flaubert/flaubert_base_uncased), another French model, this time built on the BERT architecture. To compare the models to each other, we use the exact match, which compares the predicted answer and the ground truth character by character; the f1-score, which measures the quality of the intersection between predicted answer words and the ground truth; and finally the inclusion score, which measures whether the ground truth answer is included in the predicted answer. For the mean inference time measure, an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used.
| **model** | **time (ms)** | **exact match (%)** | **f1-score (%)** | **inclusion-score (%)** |
| :--------------: | :-----------: | :--------------: | :------------: | :------------: |
| [cmarkea/distilcamembert-base-qa](https://huggingface.co/cmarkea/distilcamembert-base-qa) | **216.96** | 25.66 | 62.65 | 59.82 |
| [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | 432.17 | **59.76** | **79.57** | **69.23** |
| [fmikaelian/flaubert-base-uncased-squad](https://huggingface.co/fmikaelian/flaubert-base-uncased-squad) | 875.84 | 0.22 | 5.21 | 3.68 |
The results for the FlauBERT model should be disregarded: there appears to be a problem with that modelization, as its scores are abnormally low.
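To make the three metrics concrete, here is a minimal sketch of how they can be computed for a single prediction (a simplified, hypothetical helper, not the evaluation script used for the table above):
```python
def qa_metrics(predicted: str, truth: str) -> dict:
    """Toy illustration of exact match, f1-score and inclusion score."""
    pred_words, truth_words = predicted.split(), truth.split()
    common = set(pred_words) & set(truth_words)
    precision = len(common) / len(pred_words) if pred_words else 0.0
    recall = len(common) / len(truth_words) if truth_words else 0.0
    return {
        "exact_match": float(predicted == truth),   # character-by-character comparison
        "f1": 2 * precision * recall / (precision + recall) if common else 0.0,
        "inclusion": float(truth in predicted),     # ground truth contained in the prediction
    }
```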
How to use DistilCamemBERT-QA
------------------------------
```python
from transformers import pipeline
qa_engine = pipeline(
"question-answering",
model="cmarkea/distilcamembert-base-qa",
tokenizer="cmarkea/distilcamembert-base-qa"
)
result = qa_engine(
context="David Fincher, né le 28 août 1962 à Denver (Colorado), "
"est un réalisateur et producteur américain. Il est principalement "
"connu pour avoir réalisé les films Seven, Fight Club, L'Étrange "
"Histoire de Benjamin Button, The Social Network et Gone Girl qui "
"lui ont valu diverses récompenses et nominations aux Oscars du "
"cinéma ou aux Golden Globes. Réputé pour son perfectionnisme, il "
"peut tourner un très grand nombre de prises de ses plans et "
"séquences afin d'obtenir le rendu visuel qu'il désire. Il a "
"également développé et produit les séries télévisées House of "
"Cards (pour laquelle il remporte l'Emmy Award de la meilleure "
"réalisation pour une série dramatique en 2013) et Mindhunter, "
"diffusées sur Netflix.",
question="Quel est le métier de David Fincher ?"
)
result
{'score': 0.7981914281845093,
'start': 61,
'end': 98,
'answer': ' réalisateur et producteur américain.'}
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
``` |
elgeish/wav2vec2-large-xlsr-53-arabic | b5e6df14064b879671fd242c0366cbe2a68effc9 | 2022-06-04T23:37:05.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:arabic_speech_corpus",
"dataset:mozilla-foundation/common_voice_6_1",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | elgeish | null | elgeish/wav2vec2-large-xlsr-53-arabic | 1,230 | 6 | transformers | 1,631 | ---
language: ar
datasets:
- arabic_speech_corpus
- mozilla-foundation/common_voice_6_1
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: elgeish-wav2vec2-large-xlsr-53-arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1 (Arabic)
type: mozilla-foundation/common_voice_6_1
config: ar
split: test
args:
language: ar
metrics:
- name: Test WER
type: wer
value: 26.55
- name: Validation WER
type: wer
value: 23.39
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
dataset = load_dataset("common_voice", "ar", split="test[:10]")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
dataset = dataset.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.tokenizer.batch_decode(predicted)
return batch
dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"])
for reference, predicted in zip(dataset["sentence"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", buckwalter.untrans(predicted))
print("--")
```
Here's the output:
```
reference: ألديك قلم ؟
predicted: هلديك قالر
--
reference: ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.
predicted: ليست نالك مسافة على هذه الأرض أبعد من يوم أمس
--
reference: إنك تكبر المشكلة.
predicted: إنك تكبر المشكلة
--
reference: يرغب أن يلتقي بك.
predicted: يرغب أن يلتقي بك
--
reference: إنهم لا يعرفون لماذا حتى.
predicted: إنهم لا يعرفون لماذا حتى
--
reference: سيسعدني مساعدتك أي وقت تحب.
predicted: سيسئدني مساعد سكرأي وقت تحب
--
reference: أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة.
predicted: أحب ناضريةً علمية إلي هي أنحل قتزح المكونا بالكامل من الأمت عن المفقودة
--
reference: سأشتري له قلماً.
predicted: سأشتري له قلما
--
reference: أين المشكلة ؟
predicted: أين المشكل
--
reference: وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ
predicted: ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون
--
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import jiwer
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import set_seed, Wav2Vec2ForCTC, Wav2Vec2Processor
set_seed(42)
test_split = load_dataset("common_voice", "ar", split="test")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_split = test_split.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic")
model = Wav2Vec2ForCTC.from_pretrained("elgeish/wav2vec2-large-xlsr-53-arabic").to("cuda").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values.to("cuda")).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.batch_decode(predicted)
return batch
test_split = test_split.map(predict, batched=True, batch_size=16, remove_columns=["speech"])
transformation = jiwer.Compose([
# normalize some diacritics, remove punctuation, and replace Persian letters with Arabic ones
jiwer.SubstituteRegexes({
r'[auiFNKo\~_،؟»\?;:\-,\.؛«!"]': "", "\u06D6": "",
r"[\|\{]": "A", "p": "h", "ک": "k", "ی": "y"}),
# default transformation below
jiwer.RemoveMultipleSpaces(),
jiwer.Strip(),
jiwer.SentencesToListOfWords(),
jiwer.RemoveEmptyStrings(),
])
metrics = jiwer.compute_measures(
truth=[buckwalter.trans(s) for s in test_split["sentence"]], # Buckwalter transliteration
hypothesis=test_split["predicted"],
truth_transform=transformation,
hypothesis_transform=transformation,
)
print(f"WER: {metrics['wer']:.2%}")
```
**Test Result**: 26.55%
## Training
For more details, see [Fine-Tuning with Arabic Speech Corpus](https://github.com/huggingface/transformers/tree/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/wav2vec2#fine-tuning-with-arabic-speech-corpus).
This model represents Arabic in a format called [Buckwalter transliteration](https://en.wikipedia.org/wiki/Buckwalter_transliteration).
The Buckwalter format only includes ASCII characters, some of which are non-alpha (e.g., `">"` maps to `"أ"`).
The [lang-trans](https://github.com/kariminf/lang-trans) package is used to convert (transliterate) Arabic abjad.
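As a small illustration of the round trip, using the same `trans`/`untrans` helpers that appear in the snippets above (the phrase is just an example):
```python
from lang_trans.arabic import buckwalter

arabic_text = "أين المشكلة ؟"
buckwalter_text = buckwalter.trans(arabic_text)  # Arabic script -> Buckwalter (ASCII)
restored = buckwalter.untrans(buckwalter_text)   # Buckwalter -> Arabic script
print(buckwalter_text, restored)
```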
[This script](https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/wav2vec2/finetune_large_xlsr_53_arabic_speech_corpus.sh)
was used to first fine-tune [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the `train` split of the [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus) dataset;
the `test` split was used for model selection; the resulting model at this point is saved as [elgeish/wav2vec2-large-xlsr-53-levantine-arabic](https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-levantine-arabic).
Training was then resumed using the `train` split of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset;
the `validation` split was used for model selection;
training was stopped to meet the deadline of [Fine-Tune-XLSR Week](https://github.com/huggingface/transformers/blob/700229f8a4003c4f71f29275e0874b5ba58cd39d/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md):
this model is the checkpoint at 100k steps and a validation WER of **23.39%**.
<img src="https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic/raw/main/validation_wer.png" alt="Validation WER" width="100%" />
It's worth noting that validation WER is trending down, indicating the potential of further training (resuming the decaying learning rate at 7e-6).
## Future Work
One area to explore is using `attention_mask` in model input, which is recommended [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
Another area is data augmentation using the datasets used to train the models listed [here](https://paperswithcode.com/sota/speech-recognition-on-common-voice-arabic).
|
hetpandya/t5-small-tapaco | d9695bcb99a04766dbc41d636bf6b8646710b1e9 | 2021-06-30T06:36:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:tapaco",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hetpandya | null | hetpandya/t5-small-tapaco | 1,230 | null | transformers | 1,632 | ---
language: en
datasets:
- tapaco
---
# T5-small for paraphrase generation
Google's T5-small fine-tuned on the [TaPaCo](https://huggingface.co/datasets/tapaco) dataset for paraphrase generation.
## Model in Action 🚀
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("hetpandya/t5-small-tapaco")
model = T5ForConditionalGeneration.from_pretrained("hetpandya/t5-small-tapaco")
def get_paraphrases(sentence, prefix="paraphrase: ", n_predictions=5, top_k=120, max_length=256,device="cpu"):
text = prefix + sentence + " </s>"
encoding = tokenizer.encode_plus(
text, pad_to_max_length=True, return_tensors="pt"
)
input_ids, attention_masks = encoding["input_ids"].to(device), encoding[
"attention_mask"
].to(device)
model_output = model.generate(
input_ids=input_ids,
attention_mask=attention_masks,
do_sample=True,
max_length=max_length,
top_k=top_k,
top_p=0.98,
early_stopping=True,
num_return_sequences=n_predictions,
)
outputs = []
for output in model_output:
generated_sent = tokenizer.decode(
output, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
if (
generated_sent.lower() != sentence.lower()
and generated_sent not in outputs
):
outputs.append(generated_sent)
return outputs
paraphrases = get_paraphrases("The house will be cleaned by me every Saturday.")
for sent in paraphrases:
print(sent)
```
## Output
```
The house is cleaned every Saturday by me.
The house will be cleaned on Saturday.
I will clean the house every Saturday.
I get the house cleaned every Saturday.
I will clean this house every Saturday.
```
## Model fine-tuning
Please find my guide on fine-tuning the model here:
https://towardsdatascience.com/training-t5-for-paraphrase-generation-ab3b5be151a2
Created by [Het Pandya/@hetpandya](https://github.com/hetpandya) | [LinkedIn](https://www.linkedin.com/in/het-pandya)
Made with <span style="color: red;">♥</span> in India |
monologg/koelectra-small-v2-discriminator | f2c615617707ae5e011a94c5506d0086301afe74 | 2020-12-26T16:23:57.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | monologg | null | monologg/koelectra-small-v2-discriminator | 1,230 | null | transformers | 1,633 | Entry not found |
salti/AraElectra-base-finetuned-ARCD | ba34c8067e38d6202812a3f880fd01f2cd20761e | 2021-01-29T20:39:31.000Z | [
"pytorch",
"electra",
"question-answering",
"ar",
"dataset:arcd",
"transformers",
"autotrain_compatible"
] | question-answering | false | salti | null | salti/AraElectra-base-finetuned-ARCD | 1,229 | 1 | transformers | 1,634 | ---
language:
- ar
datasets:
- arcd
widget:
- text: "أين يعيش محمد ؟"
context: "اسمي محمد وأنا أعيش في سوريا"
- text: "ما العدد الذري للهيدروجين ؟"
context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال"
- text: "ما خواص الهيدروجين ؟"
context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال"
---
|
pierreguillou/ner-bert-large-cased-pt-lenerbr | d081b0eb833d418c68e3327fc16e956d4738b164 | 2021-12-29T19:33:17.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:lener_br",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | pierreguillou | null | pierreguillou/ner-bert-large-cased-pt-lenerbr | 1,227 | 2 | transformers | 1,635 | ---
language:
- pt
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: checkpoints
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
metrics:
- name: F1
type: f1
value: 0.9082022949426265
- name: Precision
type: precision
value: 0.8975220495590088
- name: Recall
type: recall
value: 0.9191397849462366
- name: Accuracy
type: accuracy
value: 0.9808310603867311
- name: Loss
type: loss
value: 0.1228889599442482
widget:
- text: "Ao Instituto Médico Legal da jurisdição do acidente ou da residência cumpre fornecer, no prazo de 90 dias, laudo à vítima (art. 5, § 5, Lei n. 6.194/74 de 19 de dezembro de 1974), função técnica que pode ser suprida por prova pericial realizada por ordem do juízo da causa, ou por prova técnica realizada no âmbito administrativo que se mostre coerente com os demais elementos de prova constante dos autos."
- text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
- text: "Todavia, entendo que extrair da aludida norma o sentido expresso na redação acima implica desconstruir o significado do texto constitucional, o que é absolutamente vedado ao intérprete. Nesse sentido, cito Dimitri Dimoulis: ‘(...) ao intérprete não é dado escolher significados que não estejam abarcados pela moldura da norma. Interpretar não pode significar violentar a norma.’ (Positivismo Jurídico. São Paulo: Método, 2006, p. 220).59. Dessa forma, deve-se tomar o sentido etimológico como limite da atividade interpretativa, a qual não pode superado, a ponto de destruir a própria norma a ser interpretada. Ou, como diz Konrad Hesse, ‘o texto da norma é o limite insuperável da atividade interpretativa.’ (Elementos de Direito Constitucional da República Federal da Alemanha, Porto Alegre: Sergio Antonio Fabris, 2003, p. 71)."
---
## (BERT large) NER model in the legal domain in Portuguese (LeNER-Br)
**ner-bert-large-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [pierreguillou/bert-large-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-large-cased-pt-lenerbr) on the dataset [LeNER_br](https://huggingface.co/datasets/lener_br) by using a NER objective.
Due to the small size of the finetuning dataset, the model overfitted before reaching the end of training. Here are the overall final metrics on the validation dataset (*note: see the paragraph "Validation metrics by Named Entity" for detailed metrics*):
- **f1**: 0.9082022949426265
- **precision**: 0.8975220495590088
- **recall**: 0.9191397849462366
- **accuracy**: 0.9808310603867311
- **loss**: 0.1228889599442482
Check as well the [base version of this model](https://huggingface.co/pierreguillou/ner-bert-base-cased-pt-lenerbr), which reaches an f1 of 0.893.
**Note**: the model [pierreguillou/bert-large-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-large-cased-pt-lenerbr) is a language model that was created through the finetuning of the model [BERTimbau large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective. This first specialization of the language model before finetuning on the NER task yields a better NER model.
## Blog post
[NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Widget & App
You can test this model into the widget of this page.
Use as well the [NER App](https://huggingface.co/spaces/pierreguillou/ner-bert-pt-lenerbr) that allows comparing the 2 BERT models (base and large) fitted in the NER task with the legal LeNER-Br dataset.
## Using the model for inference in production
````
# install pytorch: check https://pytorch.org/
# !pip install transformers
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
# parameters
model_name = "pierreguillou/ner-bert-large-cased-pt-lenerbr"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_text = "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
# tokenization
inputs = tokenizer(input_text, max_length=512, truncation=True, return_tensors="pt")
tokens = inputs.tokens()
# get predictions
outputs = model(**inputs).logits
predictions = torch.argmax(outputs, dim=2)
# print predictions
for token, prediction in zip(tokens, predictions[0].numpy()):
print((token, model.config.id2label[prediction]))
````
You can use a pipeline, too. However, it seems to have an issue regarding the max_length of the input sequence.
````
!pip install transformers
import transformers
from transformers import pipeline
model_name = "pierreguillou/ner-bert-large-cased-pt-lenerbr"
ner = pipeline(
"ner",
model=model_name
)
ner(input_text)
````
## Training procedure
### Notebook
The finetuning notebook ([HuggingFace_Notebook_token_classification_NER_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/HuggingFace_Notebook_token_classification_NER_LeNER_Br.ipynb)) is available on GitHub.
### Hyperparameters
# batch, learning rate...
- per_device_batch_size = 2
- gradient_accumulation_steps = 2
- learning_rate = 2e-5
- num_train_epochs = 10
- weight_decay = 0.01
- optimizer = AdamW
- betas = (0.9,0.999)
- epsilon = 1e-08
- lr_scheduler_type = linear
- seed = 42
# save model & load best model
- save_total_limit = 7
- logging_steps = 500
- eval_steps = logging_steps
- evaluation_strategy = 'steps'
- logging_strategy = 'steps'
- save_strategy = 'steps'
- save_steps = logging_steps
- load_best_model_at_end = True
- fp16 = True
# get best model through a metric
- metric_for_best_model = 'eval_f1'
- greater_is_better = True
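For reference, here is a minimal sketch of how these hyperparameters map onto a `transformers.TrainingArguments` object (illustrative only; the output directory name is an assumption, and the linked notebook contains the actual training code):
````
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="checkpoints",          # illustrative output directory
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    num_train_epochs=10,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=42,
    save_total_limit=7,
    logging_steps=500,
    eval_steps=500,
    evaluation_strategy="steps",
    logging_strategy="steps",
    save_strategy="steps",
    save_steps=500,
    load_best_model_at_end=True,
    fp16=True,
    metric_for_best_model="eval_f1",
    greater_is_better=True,
)
````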
### Training results
````
Num examples = 7828
Num Epochs = 20
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 2
Total optimization steps = 39140
Step Training Loss Validation Loss Precision Recall F1 Accuracy
500 0.250000 0.140582 0.760833 0.770323 0.765548 0.963125
1000 0.076200 0.117882 0.829082 0.817849 0.823428 0.966569
1500 0.082400 0.150047 0.679610 0.914624 0.779795 0.957213
2000 0.047500 0.133443 0.817678 0.857419 0.837077 0.969190
2500 0.034200 0.230139 0.895672 0.845591 0.869912 0.964070
3000 0.033800 0.108022 0.859225 0.887312 0.873043 0.973700
3500 0.030100 0.113467 0.855747 0.885376 0.870310 0.975879
4000 0.029900 0.118619 0.850207 0.884946 0.867229 0.974477
4500 0.022500 0.124327 0.841048 0.890968 0.865288 0.975041
5000 0.020200 0.129294 0.801538 0.918925 0.856227 0.968077
5500 0.019700 0.128344 0.814222 0.908602 0.858827 0.969250
6000 0.024600 0.182563 0.908087 0.866882 0.887006 0.968565
6500 0.012600 0.159217 0.829883 0.913763 0.869806 0.969357
7000 0.020600 0.183726 0.854557 0.893333 0.873515 0.966447
7500 0.014400 0.141395 0.777716 0.905161 0.836613 0.966828
8000 0.013400 0.139378 0.873042 0.899140 0.885899 0.975772
8500 0.014700 0.142521 0.864152 0.901505 0.882433 0.976366
9000 0.010900 0.122889 0.897522 0.919140 0.908202 0.980831
9500 0.013500 0.143407 0.816580 0.906667 0.859268 0.973395
10000 0.010400 0.144946 0.835608 0.908387 0.870479 0.974629
10500 0.007800 0.143086 0.847587 0.910108 0.877735 0.975985
11000 0.008200 0.156379 0.873778 0.884301 0.879008 0.976321
11500 0.008200 0.133356 0.901193 0.910108 0.905628 0.980328
12000 0.006900 0.133476 0.892202 0.920215 0.905992 0.980572
12500 0.006900 0.129991 0.890159 0.904516 0.897280 0.978683
````
### Validation metrics by Named Entity
````
{'JURISPRUDENCIA': {'f1': 0.8135593220338984,
'number': 657,
'precision': 0.865979381443299,
'recall': 0.7671232876712328},
'LEGISLACAO': {'f1': 0.8888888888888888,
'number': 571,
'precision': 0.8952042628774423,
'recall': 0.882661996497373},
'LOCAL': {'f1': 0.850467289719626,
'number': 194,
'precision': 0.7777777777777778,
'recall': 0.9381443298969072},
'ORGANIZACAO': {'f1': 0.8740635033892258,
'number': 1340,
'precision': 0.8373205741626795,
'recall': 0.914179104477612},
'PESSOA': {'f1': 0.9836677554829678,
'number': 1072,
'precision': 0.9841269841269841,
'recall': 0.9832089552238806},
'TEMPO': {'f1': 0.9669669669669669,
'number': 816,
'precision': 0.9481743227326266,
'recall': 0.9865196078431373},
'overall_accuracy': 0.9808310603867311,
'overall_f1': 0.9082022949426265,
'overall_precision': 0.8975220495590088,
'overall_recall': 0.9191397849462366}
```` |
Helsinki-NLP/opus-mt-hy-en | c1f5af969aee273f845a84ad3f4b149ba5435303 | 2021-09-09T22:11:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hy",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hy-en | 1,226 | null | transformers | 1,636 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hy-en
* source languages: hy
* target languages: en
* OPUS readme: [hy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hy.en | 29.5 | 0.466 |
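A minimal usage sketch with the `transformers` library (not part of the original OPUS-MT release notes; the input sentence is just an example):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-hy-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate an Armenian sentence into English
batch = tokenizer(["Բարև ձեզ"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```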
|
Salesforce/codegen-350M-multi | 2b61ebc2f74ace34d530e8ba9501198ee27ead82 | 2022-06-28T17:47:03.000Z | [
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"transformers",
"license:bsd-3-clause"
] | text-generation | false | Salesforce | null | Salesforce/codegen-350M-multi | 1,224 | 0 | transformers | 1,637 | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Multi 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 350M** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 350M) was first initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models is trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and of calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
aubmindlab/bert-base-arabertv01 | 59dc633c58a7a1e9b4c1e8d4f7be94cf9dc6a2e0 | 2021-05-19T11:50:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"arxiv:2003.00104",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aubmindlab | null | aubmindlab/bert-base-arabertv01 | 1,220 | null | transformers | 1,638 | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# !!! A newer version of this model is available !!! [AraBERTv02](https://huggingface.co/aubmindlab/bert-base-arabertv02)
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More details are available in the AraBERT folder, in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation and numbers that were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-base-arabertv01"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
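After preprocessing the raw text as above, the model can be used like any other BERT checkpoint, for instance through a fill-mask pipeline (a minimal sketch reusing the widget example from this card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabertv01")
fill_mask("عاصمة لبنان هي [MASK] .")
```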
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name); use this instead:
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for their continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
facebook/wav2vec2-large-xlsr-53-german | 97e1c5b2b100529bbbd80d32c5b6862116beffab | 2021-07-06T02:46:28.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:common_voice",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-xlsr-53-german | 1,220 | null | transformers | 1,639 | ---
language: de
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice DE Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-german"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "de", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 18.5 % |
microsoft/cocolm-base | 2832a017dd206e3de5c043a005cb76c86b8ba83d | 2022-02-07T23:01:31.000Z | [
"pytorch",
"arxiv:2102.08473",
"transformers"
] | null | false | microsoft | null | microsoft/cocolm-base | 1,220 | 2 | transformers | 1,640 | # COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
This model card contains the COCO-LM model (**base++** version) proposed in [this paper](https://arxiv.org/abs/2102.08473). The official GitHub repository can be found [here](https://github.com/microsoft/COCO-LM).
# Citation
If you find this model card useful for your research, please cite the following paper:
```
@inproceedings{meng2021coco,
title={{COCO-LM}: Correcting and contrasting text sequences for language model pretraining},
author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
booktitle={NeurIPS},
year={2021}
}
``` |
facebook/data2vec-vision-base | 72a7bdadab41d0e9a2c8d6887b9f8a50eebb8e0f | 2022-05-03T15:52:10.000Z | [
"pytorch",
"tf",
"data2vec-vision",
"feature-extraction",
"dataset:imagenet",
"dataset:imagenet-1k",
"arxiv:2202.03555",
"arxiv:2106.08254",
"transformers",
"image-classification",
"vision",
"license:apache-2.0"
] | feature-extraction | false | facebook | null | facebook/data2vec-vision-base | 1,220 | null | transformers | 1,641 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-1k
---
# Data2Vec-Vision (base-sized model, pre-trained only)
BEiT model pre-trained in a self-supervised fashion on ImageNet-1k (1.2 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli and first released in [this repository](https://github.com/facebookresearch/data2vec_vision/tree/main/beit).
Disclaimer: The team releasing data2vec-vision did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
## Abstract
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.*
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?other=data2vec-vision) to look for
fine-tuned versions on a task that interests you.
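A minimal sketch of extracting features with the pre-trained backbone (illustrative only; the image URL is just an example):
```python
from transformers import AutoFeatureExtractor, AutoModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-vision-base")
model = AutoModel.from_pretrained("facebook/data2vec-vision-base")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # patch-level features
```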
## Training data
The BEiT model was pretrained on [ImageNet-1k](http://www.image-net.org/), a dataset consisting of 1.2 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to the [original paper](https://arxiv.org/abs/2106.08254) and the [original codebase](https://github.com/facebookresearch/data2vec_vision/tree/main/beit)
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to table 1 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
cyclone/simcse-chinese-roberta-wwm-ext | 871d7039a3fccd4869d545a25b63c545341ca7f4 | 2021-09-02T03:04:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"transformers"
] | feature-extraction | false | cyclone | null | cyclone/simcse-chinese-roberta-wwm-ext | 1,219 | 6 | transformers | 1,642 | ## Cyclone SIMCSE RoBERTa WWM Ext Chinese
This model provides sentence embeddings for simplified Chinese, based on [Simple Contrastive Learning](https://arxiv.org/abs/2104.08821).
The pretrained model (Chinese RoBERTa WWM Ext) is used for token encoding.
### Usage
Please use [SentenceTransformer](https://github.com/UKPLab/sentence-transformers) to load the model.
from sentence_transformers import SentenceTransformer
encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext') |
allenai/macaw-3b | c4d1b101bcec5de649b927bb92c4e93c311c0be2 | 2021-09-21T15:59:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/macaw-3b | 1,216 | null | transformers | 1,643 | ---
language: en
widget:
- text: $answer$ ; $mcoptions$ ; $question$ = What is the color of a cloudy sky?
license: apache-2.0
---
# macaw-3b
## Model description
Macaw (<b>M</b>ulti-<b>a</b>ngle <b>c</b>(q)uestion <b>a</b>ns<b>w</b>ering) is a ready-to-use model capable of
general question answering,
showing robustness outside the domains it was trained on. It has been trained in "multi-angle" fashion,
which means it can handle a flexible set of input and output "slots"
(question, answer, multiple-choice options, context, and explanation).
Macaw was built on top of [T5](https://github.com/google-research/text-to-text-transfer-transformer) and comes in
three sizes: [macaw-11b](https://huggingface.co/allenai/macaw-11b), [macaw-3b](https://huggingface.co/allenai/macaw-3b),
and [macaw-large](https://huggingface.co/allenai/macaw-large), as well as an answer-focused version featured on
various leaderboards [macaw-answer-11b](https://huggingface.co/allenai/macaw-answer-11b).
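As a minimal generation sketch using the slot format shown in the widget above (illustrative; the exact output depends on which slots are requested):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-3b")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-3b")

input_string = "$answer$ ; $mcoptions$ ; $question$ = What is the color of a cloudy sky?"
input_ids = tokenizer.encode(input_string, return_tensors="pt")
output = model.generate(input_ids, max_length=200)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```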
See https://github.com/allenai/macaw for more details. |
sentence-transformers/bert-base-wikipedia-sections-mean-tokens | bfe50e68735b7f483150fd1548ddb77e04b43fa8 | 2022-06-15T22:24:35.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/bert-base-wikipedia-sections-mean-tokens | 1,216 | null | sentence-transformers | 1,644 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/bert-base-wikipedia-sections-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-base-wikipedia-sections-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-wikipedia-sections-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-wikipedia-sections-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-wikipedia-sections-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
hf-internal-testing/tiny-random-data2vec-seq-class | 4c59e8c7dc5db8886fca7c12e9b380daefaf4aba | 2022-03-03T12:26:02.000Z | [
"pytorch",
"data2vec-audio",
"audio-classification",
"transformers"
] | audio-classification | false | hf-internal-testing | null | hf-internal-testing/tiny-random-data2vec-seq-class | 1,216 | null | transformers | 1,645 | Entry not found |
philschmid/distilbert-base-multilingual-cased-sentiment-2 | 83ff874f93aacbba79642abfe2a274a3c874232b | 2022-01-24T15:08:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | philschmid | null | philschmid/distilbert-base-multilingual-cased-sentiment-2 | 1,211 | 1 | transformers | 1,646 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: all_languages
metrics:
- name: Accuracy
type: accuracy
value: 0.7475666666666667
- name: F1
type: f1
value: 0.7475666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6067
- Accuracy: 0.7476
- F1: 0.7476
## Model description
More information needed
## Intended uses & limitations
More information needed
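Still, as a quick illustration (not part of the auto-generated card), the checkpoint can be loaded through a standard text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="philschmid/distilbert-base-multilingual-cased-sentiment-2",
)
classifier("Das Produkt ist großartig, ich bin sehr zufrieden!")  # example multilingual review
```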
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.6885 | 0.53 | 5000 | 0.6532 | 0.7217 | 0.7217 |
| 0.6411 | 1.07 | 10000 | 0.6348 | 0.7319 | 0.7319 |
| 0.6057 | 1.6 | 15000 | 0.6186 | 0.7387 | 0.7387 |
| 0.5844 | 2.13 | 20000 | 0.6236 | 0.7449 | 0.7449 |
| 0.549 | 2.67 | 25000 | 0.6067 | 0.7476 | 0.7476 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
speechbrain/spkrec-xvect-voxceleb | e2cc27f853f99bd5d539432f0cba3f124c059f71 | 2022-06-25T02:56:40.000Z | [
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"xvectors",
"TDNN",
"audio-classification",
"license:apache-2.0"
] | audio-classification | false | speechbrain | null | speechbrain/spkrec-xvect-voxceleb | 1,207 | 4 | speechbrain | 1,647 | ---
language: "en"
thumbnail:
tags:
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- xvectors
- TDNN
- speechbrain
- audio-classification
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
- min_dct
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with xvector embeddings on Voxceleb
This repository provides all the necessary tools to extract speaker embeddings with a pretrained TDNN model using SpeechBrain.
The system is trained on Voxceleb 1+ Voxceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The given model performance on Voxceleb1-test set (Cleaned) is:
| Release | EER(%)
|:-------------:|:--------------:|
| 05-03-21 | 3.2 |
## Pipeline description
This system is composed of a TDNN model coupled with statistical pooling. The system is trained with Categorical Cross-Entropy Loss.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb", savedir="pretrained_models/spkrec-xvect-voxceleb")
signal, fs = torchaudio.load('tests/samples/ASR/spk1_snt1.wav')
embeddings = classifier.encode_batch(signal)
```
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
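To compare two utterances for speaker verification, one simple option is the cosine similarity between their embeddings (a minimal sketch, not the scoring used in the official recipe; the second file name is hypothetical):
```python
import torch.nn.functional as F

signal2, fs = torchaudio.load('tests/samples/ASR/spk2_snt1.wav')  # hypothetical second utterance
emb1 = classifier.encode_batch(signal).squeeze(1)
emb2 = classifier.encode_batch(signal2).squeeze(1)
score = F.cosine_similarity(emb1, emb2, dim=-1)  # higher score -> more likely the same speaker
```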
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/VoxCeleb/SpeakerRec/
python train_speaker_embeddings.py hparams/train_x_vectors.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1RtCBJ3O8iOCkFrJItCKT9oL-Q1MNCwMH?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing xvectors
```bibtex
@inproceedings{DBLP:conf/odyssey/SnyderGMSPK18,
author = {David Snyder and
Daniel Garcia{-}Romero and
Alan McCree and
Gregory Sell and
Daniel Povey and
Sanjeev Khudanpur},
title = {Spoken Language Recognition using X-vectors},
booktitle = {Odyssey 2018},
pages = {105--111},
year = {2018},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
clue/roberta_chinese_clue_tiny | e51239963f4ff728b1696180a9ae86ec1d3aeff4 | 2021-05-20T15:27:44.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | clue | null | clue/roberta_chinese_clue_tiny | 1,204 | 1 | transformers | 1,648 | Entry not found |
xlm-mlm-100-1280 | dafb8ab3a39720dcdf0687658c7fbd27e45bc071 | 2022-07-22T08:09:19.000Z | [
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"es",
"fr",
"de",
"zh",
"ru",
"pt",
"it",
"ar",
"ja",
"id",
"tr",
"nl",
"pl",
"fa",
"vi",
"sv",
"ko",
"he",
"ro",
"no",
"hi",
"uk",
"cs",
"fi",
"hu",
"th",
"da",
"ca",
"el",
"bg",
"sr",
"ms",
"bn",
"hr",
"sl",
"az",
"sk",
"eo",
"ta",
"sh",
"lt",
"et",
"ml",
"la",
"bs",
"sq",
"arz",
"af",
"ka",
"mr",
"eu",
"tl",
"ang",
"gl",
"nn",
"ur",
"kk",
"be",
"hy",
"te",
"lv",
"mk",
"als",
"is",
"wuu",
"my",
"sco",
"mn",
"ceb",
"ast",
"cy",
"kn",
"br",
"an",
"gu",
"bar",
"uz",
"lb",
"ne",
"si",
"war",
"jv",
"ga",
"oc",
"ku",
"sw",
"nds",
"ckb",
"ia",
"yi",
"fy",
"scn",
"gan",
"tt",
"am",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | null | null | xlm-mlm-100-1280 | 1,201 | null | transformers | 1,649 | ---
language:
- multilingual
- en
- es
- fr
- de
- zh
- ru
- pt
- it
- ar
- ja
- id
- tr
- nl
- pl
- fa
- vi
- sv
- ko
- he
- ro
- no
- hi
- uk
- cs
- fi
- hu
- th
- da
- ca
- el
- bg
- sr
- ms
- bn
- hr
- sl
- az
- sk
- eo
- ta
- sh
- lt
- et
- ml
- la
- bs
- sq
- arz
- af
- ka
- mr
- eu
- tl
- ang
- gl
- nn
- ur
- kk
- be
- hy
- te
- lv
- mk
- als
- is
- wuu
- my
- sco
- mn
- ceb
- ast
- cy
- kn
- br
- an
- gu
- bar
- uz
- lb
- ne
- si
- war
- jv
- ga
- oc
- ku
- sw
- nds
- ckb
- ia
- yi
- fy
- scn
- gan
- tt
- am
license: cc-by-nc-4.0
---
# xlm-mlm-100-1280
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
xlm-mlm-100-1280 is the XLM model, which was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
## Model Description
- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 100 languages, see [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-17-1280](https://huggingface.co/xlm-mlm-17-1280)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
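For example, a minimal masked-language-modeling sketch (illustrative, not from the original card; the mask token is read from the tokenizer rather than hard-coded, since XLM's mask token differs from BERT's `[MASK]`):
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="xlm-mlm-100-1280")
mask_token = unmasker.tokenizer.mask_token
print(unmasker(f"Hello, I am a {mask_token} model."))
```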
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair-encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the training data and training procedure.
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-100-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are:
|Language| en | es | de | ar | zh | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|
| |83.7|76.6 |73.6|67.4|71.7|62.9|
See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. See the [ipython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples. |
hf-internal-testing/tiny-detr-mobilenetsv3 | d22336251d71ba3637c29c23808b9dfeaa442eda | 2021-09-05T15:50:14.000Z | [
"pytorch",
"detr",
"object-detection",
"transformers"
] | object-detection | false | hf-internal-testing | null | hf-internal-testing/tiny-detr-mobilenetsv3 | 1,198 | null | transformers | 1,650 | Entry not found |
activebus/BERT-XD_Review | 9dbc8322c9767ac81e75e62a5a5376d948c3536f | 2021-05-19T11:38:28.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | activebus | null | activebus/BERT-XD_Review | 1,197 | null | transformers | 1,651 | # ReviewBERT
BERT (post-)trained on review corpora to understand sentiment, opinions and various e-commerce aspects.
Please visit https://github.com/howardhsu/BERT-for-RRC-ABSA for details.
`BERT-XD_Review` is a cross-domain (beyond just `laptop` and `restaurant`) language model, where each training example comes from a single product / restaurant with the same rating. It is post-trained (fine-tuned) from `bert-base-uncased` for 4 epochs on a combination of 5-core Amazon reviews and all Yelp data, roughly 22 GB of text in total.
The preprocessing code is available [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).
## Model Description
The original model is from `BERT-base-uncased`.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights are as simple as, e.g.,
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-XD_Review")
model = AutoModel.from_pretrained("activebus/BERT-XD_Review")
```
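As an illustrative continuation (not part of the original card), the encoder output can be used as a review representation:
```python
# Encode a review and take the [CLS] vector as a sentence-level representation.
inputs = tokenizer("The battery life of this laptop is amazing.", return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
print(cls_embedding.shape)
```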
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
`BERT_Review` is expected to have similar performance on domain-specific tasks (such as aspect extraction) as `BERT-DK`, but much better on general tasks such as aspect sentiment classification (different domains mostly share similar sentiment words).
## Citation
If you find this work useful, please cite as following.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
HooshvareLab/distilbert-fa-zwnj-base-ner | 36ccd9aa3dd64c3a83c76de0b8cc5b3f6fa3dc30 | 2021-03-21T14:32:29.000Z | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"fa",
"transformers",
"autotrain_compatible"
] | token-classification | false | HooshvareLab | null | HooshvareLab/distilbert-fa-zwnj-base-ner | 1,194 | 1 | transformers | 1,652 | ---
language: fa
---
# DistilbertNER
This model is fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/), covering ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per entity class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Distilbert | 0.994534 | 0.946326 | 0.95504 | 0.950663 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.812048 | 0.828010 | 0.819951 |
| EVE | 256 | 0.955056 | 0.996094 | 0.975143 |
| FAC | 248 | 0.972549 | 1.000000 | 0.986083 |
| LOC | 2884 | 0.968403 | 0.967060 | 0.967731 |
| MON | 98 | 0.925532 | 0.887755 | 0.906250 |
| ORG | 3216 | 0.932095 | 0.951803 | 0.941846 |
| PCT | 94 | 0.936842 | 0.946809 | 0.941799 |
| PER | 2645 | 0.959818 | 0.957278 | 0.958546 |
| PRO | 318 | 0.963526 | 0.996855 | 0.979907 |
| TIM | 43 | 0.760870 | 0.813953 | 0.786517 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/distilbert-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. |
ml6team/mt5-small-german-finetune-mlsum | c466d1eeefc34cf39b4e8411410ef1ea3bade115 | 2021-01-28T13:15:00.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"de",
"dataset:mlsum",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | ml6team | null | ml6team/mt5-small-german-finetune-mlsum | 1,193 | 9 | transformers | 1,653 | ---
language: de
tags:
- summarization
datasets:
- mlsum
---
# mT5-small fine-tuned on German MLSUM
This model was finetuned for 3 epochs with a max_len (input) of 768 tokens and target_max_len of 192 tokens.
It was fine-tuned on all German articles present in the train split of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) having less than 384 "words" after splitting on whitespace, which resulted in 80249 articles.
The exact expression to filter the dataset was the following:
```python
dataset = dataset.filter(lambda e: len(e['text'].split()) < 384)
```
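A minimal inference sketch (not part of the original card; the article text and generation parameters are placeholders):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ml6team/mt5-small-german-finetune-mlsum")
article = "Hier steht der Text eines deutschen Nachrichtenartikels ..."  # replace with a real article
print(summarizer(article, max_length=192, truncation=True)[0]["summary_text"])
```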
## Evaluation results
The fine-tuned model was evaluated on 2000 random articles from the validation set.
Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for both the fine-tuned model and the lead-3 baseline (which simply produces the leading three sentences of the document) and are presented in the following table.
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| ------------- |:-------:| --------:| -------:|
| mt5-small | 0.399 | 0.318 | 0.392 |
| lead-3 | 0.343 | 0.263 | 0.341 | |
davanstrien/deit_flyswot | 035587aa11a00f4590f87e748a359c32efe44a76 | 2022-04-03T17:45:11.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"model-index"
] | image-classification | false | davanstrien | null | davanstrien/deit_flyswot | 1,190 | null | transformers | 1,654 | ---
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
model-index:
- name: deit_flyswot
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: F1
type: f1
value: 0.990761405263678
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit_flyswot
This model was trained from scratch on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0755
- F1: 0.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
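A minimal inference sketch (not part of the original card; it assumes the checkpoint ships an image processor config, and the image path is a placeholder):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="davanstrien/deit_flyswot")
print(classifier("path/to/scanned_page.jpg"))  # hypothetical local image file
```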
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 0.5710 | 0.8095 |
| No log | 2.0 | 104 | 0.2814 | 0.9380 |
| No log | 3.0 | 156 | 0.1719 | 0.9555 |
| No log | 4.0 | 208 | 0.1410 | 0.9692 |
| No log | 5.0 | 260 | 0.1457 | 0.9680 |
| No log | 6.0 | 312 | 0.1084 | 0.9747 |
| No log | 7.0 | 364 | 0.0892 | 0.9736 |
| No log | 8.0 | 416 | 0.0962 | 0.9831 |
| No log | 9.0 | 468 | 0.0819 | 0.9796 |
| 0.2034 | 10.0 | 520 | 0.0916 | 0.9778 |
| 0.2034 | 11.0 | 572 | 0.0793 | 0.9827 |
| 0.2034 | 12.0 | 624 | 0.0818 | 0.9894 |
| 0.2034 | 13.0 | 676 | 0.0852 | 0.9807 |
| 0.2034 | 14.0 | 728 | 0.0938 | 0.9778 |
| 0.2034 | 15.0 | 780 | 0.0814 | 0.9876 |
| 0.2034 | 16.0 | 832 | 0.0702 | 0.9892 |
| 0.2034 | 17.0 | 884 | 0.0801 | 0.9892 |
| 0.2034 | 18.0 | 936 | 0.0806 | 0.9892 |
| 0.2034 | 19.0 | 988 | 0.0769 | 0.9926 |
| 0.0115 | 20.0 | 1040 | 0.0800 | 0.9926 |
| 0.0115 | 21.0 | 1092 | 0.0794 | 0.9926 |
| 0.0115 | 22.0 | 1144 | 0.0762 | 0.9846 |
| 0.0115 | 23.0 | 1196 | 0.0789 | 0.9830 |
| 0.0115 | 24.0 | 1248 | 0.0794 | 0.9829 |
| 0.0115 | 25.0 | 1300 | 0.0770 | 0.9908 |
| 0.0115 | 26.0 | 1352 | 0.0791 | 0.9829 |
| 0.0115 | 27.0 | 1404 | 0.0813 | 0.9892 |
| 0.0115 | 28.0 | 1456 | 0.0816 | 0.9908 |
| 0.0058 | 29.0 | 1508 | 0.0774 | 0.9908 |
| 0.0058 | 30.0 | 1560 | 0.0755 | 0.9908 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
RajSang/pegasus-sports-titles | 6bfbb3f6138b4b573ca80d4051b245868a1bf84e | 2022-05-09T09:26:14.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"en",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | false | RajSang | null | RajSang/pegasus-sports-titles | 1,185 | 1 | transformers | 1,655 | ---
tags:
- generated_from_trainer
widget:
- text: "Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home
his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response.
First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent
cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net.
The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener.
Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November.
Gerrard is not at Villa to learn how to avoid relegation.
His demands remain as high as they were as a player and Coutinho's arrival is an example of that.
Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game.
The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees.
Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away.
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution.
However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him."
language: en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-sports-titles
This model is a Pegasus model fine-tuned on **sports news articles scraped from the internet (for educational purposes only)**. It can generate titles for sports articles. Try it out using the inference API.
## Model description
A Pegasus model tuned for generating scientific titles has been further fine-tuned to generate titles for sports articles. During training, articles on **Tennis, Football (Soccer), Cricket, Athletics and Rugby** were used to generate titles. I experimented with training the tokenizer from scratch, but it did not give good results compared to the pre-trained tokenizer.
## Usage
```python
from transformers import pipeline
#Feel free to play around with the generation parameters.
#Reduce the beam width for faster inference
#Note that the maximum length for the generated titles is 64
gen_kwargs = {"length_penalty": 0.6, "num_beams":4, "num_return_sequences": 4,"num_beam_groups":4,"diversity_penalty":2.0}
pipe = pipeline("summarization", model="RajSang/pegasus-sports-titles")
#Change the article according to your wish
article="""
Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home
his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response.
First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent
cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net.
The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener.
Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November.
Gerrard is not at Villa to learn how to avoid relegation.
His demands remain as high as they were as a player and Coutinho's arrival is an example of that.
Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game.
The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees.
Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away.
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution.
However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him.
"""
result=pipe(article, **gen_kwargs)[0]["summary_text"]
print(result)
''' Output
Title 1 :
Coutinho's arrival sparks Villa comeback
Title 2 :
Philippe Coutinho marked his debut for Aston Villa with a goal and an assist as Steven Gerrard's side came from two goals down to draw with Manchester United.
Title 3 :
Steven Gerrard's first game in charge of Aston Villa ended in a dramatic draw against Manchester United - but it was the arrival of Philippe Coutinho that marked the night.
Title 4 :
Liverpool loanee Philippe Coutinho marked his first appearance for Aston Villa with two goals as Steven Gerrard's side came from two goals down to draw 2-2.'''
```
## Training procedure
During training, **short titles were combined with the articles' subtitles to improve the quality of the generated titles, and the subtitles were removed from the main body of the articles.**
## Limitations
In rare cases, if the opening few lines of an article are descriptive enough, the model may simply copy them instead of drawing on information further down the article, which can reduce the quality of the generated title.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
**Rouge1: 38.2315**
**Rouge2: 18.6598**
**RougeL: 31.7393**
**RougeLsum: 31.7086**
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
indobenchmark/indobart-v2 | 7192ee75ba70ca247c7abfb8e7268588145c0bde | 2022-06-21T17:52:37.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"id",
"dataset:Indo4B+",
"arxiv:2104.08200",
"transformers",
"indogpt",
"indobenchmark",
"indonlg",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | indobenchmark | null | indobenchmark/indobart-v2 | 1,183 | 3 | transformers | 1,656 | ---
language: id
tags:
- indogpt
- indobenchmark
- indonlg
license: mit
inference: false
datasets:
- Indo4B+
---
# IndoBART-v2 Model
[IndoBART-v2](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
textattack/xlnet-base-cased-SST-2 | 9ceeb077dcd5cf5ae790572b2bd6aec755a263be | 2020-06-09T16:56:53.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/xlnet-base-cased-SST-2 | 1,183 | 2 | transformers | 1,657 | Entry not found |
facebook/mcontriever-msmarco | 9ff6abed2c2fdf32bbbd8b4e98fb10160e317375 | 2022-05-29T08:50:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | facebook | null | facebook/mcontriever-msmarco | 1,183 | null | transformers | 1,658 | Entry not found |
IDEA-CCNL/Erlangshen-Ubert-330M-Chinese | 13a559f940c1dec0d06812a453c9c79c1ba3c523 | 2022-07-02T13:41:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"transformers",
"NLU",
"Sentiment",
"Chinese",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-Ubert-330M-Chinese | 1,180 | null | transformers | 1,659 | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: false
---
# Erlangshen-Ubert-330M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).
We collected 70+ Chinese-domain datasets for fine-tuning, with a total of 1,065,069 samples. Our model is mainly based on [macbert](https://huggingface.co/hfl/chinese-macbert-base).
Ubert is a solution we proposed for the [2022 AIWIN Competition](http://ailab.aiwin.org.cn/competitions/68#results), where it achieved **<font color=#FF0000 >first place on the A/B leaderboards</font>**, an improvement of 20 percentage points over the officially provided baseline. Ubert can handle not only common extraction tasks such as entity recognition and event extraction, but also classification tasks such as news classification and natural language inference.
**<font color=#FF0000 >More details are available in our [GitHub repository](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)</font>**
## Usage
Install the `fengshen` package (`pip install fengshen`), or install it from source:
```bash
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable ./
```
Then run the following code:
```python
import argparse
from fengshen import UbertPiplines
total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UbertPiplines.piplines_args(total_parser)
args = total_parser.parse_args()
args.pretrained_model_path = "IDEA-CCNL/Erlangshen-Ubert-330M-Chinese"
test_data=[
{
"task_type": "抽取任务",
"subtask_type": "实体识别",
"text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
"choices": [
{"entity_type": "小区名字"},
{"entity_type": "岗位职责"}
],
"id": 0}
]
model = UbertPiplines(args)
result = model.predict(test_data)
for line in result:
print(line)
```
If you find this resource useful, please cite the following in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
bakrianoo/sinai-voice-ar-stt | 2d226249edf809b01a0e11159d1201ae1704b63c | 2022-03-23T18:25:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bakrianoo | null | bakrianoo/sinai-voice-ar-stt | 1,179 | 7 | transformers | 1,660 | ---
language:
- ar
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: Sinai Voice Arabic Speech Recognition Model
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice ar
args: ar
metrics:
- type: wer
value: 0.181
name: Test WER
- type: cer
value: 0.049
name: Test CER
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 93.03
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ar
metrics:
- name: Test WER
type: wer
value: 90.79
widget:
- example_title: Example 1
src: https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19077324.mp3
- example_title: Example 2
src: https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19205138.mp3
- example_title: Example 3
src: https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19331711.mp3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sinai Voice Arabic Speech Recognition Model
# نموذج **صوت سيناء** للتعرف على الأصوات العربية الفصحى و تحويلها إلى نصوص
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Wer: 0.1808
It achieves the following results on the evaluation set:
- eval_loss = 0.2141
- eval_samples = 10388
- eval_wer = 0.181
- eval_cer = 0.049
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id bakrianoo/sinai-voice-ar-stt --dataset mozilla-foundation/common_voice_8_0 --config ar --split test
```
### Inference Without LM
```python
from transformers import (Wav2Vec2Processor, Wav2Vec2ForCTC)
import torchaudio
import torch
def speech_file_to_array_fn(voice_path, resampling_to=16000):
speech_array, sampling_rate = torchaudio.load(voice_path)
resampler = torchaudio.transforms.Resample(sampling_rate, resampling_to)
return resampler(speech_array)[0].numpy(), sampling_rate
# load the model
cp = "bakrianoo/sinai-voice-ar-stt"
processor = Wav2Vec2Processor.from_pretrained(cp)
model = Wav2Vec2ForCTC.from_pretrained(cp)
# recognize the text in a sample sound file
sound_path = './my_voice.mp3'
sample, sr = speech_file_to_array_fn(sound_path)
inputs = processor([sample], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values,).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.354 | 0.64 | 1000 | 0.4109 | 0.4493 |
| 0.5886 | 1.28 | 2000 | 0.2798 | 0.3099 |
| 0.4977 | 1.92 | 3000 | 0.2387 | 0.2673 |
| 0.4253 | 2.56 | 4000 | 0.2266 | 0.2523 |
| 0.3942 | 3.2 | 5000 | 0.2171 | 0.2437 |
| 0.3619 | 3.84 | 6000 | 0.2076 | 0.2253 |
| 0.3245 | 4.48 | 7000 | 0.2088 | 0.2186 |
| 0.308 | 5.12 | 8000 | 0.2086 | 0.2206 |
| 0.2881 | 5.76 | 9000 | 0.2089 | 0.2105 |
| 0.2557 | 6.4 | 10000 | 0.2015 | 0.2004 |
| 0.248 | 7.04 | 11000 | 0.2044 | 0.1953 |
| 0.2251 | 7.68 | 12000 | 0.2058 | 0.1932 |
| 0.2052 | 8.32 | 13000 | 0.2117 | 0.1878 |
| 0.1976 | 8.96 | 14000 | 0.2104 | 0.1825 |
| 0.1845 | 9.6 | 15000 | 0.2156 | 0.1821 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0 |
CAMeL-Lab/bert-base-arabic-camelbert-msa | 277069fd3645fedb22b746caf38d111aadee0241 | 2021-09-14T14:33:41.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa | 1,178 | 3 | transformers | 1,661 | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA** (`bert-base-arabic-camelbert-msa`), a model pre-trained on the entire MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
|✔|`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.08507660031318665,
'token': 2854,
'token_str': 'العمل'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.058905381709337234,
'token': 3696, 'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.04660581797361374, 'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]',
'score': 0.04156001657247543,
'token': 12413, 'token_str': 'الربح'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.03534102067351341,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 dataset.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
hf-internal-testing/tiny-detr-mobilenetsv3-panoptic | d7cb3c9eb87c7d7de00190ea97d48da1ba07206d | 2021-09-27T19:40:12.000Z | [
"pytorch",
"detr",
"image-segmentation",
"transformers"
] | image-segmentation | false | hf-internal-testing | null | hf-internal-testing/tiny-detr-mobilenetsv3-panoptic | 1,177 | 1 | transformers | 1,662 | Entry not found |
junnyu/roformer_chinese_sim_char_ft_base | 38c5088bbdaeeecfef68696bd2c83b16baa0fb92 | 2022-04-15T03:52:49.000Z | [
"pytorch",
"roformer",
"text-generation",
"zh",
"transformers",
"tf2.0"
] | text-generation | false | junnyu | null | junnyu/roformer_chinese_sim_char_ft_base | 1,174 | 3 | transformers | 1,663 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
inference: False
---
# Installation
- pip install roformer==0.4.3
# Usage
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
    '''Generate n sentences similar to `text`, then return the k most similar ones.
    Approach: generate candidates with the seq2seq model, then rank them by encoder similarity.
    '''
    # generate candidate similar sentences
r = []
inputs1 = tokenizer(text, return_tensors="pt")
for _ in range(n):
inputs1.to(device)
output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "") # 去除空格,去除原始text文本。
r.append(output)
    # rank the candidate sentences by similarity
r = [i for i in set(r) if i != text and len(i) > 0]
r = [text] + r
inputs2 = tokenizer(r, padding=True, return_tensors="pt")
with torch.no_grad():
inputs2.to(device)
outputs = model(**inputs2)
Z = outputs.pooler_output.cpu().numpy()
Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
argsort = np.dot(Z[1:], -Z[0]).argsort()
return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
``` |
Helsinki-NLP/opus-mt-de-es | d6bff091731341b977e4ca7294d2c309a2ca11e4 | 2021-09-09T21:30:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-es | 1,171 | null | transformers | 1,664 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-es
* source languages: de
* target languages: es
* OPUS readme: [de-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.es | 48.5 | 0.676 |
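A minimal usage sketch with the standard MarianMT classes (not part of the original card):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-de-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Das ist ein Test."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```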
|
facebook/wmt21-dense-24-wide-x-en | b5e35923f54293f03bd6072b93585124475829e0 | 2022-05-26T22:27:50.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"multilingual",
"ha",
"is",
"ja",
"cs",
"ru",
"zh",
"de",
"en",
"arxiv:2108.03265",
"transformers",
"translation",
"wmt21",
"license:mit",
"autotrain_compatible"
] | translation | false | facebook | null | facebook/wmt21-dense-24-wide-x-en | 1,166 | 6 | transformers | 1,665 | ---
language:
- multilingual
- ha
- is
- ja
- cs
- ru
- zh
- de
- en
license: mit
tags:
- translation
- wmt21
---
# WMT 21 X-En
WMT 21 X-En is a 4.7B multilingual encoder-decoder (seq-to-seq) model trained for many-to-one multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2108.03265) and first released in [this](https://github.com/pytorch/fairseq/tree/main/examples/wmt21) repository.
The model can directly translate text from 7 languages: Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de) to English.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
Since the model was trained with domain tags, you should prepend them to the input as well.
* "wmtdata newsdomain": Use for sentences in the news domain
* "wmtdata otherdomain": Use for sentences in all other domain
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt21-dense-24-wide-x-en")
tokenizer = AutoTokenizer.from_pretrained("facebook/wmt21-dense-24-wide-x-en")
# translate German to English
tokenizer.src_lang = "de"
inputs = tokenizer("wmtdata newsdomain Ein Modell für viele Sprachen", return_tensors="pt")
generated_tokens = model.generate(**inputs)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "A model for many languages"
# translate Icelandic to English
tokenizer.src_lang = "is"
inputs = tokenizer("wmtdata newsdomain Ein fyrirmynd fyrir mörg tungumál", return_tensors="pt")
generated_tokens = model.generate(**inputs)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "One model for many languages"
```
See the [model hub](https://huggingface.co/models?filter=wmt21) to look for more fine-tuned versions.
## Languages covered
English (en), Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de)
## BibTeX entry and citation info
```
@inproceedings{tran2021facebook,
title={Facebook AI’s WMT21 News Translation Task Submission},
author={Chau Tran and Shruti Bhosale and James Cross and Philipp Koehn and Sergey Edunov and Angela Fan},
booktitle={Proc. of WMT},
year={2021},
}
``` |
textattack/roberta-base-ag-news | 80f0a42b53970634dc15f4b59342978410585b46 | 2021-05-20T22:15:20.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/roberta-base-ag-news | 1,166 | 1 | transformers | 1,666 | ## TextAttack Model CardThis `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9469736842105263, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
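A minimal usage sketch (not part of the original card). Note that the checkpoint exposes generic `LABEL_0` to `LABEL_3` ids; mapping them to World / Sports / Business / Sci-Tech follows the standard `ag_news` label order and is an assumption here:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="textattack/roberta-base-ag-news")
print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```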
|
CAUKiel/JavaBERT | 5028efb75040cbd2fe33e10fe5f4c232b455cee8 | 2022-07-19T18:45:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"code",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | CAUKiel | null | CAUKiel/JavaBERT | 1,165 | 4 | transformers | 1,667 | ---
language:
- code
license: apache-2.0
widget:
- text: 'public [MASK] isOdd(Integer num) {if (num % 2 == 0) {return "even";} else {return "odd";}}'
---
## JavaBERT
A BERT-like model pretrained on Java software code.
### Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A ```bert-base-cased``` tokenizer is used by this model.
### Training Objective
A MLM (Masked Language Model) objective was used to train this model.
### Usage
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model='CAUKiel/JavaBERT')
output = pipe(CODE) # Replace with Java code; Use '[MASK]' to mask tokens/words in the code.
```
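For instance, continuing from the pipeline above (an illustrative example mirroring the widget prompt):
```python
code = 'public [MASK] isOdd(Integer num) {if (num % 2 == 0) {return "even";} else {return "odd";}}'
for prediction in pipe(code)[:3]:  # top-3 candidates for the masked return type
    print(prediction["token_str"], prediction["score"])
```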
#### Related Model
A version of this model using an uncased tokenizer is available at [CAUKiel/JavaBERT-uncased](https://huggingface.co/CAUKiel/JavaBERT-uncased).
|
facebook/wav2vec2-large-100k-voxpopuli | ad2f1b5b6f2f0a78683b90e78ebc07af6022c6db | 2021-11-05T12:45:52.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"multilingual",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-100k-voxpopuli | 1,163 | 2 | transformers | 1,668 | ---
language: multilingual
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the 100k unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **for speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
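As an illustrative sketch (not from the original card), the pretrained model can also be used as a feature extractor on raw 16 kHz audio; this assumes a preprocessor config ships with the checkpoint, otherwise construct `Wav2Vec2FeatureExtractor` with its default settings:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-100k-voxpopuli")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-100k-voxpopuli")
speech = torch.randn(16000).numpy()  # stand-in for one second of 16 kHz audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, num_frames, hidden_size)
```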
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
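As a rough, unofficial sketch, the pretrained encoder can also be used directly as a speech feature extractor. The preprocessing settings below are standard 16 kHz wav2vec 2.0 defaults and are an assumption; check whether the checkpoint ships its own preprocessor configuration.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-100k-voxpopuli")
model.eval()

# Assumed preprocessing defaults: mono, 16 kHz, zero-mean/unit-variance normalisation
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16_000, padding_value=0.0, do_normalize=True
)

speech = torch.randn(16_000).numpy()  # one second of dummy audio; replace with real 16 kHz speech
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```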
|
codeparrot/codeparrot-small | e7e4f5d39319551a760f07c0e1035e379617c721 | 2022-07-03T19:54:59.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"code",
"dataset:codeparrot/codeparrot-clean",
"dataset:openai_humaneval",
"transformers",
"generation",
"license:apache-2.0"
] | text-generation | false | codeparrot | null | codeparrot/codeparrot-small | 1,163 | 9 | transformers | 1,669 | ---
language:
- code
license: apache-2.0
tags:
- code
- gpt2
- generation
datasets:
- "codeparrot/codeparrot-clean"
- "openai_humaneval"
metrics:
- "evaluate-metric/code_eval"
---
# CodeParrot 🦜 (small)
CodeParrot 🦜 is a GPT-2 model (110M parameters) trained to generate Python code.
## Usage
You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="codeparrot/codeparrot-small")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/codeparrot/codeparrot-clean) with the following settings:
|Config|Value|
|-------|-----|
|Batch size| 192 |
|Context size| 1024 |
|Training steps| 150'000|
|Gradient accumulation| 1|
|Gradient checkpointing| False|
|Learning rate| 5e-4 |
|Weight decay | 0.1 |
|Warmup steps| 2000 |
|Schedule| Cosine |
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 29 billion tokens.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | Value |
|-------|-----|
|pass@1 | 3.80% |
|pass@10 | 6.57% |
|pass@100 | 12.78% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probability that at least one out of k generations passes the tests.
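For intuition, the sketch below (not part of the official evaluation code) implements the standard unbiased pass@k estimator, where `n` candidate programs are sampled per problem and `c` of them pass the unit tests; the numbers used are illustrative only.
```Python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimate from n samples with c correct ones
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative values, not the reported benchmark numbers
print(f"{pass_at_k(n=200, c=10, k=10):.2%}")
```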
## Resources
- Dataset: [full](https://huggingface.co/datasets/codeparrot/codeparrot-clean), [train](https://huggingface.co/datasets/codeparrot/codeparrot-clean-train), [valid](https://huggingface.co/datasets/codeparrot/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](), [highlighting]() |
paulowoicho/t5-podcast-summarisation | 162966482402d91ce84facd36e835ad09f244a72 | 2020-11-11T10:15:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"[en]",
"dataset:Spotify Podcasts Dataset",
"arxiv:2004.04270",
"arxiv:1910.10683",
"transformers",
"summarisation",
"lm-head",
"autotrain_compatible"
] | text2text-generation | false | paulowoicho | null | paulowoicho/t5-podcast-summarisation | 1,161 | 2 | transformers | 1,670 | ---
language: "[en]"
datasets:
- Spotify Podcasts Dataset
tags:
- t5
- summarisation
- pytorch
- lm-head
metrics:
- ROUGE
pipeline:
- summarisation
---
# T5 for Automatic Podcast Summarisation
This model is the result of fine-tuning [t5-base](https://huggingface.co/t5-base) on the [Spotify Podcast Dataset](https://arxiv.org/abs/2004.04270).
It is based on [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) which was pretrained on the [C4 dataset](https://huggingface.co/datasets/c4).
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
## Intended uses & limitations
This model is intended to be used for automatic podcast summarisation. As creator-provided descriptions
were used for training, the model also learned to generate promotional material (links, hashtags, etc.) in its summaries, so
some post-processing may be required on the model's outputs.
If using on Colab, the instance will crash if the number of tokens in the transcript exceeds 7000. I discovered that the model
generated reasonable summaries even when the podcast transcript was truncated to reduce the number of tokens.
#### How to use
The model can be used with the summarisation pipeline as follows:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="paulowoicho/t5-podcast-summarisation", tokenizer="paulowoicho/t5-podcast-summarisation")
summary = summarizer(podcast_transcript, min_length=5, max_length=20)
print(summary[0]['summary_text'])
```
## Training data
This model is the result of fine-tuning [t5-base](https://huggingface.co/t5-base) on the [Spotify Podcast Dataset](https://arxiv.org/abs/2004.04270).
[Pre-processing](https://github.com/paulowoicho/msc_project/blob/master/reformat.py) was done on the original data before fine-tuning.
## Training procedure
Training was largely based on [Fine-tune T5 for Summarization](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) by [Abhishek Kumar Mishra](https://github.com/abhimishra91)
|
MaRiOrOsSi/t5-base-finetuned-question-answering | 2c815b9dd13188d751e372a0d8cc9f3892087c9a | 2022-04-08T18:00:14.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:duorc",
"transformers",
"Generative Question Answering",
"autotrain_compatible"
] | text2text-generation | false | MaRiOrOsSi | null | MaRiOrOsSi/t5-base-finetuned-question-answering | 1,161 | null | transformers | 1,671 | ---
language: en
datasets:
- duorc
widget:
- text: "question: Is Giacomo Italian? context: Giacomo is 25 years old and he was born in Tuscany"
- text: "question: Where does Christian come from? context: Christian is a student of UNISI but he come from Caserta"
- text: "question: Is the dog coat grey? context: You have a beautiful dog with a brown coat"
tags:
- Generative Question Answering
---
# T5 for Generative Question Answering
This model is the result produced by Christian Di Maio and Giacomo Nunziati for the Language Processing Technologies exam.
It is [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [DuoRC](https://huggingface.co/datasets/duorc) for **Generative Question Answering**, obtained by simply prepending the *question* to the *context*.
## Code
The code used for T5 training is available at this [repository](https://github.com/nunziati/bert-vs-t5-for-question-answering/blob/main/train_t5_selfrc.py).
## Results
The results are evaluated on:
- DuoRC/SelfRC -> Test Subset
- DuoRC/ParaphraseRC -> Test Subset
- SQUADv1 -> Validation Subset
All tokens not corresponding to dictionary words were removed when computing the evaluation metrics.
The reference model is BERT fine-tuned on SQuAD v1.
| Model | SelfRC | ParaphraseRC | SQUAD
|--|--|--|--|
| T5-BASE-FINETUNED | **F1**: 49.00 **EM**: 31.38 | **F1**: 28.75 **EM**: 15.18 | **F1**: 63.28 **EM**: 37.24 |
| BERT-BASE-FINETUNED | **F1**: 47.18 **EM**: 30.76 | **F1**: 21.20 **EM**: 12.62 | **F1**: 77.19 **EM**: 57.81 |
## How to use it 🚀
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
model_name = "MaRiOrOsSi/t5-base-finetuned-question-answering"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)
question = "What is 42?"
context = "42 is the answer to life, the universe and everything"
input = f"question: {question} context: {context}"
encoded_input = tokenizer([input],
return_tensors='pt',
max_length=512,
truncation=True)
output = model.generate(input_ids = encoded_input.input_ids,
attention_mask = encoded_input.attention_mask)
output = tokenizer.decode(output[0], skip_special_tokens=True)
print(output)
```
## Citation
Created by [Christian Di Maio](https://it.linkedin.com/in/christiandimaio) and [Giacomo Nunziati](https://it.linkedin.com/in/giacomo-nunziati-b19572185)
> Made with <span style="color: #e25555;">♥</span> in Italy
|
PlanTL-GOB-ES/RoBERTalex | bedf21ecb3a6beec20f1e68d88b7dbb041991dfb | 2021-11-09T09:30:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:legal_ES",
"dataset:temu_legal",
"arxiv:2110.12201",
"transformers",
"legal",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/RoBERTalex | 1,160 | 4 | transformers | 1,672 | ---
language:
- es
license: apache-2.0
tags:
- legal
- spanish
datasets:
- legal_ES
- temu_legal
metrics:
- ppl
widget:
- text: "La ley fue <mask> finalmente."
- text: "El Tribunal <mask> desestimó el recurso de amparo."
- text: "Hay base legal dentro del marco <mask> actual."
---
# Spanish Legal-domain RoBERTa
There are few models trained for the Spanish language. Some of these models have been trained on limited, unclean corpora. The ones derived from the Spanish National Plan for Language Technologies are proficient at several tasks and have been trained using large-scale clean corpora. However, Spanish legal-domain language could be thought of as an independent language on its own. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora.
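## Usage
A minimal, illustrative fill-mask example (using one of the widget sentences above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/RoBERTalex")

for prediction in fill_mask("La ley fue <mask> finalmente."):
    print(f"{prediction['token_str']!r} (score: {prediction['score']:.3f})")
```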
## Citing
```
@misc{gutierrezfandino2021legal,
title={Spanish Legalese Language Model and Corpora},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Aitor Gonzalez-Agirre and Marta Villegas},
year={2021},
eprint={2110.12201},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
For more information visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-legal-es)
## Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
peterhsu/marian-finetuned-kde4-en-to-zh_TW-accelerate | 57bd8aa1bbf04ec9234d74caabdd329a9927c942 | 2022-02-28T09:36:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | peterhsu | null | peterhsu/marian-finetuned-kde4-en-to-zh_TW-accelerate | 1,159 | null | transformers | 1,673 | ---
license: apache-2.0
tags:
- translation
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-zh_TW-accelerate
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-zh_TW
metrics:
- name: Bleu
type: bleu
value: 40.07
---
# marian-finetuned-kde4-en-to-zh_TW-accelerate
## Model description
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Bleu: 40.70
More information needed
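A minimal, unofficial usage sketch with the `transformers` translation pipeline (the input string is illustrative only):
```python
from transformers import pipeline

translator = pipeline("translation", model="peterhsu/marian-finetuned-kde4-en-to-zh_TW-accelerate")
print(translator("Default to expanded threads"))
```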
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0 |
remotejob/tweetsGPT2fi_v0 | 34abb218bb8e6f61bec9a47c0db81e776229f1a6 | 2022-05-27T22:22:53.000Z | [
"pytorch",
"rust",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | remotejob | null | remotejob/tweetsGPT2fi_v0 | 1,157 | null | transformers | 1,674 | Entry not found |
setu4993/smaller-LaBSE | abd4e324cf0850b32f1dbf4b08fad6022ab47c0b | 2021-12-05T06:13:27.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"ar",
"de",
"en",
"es",
"fr",
"it",
"ja",
"ko",
"nl",
"pl",
"pt",
"ru",
"th",
"tr",
"zh",
"dataset:CommonCrawl",
"dataset:Wikipedia",
"arxiv:2010.05609",
"arxiv:2007.01852",
"transformers",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"labse",
"license:apache-2.0"
] | feature-extraction | false | setu4993 | null | setu4993/smaller-LaBSE | 1,156 | 4 | transformers | 1,675 | ---
language:
- ar
- de
- en
- es
- fr
- it
- ja
- ko
- nl
- pl
- pt
- ru
- th
- tr
- zh
tags:
- bert
- sentence_embedding
- multilingual
- google
- sentence-similarity
- labse
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
# LaBSE
## Model description
Smaller Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model distilled from the [original LaBSE model](https://huggingface.co/setu4993/LaBSE) to 15 languages (from the original 109 languages) using the techniques described in the paper ['Load What You Need: Smaller Versions of Multilingual BERT'](https://arxiv.org/abs/2010.05609) by [Ukjae Jeong](https://github.com/jeongukjae/).
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/smaller-LaBSE).
- Original model: [TensorFlow Hub](https://tfhub.dev/jeongukjae/smaller_LaBSE_15lang/1).
- Distillation source: [GitHub](https://github.com/jeongukjae/smaller-labse).
- Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt).
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/smaller-LaBSE")
model = BertModel.from_pretrained("setu4993/smaller-LaBSE")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852).
### BibTeX entry and citation info
```bibtex
@misc{feng2020languageagnostic,
title={Language-agnostic BERT Sentence Embedding},
author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
year={2020},
eprint={2007.01852},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Helsinki-NLP/opus-mt-az-en | d5618bb9172d2400a504d8b95baf144517ac6b48 | 2021-01-18T07:48:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"az",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-az-en | 1,155 | null | transformers | 1,676 | ---
language:
- az
- en
tags:
- translation
license: apache-2.0
---
### aze-eng
* source group: Azerbaijani
* target group: English
* OPUS readme: [aze-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.eng | 31.9 | 0.490 |
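A minimal usage sketch with the Marian classes in `transformers` (the Azerbaijani example sentence is illustrative only):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-az-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Mən kitab oxuyuram."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```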
### System Info:
- hf_name: aze-eng
- source_languages: aze
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'en']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: eng
- short_pair: az-en
- chrF2_score: 0.49
- bleu: 31.9
- brevity_penalty: 0.997
- ref_len: 16165.0
- src_name: Azerbaijani
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: en
- prefer_old: False
- long_pair: aze-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
facebook/nllb-200-distilled-1.3B | b14baa07325b1cea23404c4d374d7eb469b1973d | 2022-07-19T15:45:28.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho",
"bjn",
"bo",
"bs",
"bug",
"bg",
"ca",
"ceb",
"cs",
"cjk",
"ckb",
"crh",
"cy",
"da",
"de",
"dik",
"dyu",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fj",
"fi",
"fon",
"fr",
"fur",
"fuv",
"gaz",
"gd",
"ga",
"gl",
"gn",
"gu",
"ht",
"ha",
"he",
"hi",
"hne",
"hr",
"hu",
"hy",
"ig",
"ilo",
"id",
"is",
"it",
"jv",
"ja",
"kab",
"kac",
"kam",
"kn",
"ks",
"ka",
"kk",
"kbp",
"kea",
"khk",
"km",
"ki",
"rw",
"ky",
"kmb",
"kmr",
"knc",
"kg",
"ko",
"lo",
"lij",
"li",
"ln",
"lt",
"lmo",
"ltg",
"lb",
"lua",
"lg",
"luo",
"lus",
"lvs",
"mag",
"mai",
"ml",
"mar",
"min",
"mk",
"mt",
"mni",
"mos",
"mi",
"my",
"nl",
"nn",
"nb",
"npi",
"nso",
"nus",
"ny",
"oc",
"ory",
"pag",
"pa",
"pap",
"pbt",
"pes",
"plt",
"pl",
"pt",
"prs",
"quy",
"ro",
"rn",
"ru",
"sg",
"sa",
"sat",
"scn",
"shn",
"si",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"st",
"es",
"sc",
"sr",
"ss",
"su",
"sv",
"swh",
"szl",
"ta",
"taq",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"tpi",
"tn",
"ts",
"tk",
"tum",
"tr",
"tw",
"tzm",
"ug",
"uk",
"umb",
"ur",
"uzn",
"vec",
"vi",
"war",
"wo",
"xh",
"ydd",
"yo",
"yue",
"zh",
"zsm",
"zu",
"dataset:flores-200",
"transformers",
"nllb",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | text2text-generation | false | facebook | null | facebook/nllb-200-distilled-1.3B | 1,155 | 2 | transformers | 1,677 | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
language_details: "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn"
tags:
- nllb
license: "cc-by-nc-4.0"
datasets:
- flores-200
metrics:
- bleu
- spbleu
- chrf++
---
# NLLB-200
This is the model card of NLLB-200's distilled 1.3B variant.
Here are the [metrics](https://tinyurl.com/nllb200densedst1bmetrics) for that particular checkpoint.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: the exact training algorithm, data, and the strategies used to handle data imbalances for high- and low-resource languages when training NLLB-200 are described in the paper.
- Paper or other resource for more information: NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation, arXiv, 2022
- License: CC-BY-NC
- Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
## Intended Use
- Primary intended uses: NLLB-200 is a machine translation model primarily intended for research in machine translation, especially for low-resource languages. It allows for single-sentence translation among 200 languages. Information on how to use the model can be found in the Fairseq code repository along with the training code and references to evaluation and training data.
- Primary intended users: Primary users are researchers and the machine translation research community.
- Out-of-scope use cases: NLLB-200 is a research model and is not released for production deployment. NLLB-200 is trained on general-domain text data and is not intended to be used with domain-specific texts, such as the medical or legal domain. The model is not intended to be used for document translation. The model was trained with input lengths not exceeding 512 tokens, therefore translating longer sequences might result in quality degradation. NLLB-200 translations cannot be used as certified translations.
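As an illustration of the primary intended use (single-sentence translation), the sketch below is an unofficial example and assumes a `transformers` version with NLLB support; the target language is selected by forcing its FLORES-200 code as the first generated token.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-200-distilled-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Research is the engine of better translation.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # target language code
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```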
## Metrics
• Model performance measures: NLLB-200 model was evaluated using BLEU, spBLEU, and chrF++ metrics widely adopted by machine translation community. Additionally, we performed human evaluation with the XSTS protocol and measured the toxicity of the generated translations.
## Evaluation Data
- Datasets: Flores-200 dataset is described in Section 4
- Motivation: We used Flores-200 as it provides full evaluation coverage of the languages in NLLB-200
- Preprocessing: Sentence-split raw text data was preprocessed using SentencePiece. The
SentencePiece model is released along with NLLB-200.
## Training Data
• We used parallel multilingual data from a variety of sources to train the model. We provide detailed report on data selection and construction process in Section 5 in the paper. We also used monolingual data constructed from Common Crawl. We provide more details in Section 5.2.
## Ethical Considerations
• In this work, we took a reflexive approach in technological development to ensure that we prioritize human users and minimize risks that could be transferred to them. While we reflect on our ethical considerations throughout the article, here are some additional points to highlight. For one, many languages chosen for this study are low-resource languages, with a heavy emphasis on African languages. While quality translation could improve education and information access in many of these communities, such access could also make groups with lower levels of digital literacy more vulnerable to misinformation or online scams. The latter scenarios could arise if bad actors misappropriate our work for nefarious activities, which we conceive as an example of unintended use. Regarding data acquisition, the training data used for model development were mined from various publicly available sources on the web. Although we invested heavily in data cleaning, personally identifiable information may not be entirely eliminated. Finally, although we did our best to optimize for translation quality, mistranslations produced by the model could remain. Although the odds are low, this could have an adverse impact on those who rely on these translations to make important decisions (particularly when related to health and safety).
## Caveats and Recommendations
• Our model has been tested on the Wikimedia domain with limited investigation on other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments.
## Carbon Footprint Details
• The carbon dioxide (CO2e) estimate is reported in Section 8.8. |
cambridgeltl/magic_mscoco | e0cfb935df539629d5abb2ecdc925aef3ecf35fa | 2022-04-08T14:39:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/magic_mscoco | 1,154 | null | transformers | 1,678 | Entry not found |
NlpHUST/vibert4news-base-cased | d0926f978504f72d29bea14d7315b9e3ef09f292 | 2021-08-10T03:13:56.000Z | [
"pytorch",
"fill-mask",
"vn",
"transformers",
"autotrain_compatible"
] | fill-mask | false | NlpHUST | null | NlpHUST/vibert4news-base-cased | 1,149 | 1 | transformers | 1,679 | ---
language: vn
---
# BERT for Vietnamese, trained on a news dataset of more than 20 GB
Applied to the sentiment analysis task using [AIViVN's comments dataset](https://www.aivivn.com/contests/6).
The model achieved 0.90268 on the public leaderboard (the winner's score is 0.90087).
Bert4news is used in ViNLP (https://github.com/bino282/ViNLP), a Vietnamese toolkit for word segmentation and Named Entity Recognition.
We use a word-level SentencePiece vocabulary with basic BERT tokenization and the same configuration as BERT base, with lowercase = False.
You can download trained model:
- [tensorflow](https://drive.google.com/file/d/1X-sRDYf7moS_h61J3L79NkMVGHP-P-k5/view?usp=sharing).
- [pytorch](https://drive.google.com/file/d/11aFSTpYIurn-oI2XpAmcCTccB_AonMOu/view?usp=sharing).
Use with huggingface/transformers
``` bash
import torch
from transformers import BertTokenizer,BertModel
tokenizer= BertTokenizer.from_pretrained("NlpHUST/vibert4news-base-cased")
bert_model = BertModel.from_pretrained("NlpHUST/vibert4news-base-cased")
line = "Tôi là sinh viên trường Bách Khoa Hà Nội ."
input_id = tokenizer.encode(line,add_special_tokens = True)
att_mask = [int(token_id > 0) for token_id in input_id]
input_ids = torch.tensor([input_id])
att_masks = torch.tensor([att_mask])
with torch.no_grad():
features = bert_model(input_ids,att_masks)
print(features)
```
# Vietnamese toolkit with bert
ViNLP is an annotation system for Vietnamese. It uses the pretrained [Bert4news](https://github.com/bino282/bert4news/), fine-tuned for Vietnamese NLP components such as word segmentation and Named Entity Recognition (NER), and achieves high accuracy.
### Installation
```bash
git clone https://github.com/bino282/ViNLP.git
cd ViNLP
python setup.py develop build
```
### Test Segmentation
The model achieved an F1 score of 0.984 on the VLSP 2013 dataset.
|Model | F1 |
|--------|-----------|
| **BertVnTokenizer** | 98.40 |
| **DongDu** | 96.90 |
| **JvnSegmenter-Maxent** | 97.00 |
| **JvnSegmenter-CRFs** | 97.06 |
| **VnTokenizer** | 97.33 |
| **UETSegmenter** | 97.87 |
| **VnTokenizer** | 97.33 |
| **VnCoreNLP (i.e. RDRsegmenter)** | 97.90 |
``` bash
from ViNLP import BertVnTokenizer
tokenizer = BertVnTokenizer()
sentences = tokenizer.split(["Tổng thống Donald Trump ký sắc lệnh cấm mọi giao dịch của Mỹ với ByteDance và Tecent - chủ sở hữu của 2 ứng dụng phổ biến TikTok và WeChat sau 45 ngày nữa."])
print(sentences[0])
```
``` bash
Tổng_thống Donald_Trump ký sắc_lệnh cấm mọi giao_dịch của Mỹ với ByteDance và Tecent - chủ_sở_hữu của 2 ứng_dụng phổ_biến TikTok và WeChat sau 45 ngày nữa .
```
### Test Named Entity Recognition
The model achieved an F1 score of 0.786 on VLSP 2018 for all named entities, including nested entities.
|Model | F1 |
|--------|-----------|
| **BertVnNer** | 78.60 |
| **VNER Attentive Neural Network** | 77.52 |
| **vietner CRF (ngrams + word shapes + cluster + w2v)** | 76.63 |
| **ZA-NER BiLSTM** | 74.70 |
``` bash
from ViNLP import BertVnNer
bert_ner_model = BertVnNer()
sentence = "Theo SCMP, báo cáo của CSIS với tên gọi Định hình Tương lai Chính sách của Mỹ với Trung Quốc cũng cho thấy sự ủng hộ tương đối rộng rãi của các chuyên gia về việc cấm Huawei, tập đoàn viễn thông khổng lồ của Trung Quốc"
entities = bert_ner_model.annotate([sentence])
print(entities)
```
``` bash
[{'ORGANIZATION': ['SCMP', 'CSIS', 'Huawei'], 'LOCATION': ['Mỹ', 'Trung Quốc']}]
```
Run training with base config
``` bash
python train_pytorch.py \
    --model_path=bert4news.pytorch \
    --max_len=200 \
    --batch_size=16 \
    --epochs=6 \
    --lr=2e-5
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
monologg/koelectra-base-v2-discriminator | b87e70eb7b3ea33b24fc2e7a85b2cc8321b9dd28 | 2021-10-20T16:54:30.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers",
"korean",
"license:apache-2.0"
] | null | false | monologg | null | monologg/koelectra-base-v2-discriminator | 1,149 | 1 | transformers | 1,680 | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA v2 (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v2-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v2-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 5084, 16248, 3770, 19059, 29965, 2259, 10431, 5, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v2-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v2-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions[0].tolist()[1:-1])))  # drop the batch dimension before aligning with tokens
```
|
Hate-speech-CNERG/dehatebert-mono-english | 25d0e4d9122d2a5c283e07405a325e3dfd4a73b3 | 2021-09-25T13:55:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"arxiv:2004.06465",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-english | 1,146 | 2 | transformers | 1,681 | ---
language: en
license: apache-2.0
---
This model is used for detecting **hate speech** in the **English language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only English-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates, and the best validation score achieved is 0.726030 for a learning rate of 2e-5. The training code can be found here: https://github.com/punyajoy/DE-LIMIT
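A minimal inference sketch with the `transformers` pipeline is shown below; the mapping of the generic `LABEL_*` outputs to "hate speech" vs. "normal" is an assumption and should be verified against the checkpoint's `id2label` configuration.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-english")

# The LABEL_* -> class mapping is assumed; check model.config.id2label before relying on it
print(classifier("Such a lovely day, thanks everyone for the support!"))
```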
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
TencentGameMate/chinese-wav2vec2-base | 3991242c806928916fff4a8c0e4f76acf661b743 | 2022-06-24T01:53:18.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers",
"license:mit"
] | null | false | TencentGameMate | null | TencentGameMate/chinese-wav2vec2-base | 1,145 | 3 | transformers | 1,682 | ---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain).
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
python package:
transformers==4.16.2
```python
import torch
import soundfile as sf
from transformers import (
Wav2Vec2FeatureExtractor,
Wav2Vec2ForPreTraining,
Wav2Vec2Model,
)
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
model_path=""
wav_path=""
mask_prob=0.0
mask_length=10
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = Wav2Vec2Model.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"  # select the device to run on
model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
# for Wav2Vec2ForPreTraining
# batch_size, raw_sequence_length = input_values.shape
# sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
# mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.0, mask_length=2)
# mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
# for Wav2Vec2ForPreTraining
# outputs = model(input_values, mask_time_indices=mask_time_indices, output_hidden_states=True)
# last_hidden_state = outputs.hidden_states[-1]
``` |
flair/ner-dutch-large | 44c285912a9d6eec4d0858580f3cb13b7b8c9959 | 2021-05-08T15:36:03.000Z | [
"pytorch",
"nl",
"dataset:conll2003",
"arxiv:2011.06993",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | flair | null | flair/ner-dutch-large | 1,144 | 3 | flair | 1,683 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: nl
datasets:
- conll2003
widget:
- text: "George Washington ging naar Washington"
---
## Dutch NER in Flair (large model)
This is the large 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **95,25** (CoNLL-03 Dutch)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch-large")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_DUTCH
corpus = CONLL_03_DUTCH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-dutch-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
              )
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
Averium/DialoGPT-medium-TailsBot1.1 | 462a773376d390ff76c8e078388a2afde248b9de | 2022-06-17T00:29:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Averium | null | Averium/DialoGPT-medium-TailsBot1.1 | 1,136 | null | transformers | 1,684 | ---
tags:
- conversational
---
# Miles Prower DialoGPT Model |
sentence-transformers/sentence-t5-xl | e0976ba9afd18be963c22c680367a3928c44fd22 | 2022-02-09T14:02:31.000Z | [
"pytorch",
"t5",
"en",
"arxiv:2108.08877",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/sentence-t5-xl | 1,135 | 1 | sentence-transformers | 1,685 | ---
pipeline_tag: sentence-similarity
language: en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/sentence-t5-xl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks.
This model was converted from the TensorFlow model [st5-3b-1](https://tfhub.dev/google/sentence-t5/st5-3b/1) to PyTorch. When using this model, have a look at the publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-3B model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/sentence-t5-xl')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/sentence-t5-xl)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877)
|
DeepPavlov/bert-base-bg-cs-pl-ru-cased | 0ab00895c22312978e0a8abd16bbec3fbf7f2bc8 | 2021-11-08T12:58:09.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"bg",
"cs",
"pl",
"ru",
"transformers"
] | feature-extraction | false | DeepPavlov | null | DeepPavlov/bert-base-bg-cs-pl-ru-cased | 1,131 | null | transformers | 1,686 | ---
language:
- bg
- cs
- pl
- ru
---
# bert-base-bg-cs-pl-ru-cased
SlavicBERT\[1\] \(Slavic \(bg, cs, pl, ru\), cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on Russian News and four Wikipedias: Bulgarian, Czech, Polish, and Russian. Subtoken vocabulary was built using this data. Multilingual BERT was used as an initialization for SlavicBERT.
08.11.2021: upload model with MLM and NSP heads
\[1\]: Arkhipov M., Trofimova M., Kuratov Y., Sorokin A. \(2019\). [Tuning Multilingual Transformers for Language-Specific Named Entity Recognition](https://www.aclweb.org/anthology/W19-3712/). ACL anthology W19-3712.
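A minimal feature-extraction sketch with `transformers` (the example sentence is illustrative only):
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "DeepPavlov/bert-base-bg-cs-pl-ru-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Москва — столица России.", return_tensors="pt")  # "Moscow is the capital of Russia."
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```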
|
voidful/dpr-ctx_encoder-bert-base-multilingual | c7a3dc617754e93efe785aa88dc1f52b4f7cb688 | 2021-02-21T09:00:44.000Z | [
"pytorch",
"dpr",
"multilingual",
"dataset:NQ",
"dataset:Trivia",
"dataset:SQuAD",
"dataset:MLQA",
"dataset:DRCD",
"arxiv:2004.04906",
"transformers"
] | null | false | voidful | null | voidful/dpr-ctx_encoder-bert-base-multilingual | 1,130 | 4 | transformers | 1,687 | ---
language: multilingual
datasets:
- NQ
- Trivia
- SQuAD
- MLQA
- DRCD
---
# dpr-ctx_encoder-bert-base-multilingual
## Description
Multilingual DPR model based on bert-base-multilingual-cased.
[DPR model](https://arxiv.org/abs/2004.04906)
[DPR repo](https://github.com/facebookresearch/DPR)
## Data
1. [NQ](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
2. [Trivia](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
3. [SQuAD](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
4. [DRCD*](https://github.com/DRCKnowledgeTeam/DRCD)
5. [MLQA*](https://github.com/facebookresearch/MLQA)
`question pairs for train`: 644,217
`question pairs for dev`: 73,710
*DRCD and MLQA are converted using script from haystack [squad_to_dpr.py](https://github.com/deepset-ai/haystack/blob/master/haystack/retriever/squad_to_dpr.py)
## Training Script
I use the script from [haystack](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial9_DPR_training.ipynb)
## Usage
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained('voidful/dpr-ctx_encoder-bert-base-multilingual')
model = DPRContextEncoder.from_pretrained('voidful/dpr-ctx_encoder-bert-base-multilingual')
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```
Follow the tutorial from `haystack`:
[Better Retrievers via "Dense Passage Retrieval"](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial6_Better_Retrieval_via_DPR.ipynb)
```
from haystack.retriever.dense import DensePassageRetriever
retriever = DensePassageRetriever(document_store=document_store,
query_embedding_model="voidful/dpr-question_encoder-bert-base-multilingual",
passage_embedding_model="voidful/dpr-ctx_encoder-bert-base-multilingual",
max_seq_len_query=64,
max_seq_len_passage=256,
batch_size=16,
use_gpu=True,
embed_title=True,
use_fast_tokenizers=True)
```
|
Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit | 7853d0d3eef3dd556b99ae342e7461c61d8faed5 | 2022-06-18T20:51:30.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit | 1,128 | null | sentence-transformers | 1,688 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-1.3B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
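As an unofficial quick-start sketch, the checkpoint can also be loaded directly with `sentence-transformers`; note that the linked repository additionally handles the SPECB bracket tokens and asymmetric query/document encoding, which this sketch omits.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit")
embeddings = model.encode(["How does weighted-mean pooling work for GPT sentence embeddings?"])
print(embeddings.shape)
```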
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 62398 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
allenai/t5-small-squad2-question-generation | 7e7d6d8a68f96223a5cdaaf063e55293d52f1aef | 2021-06-23T11:56:56.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/t5-small-squad2-question-generation | 1,128 | 12 | transformers | 1,689 | A simple question-generation model built based on SQuAD 2.0 dataset.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-squad2-question-generation"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
```
which should result in the following:
```
['What is the name of the man who is a brotherly love?']
['What did He thank all fellow bloggers and organizations that showed support?']
['Where is the Veliefendi Hippodrome located?']
```
|
diptanu/fBERT | 7bd599f887e294a43afb6b4c3f611d66af2f94ae | 2021-09-01T19:57:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diptanu | null | diptanu/fBERT | 1,128 | 3 | transformers | 1,690 | fBERT: A Neural Transformer for Identifying Offensive Content [Accepted at EMNLP 2021]
Authors: Diptanu Sarkar, Marcos Zampieri, Tharindu Ranasinghe and Alexander Ororbia
About:
Transformer-based models such as BERT, ELMo, and XLM-R have achieved state-of-the-art performance across various NLP tasks, including the identification of offensive language and hate speech, an important problem in social media. Previous studies have shown that domain-specific fine-tuning or retraining of models before attempting to solve downstream tasks can lead to excellent results in multiple domains. Fine-tuning/retraining complex models to identify offensive language has not been substantially explored before, and we address this gap by proposing fBERT, a bert-base-uncased model retrained using over 1.4 million offensive instances from the SOLID dataset. The shifted fBERT model better incorporates domain-specific offensive language and social media features. The fBERT model achieves better results on both the OffensEval and HatEval tasks and on the HS & O dataset compared to BERT and HateBERT.
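Illustrative loading sketch (not the paper's training setup): the retrained encoder can be prepared for downstream offensive-language classification as below. The two-label head is an assumed configuration, is randomly initialised, and must be fine-tuned on labelled data; if the repository does not ship tokenizer files, load the tokenizer from `bert-base-uncased` instead.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("diptanu/fBERT")
# num_labels=2 (offensive vs. not offensive) is an assumed setup; the head is untrained until fine-tuned
model = AutoModelForSequenceClassification.from_pretrained("diptanu/fBERT", num_labels=2)

inputs = tokenizer("An example sentence to classify after fine-tuning.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2)
```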
|
sentence-transformers/distilroberta-base-msmarco-v2 | f273032139d26a1e54280e0b7d2f4a2193de4feb | 2022-06-15T21:50:52.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/distilroberta-base-msmarco-v2 | 1,128 | null | sentence-transformers | 1,691 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/distilroberta-base-msmarco-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distilroberta-base-msmarco-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilroberta-base-msmarco-v2')
model = AutoModel.from_pretrained('sentence-transformers/distilroberta-base-msmarco-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilroberta-base-msmarco-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
akdeniz27/roberta-base-cuad | 94a24c27b5d8bf9c2fa89cf80729814cfb002e7b | 2021-11-14T08:42:48.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"transformers",
"autotrain_compatible"
] | question-answering | false | akdeniz27 | null | akdeniz27/roberta-base-cuad | 1,124 | null | transformers | 1,692 | ---
language: en
datasets:
- cuad
---
# RoBERTa Base Model fine-tuned with CUAD dataset
This model is a fine-tuned version of RoBERTa Base, trained on the CUAD dataset: https://huggingface.co/datasets/cuad
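The card links external demos but has no inline usage example; a minimal sketch with the standard `question-answering` pipeline is shown below (the contract excerpt and question are made up for illustration).
```python
from transformers import pipeline

qa = pipeline("question-answering", model="akdeniz27/roberta-base-cuad")

# Illustrative inputs; CUAD questions normally target specific contract clauses.
context = (
    "This Agreement shall commence on January 1, 2021 and shall continue "
    "for a period of two (2) years unless terminated earlier."
)
question = "What is the term of the agreement?"

print(qa(question=question, context=context))
# -> a dict with 'answer', 'score', 'start' and 'end' keys
```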
Link for model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo |
tdopierre/ProtAugment-ParaphraseGenerator | d389c0e6ca11d0add1eaaecf6d8848fa76e6ab46 | 2021-07-07T14:15:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:Quora",
"dataset:MSR",
"dataset:Google-PAWS",
"arxiv:2105.12995",
"transformers",
"Paraphase Generation",
"Data Augmentation",
"autotrain_compatible"
] | text2text-generation | false | tdopierre | null | tdopierre/ProtAugment-ParaphraseGenerator | 1,123 | 4 | transformers | 1,693 | ---
language: "en"
tags:
- Paraphase Generation
- Data Augmentation
datasets:
- Quora
- MSR
- Google-PAWS
---
[](https://arxiv.org/abs/2105.12995)
This model is used to generate paraphrases. It has been trained on a mix of 3 different paraphrase detection datasets: MSR, Quora, Google-PAWS.
We use this model in our ACL'21 Paper ["PROTAUGMENT: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning"](https://arxiv.org/abs/2105.12995)
Used jointly with generation constraints, this model can generate diverse paraphrases. We use those paraphrases as a data augmentation technique to further boost a classification model's generalization capability. Feel free to play with the [code](https://github.com/tdopierre/ProtAugment)!
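A minimal generation sketch is shown below. It uses plain beam search rather than the decoding constraints described in the paper, so the input sentence and the generation parameters are illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "tdopierre/ProtAugment-ParaphraseGenerator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "How can I reset my password?"
inputs = tokenizer(sentence, return_tensors="pt")

# Plain beam search; the paper additionally applies decoding constraints
# to push the paraphrases further away from the input sentence.
outputs = model.generate(**inputs, num_beams=10, num_return_sequences=3, max_length=64)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```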
If you use this model, please consider citing our paper.
```
@article{Dopierre2021ProtAugmentUD,
title={ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning},
author={Thomas Dopierre and C. Gravier and Wilfried Logerais},
journal={ArXiv},
year={2021},
volume={abs/2105.12995}
}
```
|
valhalla/distilt5-qa-qg-hl-12-6 | f865250f90ada38bcb43602dd5254e4c166e6b8e | 2021-09-23T16:42:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:squad",
"transformers",
"question-generation",
"distilt5",
"distilt5-qg",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/distilt5-qa-qg-hl-12-6 | 1,119 | null | transformers | 1,694 | ---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: 'generate question: <hl> 42 <hl> is the answer to life, the universe and everything.
</s>'
- text: 'question: What is 42 context: 42 is the answer to life, the universe and
everything. </s>'
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-base-qa-qg-hl](https://huggingface.co/valhalla/t5-base-qa-qg-hl) model, trained for question answering and answer-aware question generation tasks.
The model is distilled using the **No Teacher Distillation** method proposed by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-base-qa-qg-hl` and fine-tune further on the same data; a rough sketch of this layer-copying step is shown after the table. The following table lists the other distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
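As an illustration of the layer-copying step mentioned above, the sketch below builds a 12-6 student from the `t5-base-qa-qg-hl` teacher by keeping every other decoder block. This is only an approximation of the procedure; the exact layer selection and fine-tuning recipe used for the released checkpoints may differ.
```python
import copy
from transformers import T5ForConditionalGeneration

teacher = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-qa-qg-hl")

# The student keeps all 12 encoder layers but only 6 decoder layers.
student_config = copy.deepcopy(teacher.config)
student_config.num_decoder_layers = 6
student = T5ForConditionalGeneration(student_config)

# Copy shared embeddings, the full encoder and the LM head from the teacher.
student.shared.load_state_dict(teacher.shared.state_dict())
student.encoder.load_state_dict(teacher.encoder.state_dict())
student.lm_head.load_state_dict(teacher.lm_head.state_dict())

# Copy alternating decoder blocks (0, 2, 4, ...) into the student.
for student_idx, teacher_idx in enumerate(range(0, 12, 2)):
    student.decoder.block[student_idx].load_state_dict(
        teacher.decoder.block[teacher_idx].state_dict()
    )
student.decoder.final_layer_norm.load_state_dict(
    teacher.decoder.final_layer_norm.state_dict()
)

# The student is then fine-tuned further on the same QA/QG data.
```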
You can play with the model using the inference API. Here's how you can use it
For QG
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything.`
For QA
`question: What is 42 context: 42 is the answer to life, the universe and everything.`
For more details see [this](https://github.com/patil-suraj/question_generation) repo.
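If you want to try the prefixes above without the custom pipeline code from that repo, a minimal sketch using plain `transformers` is shown below (generation parameters are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "valhalla/distilt5-qa-qg-hl-12-6"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "generate question: <hl> 42 <hl> is the answer to life, the universe and everything."
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, num_beams=4, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```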
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="valhalla/distilt5-qa-qg-hl-12-6")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
``` |
huggingface/CodeBERTa-language-id | 386451c69a3cb8722b742e66987d888db858b33c | 2022-06-27T15:49:04.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"code",
"dataset:code_search_net",
"arxiv:1909.09436",
"transformers"
] | text-classification | false | huggingface | null | huggingface/CodeBERTa-language-id | 1,118 | 12 | transformers | 1,695 | ---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
datasets:
- code_search_net
---
# CodeBERTa-language-id: The World’s fanciest programming language identification algo 🤯
To demonstrate the usefulness of our CodeBERTa pretrained model on downstream tasks beyond language modeling, we fine-tune the [`CodeBERTa-small-v1`](https://huggingface.co/huggingface/CodeBERTa-small-v1) checkpoint on the task of classifying a sample of code into the programming language it's written in (*programming language identification*).
We add a sequence classification head on top of the model.
On the evaluation dataset, we attain an eval accuracy and F1 > 0.999 which is not surprising given that the task of language identification is relatively easy (see an intuition why, below).
## Quick start: using the raw model
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification

CODEBERTA_LANGUAGE_ID = "huggingface/CodeBERTa-language-id"
tokenizer = RobertaTokenizer.from_pretrained(CODEBERTA_LANGUAGE_ID)
model = RobertaForSequenceClassification.from_pretrained(CODEBERTA_LANGUAGE_ID)

CODE_TO_IDENTIFY = "def f(x):\n    return x**2"  # any code snippet as a string
input_ids = tokenizer.encode(CODE_TO_IDENTIFY, return_tensors="pt")
logits = model(input_ids)[0]
language_idx = logits.argmax()  # index for the resulting label
```
## Quick start: using Pipelines 💪
```python
from transformers import TextClassificationPipeline
pipeline = TextClassificationPipeline(
model=RobertaForSequenceClassification.from_pretrained(CODEBERTA_LANGUAGE_ID),
tokenizer=RobertaTokenizer.from_pretrained(CODEBERTA_LANGUAGE_ID)
)
pipeline(CODE_TO_IDENTIFY)
```
Let's start with something very easy:
```python
pipeline("""
def f(x):
return x**2
""")
# [{'label': 'python', 'score': 0.9999965}]
```
Now let's probe shorter code samples:
```python
pipeline("const foo = 'bar'")
# [{'label': 'javascript', 'score': 0.9977546}]
```
What if I remove the `const` token from the assignment?
```python
pipeline("foo = 'bar'")
# [{'label': 'javascript', 'score': 0.7176245}]
```
For some reason, this is still statistically detected as JS code, even though it's also valid Python code. However, if we slightly tweak it:
```python
pipeline("foo = u'bar'")
# [{'label': 'python', 'score': 0.7638422}]
```
This is now detected as Python (Notice the `u` string modifier).
Okay, enough with the JS and Python domination already! Let's try fancier languages:
```python
pipeline("echo $FOO")
# [{'label': 'php', 'score': 0.9995257}]
```
(Yes, I used the word "fancy" to describe PHP 😅)
```python
pipeline("outcome := rand.Intn(6) + 1")
# [{'label': 'go', 'score': 0.9936151}]
```
Why is the problem of language identification so easy (with the correct toolkit)? Because code's syntax is rigid, and simple tokens such as `:=` (the assignment operator in Go) are perfect predictors of the underlying language:
```python
pipeline(":=")
# [{'label': 'go', 'score': 0.9998052}]
```
By the way, because we trained our own custom tokenizer on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset, and it handles streams of bytes in a very generic way, syntactic constructs such as `:=` are represented by a single token:
```python
self.tokenizer.encode(" :=", add_special_tokens=False)
# [521]
```
<br>
## Fine-tuning code
<details>
```python
import gzip
import json
import logging
import os
from pathlib import Path
from typing import Dict, List, Tuple
import numpy as np
import torch
from sklearn.metrics import f1_score
from tokenizers.implementations.byte_level_bpe import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader, Dataset
from torch.utils.tensorboard.writer import SummaryWriter
from tqdm import tqdm, trange
from transformers import RobertaForSequenceClassification
from transformers.data.metrics import acc_and_f1, simple_accuracy
logging.basicConfig(level=logging.INFO)
CODEBERTA_PRETRAINED = "huggingface/CodeBERTa-small-v1"
LANGUAGES = [
"go",
"java",
"javascript",
"php",
"python",
"ruby",
]
FILES_PER_LANGUAGE = 1
EVALUATE = True
# Set up tokenizer
tokenizer = ByteLevelBPETokenizer("./pretrained/vocab.json", "./pretrained/merges.txt",)
tokenizer._tokenizer.post_processor = BertProcessing(
("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=512)
# Set up Tensorboard
tb_writer = SummaryWriter()
class CodeSearchNetDataset(Dataset):
examples: List[Tuple[List[int], int]]
def __init__(self, split: str = "train"):
"""
train | valid | test
"""
self.examples = []
src_files = []
for language in LANGUAGES:
src_files += list(
Path("../CodeSearchNet/resources/data/").glob(f"{language}/final/jsonl/{split}/*.jsonl.gz")
)[:FILES_PER_LANGUAGE]
for src_file in src_files:
label = src_file.parents[3].name
label_idx = LANGUAGES.index(label)
print("🔥", src_file, label)
lines = []
fh = gzip.open(src_file, mode="rt", encoding="utf-8")
for line in fh:
o = json.loads(line)
lines.append(o["code"])
examples = [(x.ids, label_idx) for x in tokenizer.encode_batch(lines)]
self.examples += examples
print("🔥🔥")
def __len__(self):
return len(self.examples)
def __getitem__(self, i):
# We’ll pad at the batch level.
return self.examples[i]
model = RobertaForSequenceClassification.from_pretrained(CODEBERTA_PRETRAINED, num_labels=len(LANGUAGES))
train_dataset = CodeSearchNetDataset(split="train")
eval_dataset = CodeSearchNetDataset(split="test")
def collate(examples):
input_ids = pad_sequence([torch.tensor(x[0]) for x in examples], batch_first=True, padding_value=1)
labels = torch.tensor([x[1] for x in examples])
    # ^^ an extra .unsqueeze(-1) on the labels is unnecessary here
return input_ids, labels
train_dataloader = DataLoader(train_dataset, batch_size=256, shuffle=True, collate_fn=collate)
batch = next(iter(train_dataloader))
model.to("cuda")
model.train()
for param in model.roberta.parameters():
param.requires_grad = False
## ^^ Only train final layer.
print(f"num params:", model.num_parameters())
print(f"num trainable params:", model.num_parameters(only_trainable=True))
def evaluate():
eval_loss = 0.0
nb_eval_steps = 0
preds = np.empty((0), dtype=np.int64)
out_label_ids = np.empty((0), dtype=np.int64)
model.eval()
eval_dataloader = DataLoader(eval_dataset, batch_size=512, collate_fn=collate)
for step, (input_ids, labels) in enumerate(tqdm(eval_dataloader, desc="Eval")):
with torch.no_grad():
outputs = model(input_ids=input_ids.to("cuda"), labels=labels.to("cuda"))
loss = outputs[0]
logits = outputs[1]
eval_loss += loss.mean().item()
nb_eval_steps += 1
preds = np.append(preds, logits.argmax(dim=1).detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, labels.detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
acc = simple_accuracy(preds, out_label_ids)
f1 = f1_score(y_true=out_label_ids, y_pred=preds, average="macro")
print("=== Eval: loss ===", eval_loss)
print("=== Eval: acc. ===", acc)
print("=== Eval: f1 ===", f1)
# print(acc_and_f1(preds, out_label_ids))
tb_writer.add_scalars("eval", {"loss": eval_loss, "acc": acc, "f1": f1}, global_step)
### Training loop
global_step = 0
train_iterator = trange(0, 4, desc="Epoch")
optimizer = torch.optim.AdamW(model.parameters())
for _ in train_iterator:
epoch_iterator = tqdm(train_dataloader, desc="Iteration")
for step, (input_ids, labels) in enumerate(epoch_iterator):
optimizer.zero_grad()
outputs = model(input_ids=input_ids.to("cuda"), labels=labels.to("cuda"))
loss = outputs[0]
loss.backward()
tb_writer.add_scalar("training_loss", loss.item(), global_step)
optimizer.step()
global_step += 1
if EVALUATE and global_step % 50 == 0:
evaluate()
model.train()
evaluate()
os.makedirs("./models/CodeBERT-language-id", exist_ok=True)
model.save_pretrained("./models/CodeBERT-language-id")
```
</details>
<br>
## CodeSearchNet citation
<details>
```bibtex
@article{husain_codesearchnet_2019,
title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
shorttitle = {{CodeSearchNet} {Challenge}},
url = {http://arxiv.org/abs/1909.09436},
urldate = {2020-03-12},
journal = {arXiv:1909.09436 [cs, stat]},
author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
month = sep,
year = {2019},
note = {arXiv: 1909.09436},
}
```
</details>
|
voidful/bart-distractor-generation-both | 33dac39b96071b8fb44fe0bab40b89c2057aae27 | 2021-04-04T16:20:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:race",
"transformers",
"distractor",
"generation",
"seq2seq",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/bart-distractor-generation-both | 1,117 | null | transformers | 1,696 | ---
language: en
tags:
- bart
- distractor
- generation
- seq2seq
datasets:
- race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . </s> The passage mainly tells us </s> treehouses in france."
---
# bart-distractor-generation-both
## Model description
This model is a sequence-to-sequence distractor generator which takes an answer, question and context as an input, and generates a distractor as an output. It is based on a pretrained `bart-base` model.
This model was trained with Parallel MLM & Answer Negative Regularization; see the [Paper](https://www.aclweb.org/anthology/2020.findings-emnlp.393/) for details.
For details, please see https://github.com/voidful/BDG.
## Intended uses & limitations
The model is trained to generate examination-style multiple-choice distractors. It performs best with full-sentence answers.
#### How to use
The model takes the concatenated context, question and answer as an input sequence, and will generate a full distractor sentence as an output sequence. The maximum sequence length is 1024 tokens. Inputs should be organised into the following format:
```
context </s> question </s> answer
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
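A minimal sketch of this flow is shown below; the context, question and answer strings are illustrative, and the generation parameters are not tuned.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "voidful/bart-distractor-generation-both"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "The glass roof allows you to look at the stars from your bed."
question = "What does the glass roof allow travelers to do?"
answer = "Look at the stars from their bed."

# Concatenate context </s> question </s> answer as described above.
text = f"{context} </s> {question} </s> {answer}"
input_ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids

outputs = model.generate(input_ids, num_beams=5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```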
For details, please see https://github.com/voidful/BDG.
#### Limitations and bias
The model is limited to generating distractors in the same style as those found in [RACE](https://www.aclweb.org/anthology/D17-1082/). The generated distractors can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent. |
snunlp/KR-SBERT-V40K-klueNLI-augSTS | f06554f8087e15a6ffc279ef812ba8fed57e81d5 | 2022-05-03T03:53:28.000Z | [
"pytorch",
"bert",
"feature-extraction",
"ko",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | snunlp | null | snunlp/KR-SBERT-V40K-klueNLI-augSTS | 1,116 | 2 | sentence-transformers | 1,697 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ko
---
# snunlp/KR-SBERT-V40K-klueNLI-augSTS
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
model = AutoModel.from_pretrained('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=snunlp/KR-SBERT-V40K-klueNLI-augSTS)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Application for document classification
Tutorial in Google Colab: https://colab.research.google.com/drive/1S6WSjOx9h6Wh_rX1Z2UXwx9i_uHLlOiM
|Model|Accuracy|
|-|-|
|KR-SBERT-Medium-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-augSTS|0.8511|
|KR-SBERT-V40K-klueNLI-augSTS|**0.8628**|
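The sketch below shows one way a document classifier could be built on top of these sentence embeddings: a frozen KR-SBERT encoder feeding a scikit-learn logistic regression. The toy documents and labels are illustrative; the linked tutorial may differ in detail.
```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

model = SentenceTransformer('snunlp/KR-SBERT-V40K-klueNLI-augSTS')

# Toy labelled documents; replace with a real Korean document-classification set.
train_texts = ["주식 시장이 급등했다", "새 영화가 개봉했다"]
train_labels = [0, 1]  # e.g. 0 = economy, 1 = culture

clf = LogisticRegression(max_iter=1000).fit(model.encode(train_texts), train_labels)

test_texts = ["코스피 지수가 하락했다"]
print(clf.predict(model.encode(test_texts)))  # expected: [0]
```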
## Citation
```bibtex
@misc{kr-sbert,
author = {Park, Suzi and Hyopil Shin},
title = {KR-SBERT: A Pre-trained Korean-specific Sentence-BERT model},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/snunlp/KR-SBERT}}
}
``` |
facebook/xglm-1.7B | a1060a08b652653f6c0dece48f53bb785538e4d6 | 2022-02-15T01:29:52.000Z | [
"pytorch",
"xglm",
"text-generation",
"arxiv:2112.10668",
"transformers",
"license:mit"
] | text-generation | false | facebook | null | facebook/xglm-1.7B | 1,112 | null | transformers | 1,698 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
inference: false
---
# XGLM-1.7B
XGLM-1.7B is a multilingual autoregressive language model (with 1.7 billion parameters) trained on a balanced corpus of a diverse set of languages totaling 500 billion sub-tokens. It was introduced in the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin\*, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li\* (\*Equal Contribution). The original implementation was released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/xglm).
## Training Data Statistics
The training data statistics of XGLM-1.7B are shown in the table below.
| ISO-639-1| family | name | # tokens | ratio | ratio w/ lowRes upsampling |
|:--------|:-----------------|:------------------------|-------------:|------------:|-------------:|
| en | Indo-European | English | 803526736124 | 0.489906 | 0.3259 |
| ru | Indo-European | Russian | 147791898098 | 0.0901079 | 0.0602 |
| zh | Sino-Tibetan | Chinese | 132770494630 | 0.0809494 | 0.0483 |
| de | Indo-European | German | 89223707856 | 0.0543992 | 0.0363 |
| es | Indo-European | Spanish | 87303083105 | 0.0532282 | 0.0353 |
| fr | Indo-European | French | 77419639775 | 0.0472023 | 0.0313 |
| ja | Japonic | Japanese | 66054364513 | 0.040273 | 0.0269 |
| it | Indo-European | Italian | 41930465338 | 0.0255648 | 0.0171 |
| pt | Indo-European | Portuguese | 36586032444 | 0.0223063 | 0.0297 |
| el | Indo-European | Greek (modern) | 28762166159 | 0.0175361 | 0.0233 |
| ko | Koreanic | Korean | 20002244535 | 0.0121953 | 0.0811 |
| fi | Uralic | Finnish | 16804309722 | 0.0102455 | 0.0681 |
| id | Austronesian | Indonesian | 15423541953 | 0.00940365 | 0.0125 |
| tr | Turkic | Turkish | 12413166065 | 0.00756824 | 0.0101 |
| ar | Afro-Asiatic | Arabic | 12248607345 | 0.00746791 | 0.0099 |
| vi | Austroasiatic | Vietnamese | 11199121869 | 0.00682804 | 0.0091 |
| th | Tai–Kadai | Thai | 10842172807 | 0.00661041 | 0.044 |
| bg | Indo-European | Bulgarian | 9703797869 | 0.00591635 | 0.0393 |
| ca | Indo-European | Catalan | 7075834775 | 0.0043141 | 0.0287 |
| hi | Indo-European | Hindi | 3448390110 | 0.00210246 | 0.014 |
| et | Uralic | Estonian | 3286873851 | 0.00200399 | 0.0133 |
| bn | Indo-European | Bengali, Bangla | 1627447450 | 0.000992245 | 0.0066 |
| ta | Dravidian | Tamil | 1476973397 | 0.000900502 | 0.006 |
| ur | Indo-European | Urdu | 1351891969 | 0.000824241 | 0.0055 |
| sw | Niger–Congo | Swahili | 907516139 | 0.000553307 | 0.0037 |
| te | Dravidian | Telugu | 689316485 | 0.000420272 | 0.0028 |
| eu | Language isolate | Basque | 105304423 | 6.42035e-05 | 0.0043 |
| my | Sino-Tibetan | Burmese | 101358331 | 6.17976e-05 | 0.003 |
| ht | Creole | Haitian, Haitian Creole | 86584697 | 5.27902e-05 | 0.0035 |
| qu | Quechuan | Quechua | 3236108 | 1.97304e-06 | 0.0001 |
## Model card
For intended usage of the model, please refer to the [model card](https://github.com/pytorch/fairseq/blob/main/examples/xglm/model_card.md) released by the XGLM-1.7B development team.
## Example (COPA)
The following snippet shows how to evaluate our models (GPT-3 style, zero-shot) on the Choice of Plausible Alternatives (COPA) task, using examples in English, Chinese and Haitian Creole.
```python
import torch
import torch.nn.functional as F
from transformers import XGLMTokenizer, XGLMForCausalLM
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-1.7B")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-1.7B")
data_samples = {
'en': [
{
"premise": "I wanted to conserve energy.",
"choice1": "I swept the floor in the unoccupied room.",
"choice2": "I shut off the light in the unoccupied room.",
"question": "effect",
"label": "1"
},
{
"premise": "The flame on the candle went out.",
"choice1": "I blew on the wick.",
"choice2": "I put a match to the wick.",
"question": "cause",
"label": "0"
}
],
'zh': [
{
"premise": "我想节约能源。",
"choice1": "我在空着的房间里扫了地板。",
"choice2": "我把空房间里的灯关了。",
"question": "effect",
"label": "1"
},
{
"premise": "蜡烛上的火焰熄灭了。",
"choice1": "我吹灭了灯芯。",
"choice2": "我把一根火柴放在灯芯上。",
"question": "cause",
"label": "0"
}
],
    'ht': [
{
"premise": "M te vle konsève enèji.",
"choice1": "Mwen te fin baleye chanm lib la.",
"choice2": "Mwen te femen limyè nan chanm lib la.",
"question": "effect",
"label": "1"
},
{
"premise": "Flam bouji a te etenn.",
"choice1": "Mwen te soufle bouji a.",
"choice2": "Mwen te limen mèch bouji a.",
"question": "cause",
"label": "0"
}
]
}
def get_logprobs(prompt):
inputs = tokenizer(prompt, return_tensors="pt")
input_ids, output_ids = inputs["input_ids"], inputs["input_ids"][:, 1:]
outputs = model(**inputs, labels=input_ids)
logits = outputs.logits
logprobs = torch.gather(F.log_softmax(logits, dim=2), 2, output_ids.unsqueeze(2))
return logprobs
# Zero-shot evaluation for the Choice of Plausible Alternatives (COPA) task.
# A return value of 0 indicates that the first alternative is more plausible,
# while 1 indicates that the second alternative is more plausible.
def COPA_eval(prompt, alternative1, alternative2):
lprob1 = get_logprobs(prompt + "\n" + alternative1).sum()
lprob2 = get_logprobs(prompt + "\n" + alternative2).sum()
return 0 if lprob1 > lprob2 else 1
for lang in data_samples:
    for idx, example in enumerate(data_samples[lang]):
predict = COPA_eval(example["premise"], example["choice1"], example["choice2"])
print(f'{lang}-{idx}', predict, example['label'])
# en-0 1 1
# en-1 0 0
# zh-0 1 1
# zh-1 0 0
# ht-0 1 1
# ht-1 0 0
``` |
svalabs/cross-electra-ms-marco-german-uncased | 34a0bc5aee354593b64f1c2cfe173356ced6e90f | 2021-06-10T07:20:46.000Z | [
"pytorch",
"electra",
"text-classification",
"arxiv:1908.10084",
"arxiv:1611.09268",
"arxiv:2104.08663",
"arxiv:2104.12741",
"arxiv:2010.02666",
"transformers"
] | text-classification | false | svalabs | null | svalabs/cross-electra-ms-marco-german-uncased | 1,112 | 3 | transformers | 1,699 | # SVALabs - German Uncased Electra Cross-Encoder
In this repository, we present our German, uncased cross-encoder for Passage Retrieval.
This model is based on the German ELECTRA uncased model from the [german-nlp-group](https://huggingface.co/german-nlp-group/electra-base-german-uncased) and was fine-tuned as a cross-encoder for Passage Retrieval using the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) package.
For this purpose, we translated the [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset using the [fairseq-wmt19-en-de](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) translation model.
### Model Details
| | Description or Link |
|---|---|
|**Base model** | [```german-nlp-group/electra-base-german-uncased```](https://huggingface.co/german-nlp-group/electra-base-german-uncased) |
|**Finetuning task**| Passage Retrieval / Semantic Search |
|**Source dataset**| [```MSMARCO-Passage-Ranking```](https://github.com/microsoft/MSMARCO-Passage-Ranking) |
|**Translation model**| [```fairseq-wmt19-en-de```](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) |
### Performance
We evaluated our model on the [GermanDPR testset](https://deepset.ai/germanquad) and followed the benchmark framework of [BEIR](https://github.com/UKPLab/beir).
In order to compare our results, we conducted an evaluation on the same test data with BM25 and presented the results in the table below.
We took every positive and negative context passage out of the test set and deduplicated them, which gives a corpus of 2,871 passages for 1,025 queries. A minimal sketch of the corresponding retrieve-then-rerank setup follows the results table below.
| Model | NDCG@1 | NDCG@5 | NDCG@10 | Recall@1 | Recall@5 | Recall@10 |
|:-------------------:|:------:|:------:|:-------:|:--------:|:--------:|:---------:|
| BM25 | 0.1463 | 0.3451 | 0.4097 | 0.1463 | 0.5424 | 0.7415 |
| BM25(Top 100) +Ours | 0.6410 | 0.7885 | 0.7943 | 0.6410 | 0.8576 | 0.9024 |
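The "BM25(Top 100) + Ours" row corresponds to a retrieve-then-rerank setup: BM25 retrieves up to 100 candidate passages per query and the cross-encoder reorders them. A minimal sketch of that pipeline is shown below; the use of the `rank_bm25` package and the toy corpus are assumptions for illustration, and any BM25 implementation would do.
```python
import numpy as np
from rank_bm25 import BM25Okapi  # assumption: any BM25 implementation works here
from sentence_transformers.cross_encoder import CrossEncoder

corpus = [
    "Der DAX ist heute um sechs Prozent gestiegen.",
    "Der Gepard jagt seine Beute.",
    "Die Entstehung der Erde ist 4,5 Milliarden Jahre her.",
]
query = "wie alt ist die erde"  # "how old is the earth"

# Stage 1: BM25 retrieves the top candidate passages.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
bm25_scores = bm25.get_scores(query.lower().split())
candidates = np.argsort(-bm25_scores)[:100]

# Stage 2: the cross-encoder re-ranks the BM25 candidates.
cross_model = CrossEncoder("svalabs/cross-electra-ms-marco-german-uncased")
ce_scores = cross_model.predict([(query, corpus[i]) for i in candidates])
reranked = [corpus[i] for i in candidates[np.argsort(-ce_scores)]]
print(reranked[0])
```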
### How to Use
With ```sentence-transformers``` package (see [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers) on GitHub for more details):
```python
from sentence_transformers.cross_encoder import CrossEncoder
cross_model = CrossEncoder("svalabs/cross-electra-ms-marco-german-uncased")
```
### Semantic Search Example
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
K = 3 # number of top ranks to retrieve
docs = [
"Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.",
"Der Gepard jagt seine Beute.",
"Wir haben in der Agentur ein neues System für Zeiterfassung.",
"Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.",
"Einen Impftermin kann mir der Arzt momentan noch nicht anbieten.",
"Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.",
"Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.",
"Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.",
"Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.",
"Bei ALDI sind die Bananen gerade im Angebot.",
"Die Entstehung der Erde ist 4,5 milliarden jahre her.",
"Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.",
"DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main."
]
queries = [
"dax steigt",
"dax sinkt",
"probleme mit knieschmerzen",
"software für urlaubsstunden",
"raubtier auf der jagd",
"alter der erde",
"wie alt ist unser planet?",
"wie kapital sichern",
"supermarkt lebensmittel reduziert",
"wodurch ist der tyrannosaurus aussgestorben",
"serien streamen"
]
# encode each query document pair
from itertools import product
combs = list(product(queries, docs))
outputs = cross_model.predict(combs).reshape((len(queries), len(docs)))
# print results
for i, query in enumerate(queries):
ranks = np.argsort(-outputs[i])
print("Query:", query)
for j, r in enumerate(ranks[:3]):
print(f"[{j}: {outputs[i, r]: .3f}]", docs[r])
print("-"*96)
```
**Console Output**:
```
Query: dax steigt
[0: 7.676] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
[1: 0.821] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
[2: -9.905] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.
------------------------------------------------------------------------------------------------
Query: dax sinkt
[0: 8.079] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
[1: -0.491] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
[2: -9.224] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.
------------------------------------------------------------------------------------------------
Query: probleme mit knieschmerzen
[0: 6.753] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.
[1: -5.866] Einen Impftermin kann mir der Arzt momentan noch nicht anbieten.
[2: -9.461] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.
------------------------------------------------------------------------------------------------
Query: software für urlaubsstunden
[0: 1.707] Wir haben in der Agentur ein neues System für Zeiterfassung.
[1: -10.649] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.
[2: -11.280] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
------------------------------------------------------------------------------------------------
Query: raubtier auf der jagd
[0: 4.596] Der Gepard jagt seine Beute.
[1: -6.809] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
[2: -8.392] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.
------------------------------------------------------------------------------------------------
Query: alter der erde
[0: 7.343] Die Entstehung der Erde ist 4,5 milliarden jahre her.
[1: -7.664] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.
[2: -8.020] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.
------------------------------------------------------------------------------------------------
Query: wie alt ist unser planet?
[0: 7.672] Die Entstehung der Erde ist 4,5 milliarden jahre her.
[1: -9.638] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.
[2: -10.251] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.
------------------------------------------------------------------------------------------------
Query: wie kapital sichern
[0: 3.927] Um in Zukunft sein Vermögen zu schützen, sollte man andere Investmentstrategien in Betracht ziehen.
[1: -8.733] Finanzwerte treiben DAX um mehr als sechs Prozent nach oben Frankfurt/Main gegeben.
[2: -10.090] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.
------------------------------------------------------------------------------------------------
Query: supermarkt lebensmittel reduziert
[0: 3.508] Bei ALDI sind die Bananen gerade im Angebot.
[1: -10.057] Das historische Zentrum (centro storico) liegt auf mehr als 100 Inseln in der Lagune von Venedig.
[2: -10.470] DAX dreht ins Minus. Konjunkturdaten und Gewinnmitnahmen belasten Frankfurt/Main.
------------------------------------------------------------------------------------------------
Query: wodurch ist der tyrannosaurus aussgestorben
[0: 0.079] Die Ära der Dinosaurier wurde vermutlich durch den Einschlag eines gigantischen Meteoriten auf der Erde beendet.
[1: -10.701] Mein Arzt sagt, dass mir dabei eher ein Orthopäde helfen könnte.
[2: -11.200] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
------------------------------------------------------------------------------------------------
Query: serien streamen
[0: 3.392] Auf Netflix gibt es endlich die neue Staffel meiner Lieblingsserie.
[1: -5.725] Der Gepard jagt seine Beute.
[2: -8.378] Auf Kreta hat meine Tochter mit Muscheln eine schöne Sandburg gebaut.
------------------------------------------------------------------------------------------------
```
### Contact
- Baran Avinc, [email protected]
- Jonas Grebe, [email protected]
- Lisa Stolz, [email protected]
- Bonian Riebe, [email protected]
### References
- N. Reimers and I. Gurevych (2019), ['Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks'](https://arxiv.org/abs/1908.10084).
- Payal Bajaj et al. (2018), ['MS MARCO: A Human Generated MAchine Reading COmprehension Dataset'](https://arxiv.org/abs/1611.09268).
- N. Thakur et al. (2021), ['BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models'](https://arxiv.org/abs/2104.08663).
- T. Möller, J. Risch and M. Pietsch (2021), ['GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval'](https://arxiv.org/abs/2104.12741).
- Hofstätter et al. (2021), ['Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation'](https://arxiv.org/abs/2010.02666)
|