modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sdadas/polish-longformer-base-4096 | 848f1c0571529d428233937ac131d29ca30c2250 | 2022-03-08T17:58:15.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"license:lgpl-3.0",
"autotrain_compatible"
] | fill-mask | false | sdadas | null | sdadas/polish-longformer-base-4096 | 38 | null | transformers | 6,600 | ---
license: lgpl-3.0
---
|
mafeu/DialoGPT-medium-willem | dd01cff8fb370dac482a5cb5eee4963f7e8693c2 | 2022-03-11T05:15:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mafeu | null | mafeu/DialoGPT-medium-willem | 38 | null | transformers | 6,601 | ---
tags:
- conversational
---
# willem DialoGPT Model |
Alvenir/bert-punct-restoration-en | 7b70ced9f319edea3e02a1d83c118a5b87d9ac04 | 2022-03-23T08:39:39.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Alvenir | null | Alvenir/bert-punct-restoration-en | 38 | null | transformers | 6,602 | ---
license: apache-2.0
---
TODO |
hackathon-pln-es/paraphrase-spanish-distilroberta | 5ed9fdaabd705e7bd88029a3f08ce7397a666d6a | 2022-04-02T18:33:17.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"es",
"dataset:hackathon-pln-es/parallel-sentences",
"arxiv:2004.09813",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | hackathon-pln-es | null | hackathon-pln-es/paraphrase-spanish-distilroberta | 38 | 3 | sentence-transformers | 6,603 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- es
datasets:
- hackathon-pln-es/parallel-sentences
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---
# paraphrase-spanish-distilroberta
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
We follow a **teacher-student** transfer learning approach to train a `bertin-roberta-base-spanish` model using parallel EN-ES sentence pairs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]
model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['Este es un ejemplo', 'Cada oración es transformada']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Evaluation Results
Similarity Evaluation on STS-2017.es-en.txt and STS-2017.es-es.txt (translated manually for evaluation purposes)
We measure the semantic textual similarity (STS) between sentence pairs in different languages:
### ES-ES
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 |
### ES-EN
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 |
------
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
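For example, semantic similarity between two sentences can be scored with the cosine similarity of their embeddings. A minimal sketch, reusing two of the widget sentences above as arbitrary examples:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Two example sentences taken from the widget above
emb1 = model.encode("Tendremos que optar por hacer una huelga para cobrar lo que queremos.", convert_to_tensor=True)
emb2 = model.encode("Queda descartada la huelga aunque no cobremos lo que queramos.", convert_to_tensor=True)

# Cosine similarity between the two sentence vectors
print(util.cos_sim(emb1, emb2))
```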
## Background
This model is a bilingual Spanish-English model trained according to instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as a teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model.
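A minimal sketch of this teacher-student setup, using the multilingual distillation utilities of sentence-transformers, is shown below. The file name, batch size and number of epochs are illustrative assumptions, not the actual training configuration:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: strong English bi-encoder; student: Spanish BERTIN encoder with mean pooling
teacher = SentenceTransformer('paraphrase-mpnet-base-v2')
word_emb = models.Transformer('bertin-project/bertin-roberta-base-spanish', max_seq_length=128)
pooling = models.Pooling(word_emb.get_word_embedding_dimension())
student = SentenceTransformer(modules=[word_emb, pooling])

# Parallel EN-ES pairs: the student learns to reproduce the teacher's embeddings
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data('parallel-sentences-en-es.tsv.gz')  # hypothetical local export of the training dataset

train_dataloader = DataLoader(train_data, shuffle=True, batch_size=64)
train_loss = losses.MSELoss(model=student)
student.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=1000)
```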
We developed this model during the [Hackathon 2022 NLP - Spanish](https://somosnlp.org/hackathon), organized by the hackathon-pln-es organization.
### Training data
We used the concatenation of multiple datasets with sentence pairs (EN-ES).
You can check out the dataset that was used during training: [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences)
| Dataset |
|--------------------------------------------------------|
| AllNLI - ES (SNLI + MultiNLI)|
| EuroParl |
| JW300 |
| News Commentary |
| Open Subtitles |
| TED 2020 |
| Tatoeba |
| WikiMatrix |
## Authors
- [Anibal Pérez](https://huggingface.co/Anarpego),
- [Emilio Tomás Ariza](https://huggingface.co/medardodt),
- [Lautaro Gesuelli Pinto](https://huggingface.co/lautaro)
- [Mauricio Mazuecos](https://huggingface.co/mmazuecos) |
itaihay/wav2vec_asr_swbd | 048f809e7b5e62c0677617af12ef2f6111bf992d | 2022-05-21T20:37:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | itaihay | null | itaihay/wav2vec_asr_swbd | 38 | null | transformers | 6,604 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec_asr_swbd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec_asr_swbd
This model is a fine-tuned version of [facebook/wav2vec2-large-robust-ft-swbd-300h](https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Wer: 0.5302
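The card does not yet include usage details; a minimal inference sketch with the Hugging Face pipeline API (the audio file name is a placeholder) could look like this:
```python
from transformers import pipeline

# Hypothetical usage sketch: transcribe a local audio file (16 kHz mono works best for wav2vec2)
asr = pipeline("automatic-speech-recognition", model="itaihay/wav2vec_asr_swbd")
print(asr("sample_call.wav")["text"])
```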
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
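As a rough illustration, the hyperparameters listed above map onto Hugging Face `TrainingArguments` along the following lines; this is a sketch, not the original training script, and the output directory is a placeholder:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec_asr_swbd",   # placeholder
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=20,  # 4 x 20 = total train batch size of 80
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    fp16=True,                       # Native AMP mixed precision
)
```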
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.5445 | 0.29 | 500 | 0.9114 | 0.6197 |
| 0.9397 | 0.58 | 1000 | 0.5057 | 0.5902 |
| 0.8557 | 0.86 | 1500 | 0.4465 | 0.6264 |
| 0.7716 | 1.15 | 2000 | 0.4182 | 0.5594 |
| 0.7659 | 1.44 | 2500 | 0.4111 | 0.7048 |
| 0.7406 | 1.73 | 3000 | 0.3927 | 0.5944 |
| 0.6857 | 2.02 | 3500 | 0.3852 | 0.7118 |
| 0.7113 | 2.31 | 4000 | 0.3775 | 0.5608 |
| 0.6804 | 2.59 | 4500 | 0.3885 | 0.5759 |
| 0.6654 | 2.88 | 5000 | 0.3703 | 0.7226 |
| 0.6569 | 3.17 | 5500 | 0.3688 | 0.5972 |
| 0.6335 | 3.46 | 6000 | 0.3661 | 0.7278 |
| 0.6309 | 3.75 | 6500 | 0.3579 | 0.6324 |
| 0.6231 | 4.03 | 7000 | 0.3620 | 0.5770 |
| 0.6171 | 4.32 | 7500 | 0.3640 | 0.5772 |
| 0.6191 | 4.61 | 8000 | 0.3553 | 0.6075 |
| 0.6142 | 4.9 | 8500 | 0.3543 | 0.6126 |
| 0.5905 | 5.19 | 9000 | 0.3601 | 0.6319 |
| 0.5846 | 5.48 | 9500 | 0.3429 | 0.7343 |
| 0.5874 | 5.76 | 10000 | 0.3429 | 0.5962 |
| 0.5768 | 6.05 | 10500 | 0.3381 | 0.7410 |
| 0.5783 | 6.34 | 11000 | 0.3391 | 0.5823 |
| 0.5835 | 6.63 | 11500 | 0.3447 | 0.5821 |
| 0.5817 | 6.92 | 12000 | 0.3314 | 0.6890 |
| 0.5459 | 7.2 | 12500 | 0.3363 | 0.5727 |
| 0.5575 | 7.49 | 13000 | 0.3363 | 0.7387 |
| 0.5505 | 7.78 | 13500 | 0.3368 | 0.5685 |
| 0.55 | 8.07 | 14000 | 0.3330 | 0.5587 |
| 0.5523 | 8.36 | 14500 | 0.3338 | 0.5484 |
| 0.5116 | 8.65 | 15000 | 0.3350 | 0.4351 |
| 0.5263 | 8.93 | 15500 | 0.3254 | 0.6235 |
| 0.5265 | 9.22 | 16000 | 0.3297 | 0.6207 |
| 0.5265 | 9.51 | 16500 | 0.3279 | 0.6143 |
| 0.5172 | 9.8 | 17000 | 0.3260 | 0.5800 |
| 0.5028 | 10.09 | 17500 | 0.3259 | 0.5774 |
| 0.5062 | 10.37 | 18000 | 0.3259 | 0.5552 |
| 0.5112 | 10.66 | 18500 | 0.3201 | 0.6625 |
| 0.5149 | 10.95 | 19000 | 0.3184 | 0.6865 |
| 0.4939 | 11.24 | 19500 | 0.3152 | 0.6116 |
| 0.5065 | 11.53 | 20000 | 0.3172 | 0.5246 |
| 0.5129 | 11.82 | 20500 | 0.3129 | 0.5908 |
| 0.4909 | 12.1 | 21000 | 0.3152 | 0.6075 |
| 0.4865 | 12.39 | 21500 | 0.3160 | 0.5037 |
| 0.4805 | 12.68 | 22000 | 0.3139 | 0.5458 |
| 0.4691 | 12.97 | 22500 | 0.3225 | 0.5815 |
| 0.4534 | 13.26 | 23000 | 0.3168 | 0.5614 |
| 0.4661 | 13.54 | 23500 | 0.3135 | 0.6053 |
| 0.4636 | 13.83 | 24000 | 0.3120 | 0.5142 |
| 0.4554 | 14.12 | 24500 | 0.3127 | 0.5552 |
| 0.4602 | 14.41 | 25000 | 0.3117 | 0.5562 |
| 0.4521 | 14.7 | 25500 | 0.3106 | 0.4995 |
| 0.4369 | 14.99 | 26000 | 0.3100 | 0.5663 |
| 0.4249 | 15.27 | 26500 | 0.3110 | 0.5262 |
| 0.4321 | 15.56 | 27000 | 0.3106 | 0.5183 |
| 0.4293 | 15.85 | 27500 | 0.3091 | 0.5311 |
| 0.4537 | 16.14 | 28000 | 0.3134 | 0.4986 |
| 0.4258 | 16.43 | 28500 | 0.3138 | 0.4487 |
| 0.4347 | 16.71 | 29000 | 0.3091 | 0.5011 |
| 0.4615 | 17.0 | 29500 | 0.3068 | 0.5616 |
| 0.4163 | 17.29 | 30000 | 0.3115 | 0.5426 |
| 0.4074 | 17.58 | 30500 | 0.3079 | 0.5341 |
| 0.4121 | 17.87 | 31000 | 0.3047 | 0.5619 |
| 0.4219 | 18.16 | 31500 | 0.3085 | 0.5051 |
| 0.4049 | 18.44 | 32000 | 0.3084 | 0.5116 |
| 0.4119 | 18.73 | 32500 | 0.3071 | 0.5028 |
| 0.4129 | 19.02 | 33000 | 0.3064 | 0.5030 |
| 0.4143 | 19.31 | 33500 | 0.3040 | 0.5086 |
| 0.4013 | 19.6 | 34000 | 0.3057 | 0.5271 |
| 0.4162 | 19.88 | 34500 | 0.3052 | 0.5302 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-big-zls-en | 7aab35f8931e35023d23bd43fec94e189bf8c073 | 2022-06-01T12:58:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"bs_Latn",
"en",
"hr",
"mk",
"sh",
"sl",
"sr_Cyrl",
"sr_Latn",
"zls",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zls-en | 38 | null | transformers | 6,605 | ---
language:
- bg
- bs_Latn
- en
- hr
- mk
- sh
- sl
- sr_Cyrl
- sr_Latn
- zls
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zls-en
results:
- task:
name: Translation bul-eng
type: translation
args: bul-eng
dataset:
name: flores101-devtest
type: flores_101
args: bul eng devtest
metrics:
- name: BLEU
type: bleu
value: 42.0
- task:
name: Translation hrv-eng
type: translation
args: hrv-eng
dataset:
name: flores101-devtest
type: flores_101
args: hrv eng devtest
metrics:
- name: BLEU
type: bleu
value: 37.1
- task:
name: Translation mkd-eng
type: translation
args: mkd-eng
dataset:
name: flores101-devtest
type: flores_101
args: mkd eng devtest
metrics:
- name: BLEU
type: bleu
value: 43.2
- task:
name: Translation slv-eng
type: translation
args: slv-eng
dataset:
name: flores101-devtest
type: flores_101
args: slv eng devtest
metrics:
- name: BLEU
type: bleu
value: 35.2
- task:
name: Translation srp_Cyrl-eng
type: translation
args: srp_Cyrl-eng
dataset:
name: flores101-devtest
type: flores_101
args: srp_Cyrl eng devtest
metrics:
- name: BLEU
type: bleu
value: 36.8
- task:
name: Translation bos_Latn-eng
type: translation
args: bos_Latn-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bos_Latn-eng
metrics:
- name: BLEU
type: bleu
value: 66.5
- task:
name: Translation bul-eng
type: translation
args: bul-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bul-eng
metrics:
- name: BLEU
type: bleu
value: 59.3
- task:
name: Translation hbs-eng
type: translation
args: hbs-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hbs-eng
metrics:
- name: BLEU
type: bleu
value: 57.3
- task:
name: Translation hrv-eng
type: translation
args: hrv-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hrv-eng
metrics:
- name: BLEU
type: bleu
value: 59.2
- task:
name: Translation mkd-eng
type: translation
args: mkd-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: mkd-eng
metrics:
- name: BLEU
type: bleu
value: 57.4
- task:
name: Translation slv-eng
type: translation
args: slv-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: slv-eng
metrics:
- name: BLEU
type: bleu
value: 23.5
- task:
name: Translation srp_Cyrl-eng
type: translation
args: srp_Cyrl-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Cyrl-eng
metrics:
- name: BLEU
type: bleu
value: 47.0
- task:
name: Translation srp_Latn-eng
type: translation
args: srp_Latn-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Latn-eng
metrics:
- name: BLEU
type: bleu
value: 58.5
---
# opus-mt-tc-big-zls-en
Neural machine translation model for translating from South Slavic languages (zls) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): bos_Latn bul hbs hrv mkd slv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opusTCv20210807+bt_transformer-big_2022-03-17.zip)
* more information on released models: [OPUS-MT zls-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Да не би случайно Том да остави Мери да кара колата?",
"Какво е времето днес?"
]
model_name = "pytorch-models/opus-mt-tc-big-zls-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Did Tom just let Mary drive the car?
# What's the weather like today?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zls-en")
print(pipe("Да не би случайно Том да остави Мери да кара колата?"))
# expected output: Did Tom just let Mary drive the car?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.79339 | 66.5 | 301 | 1826 |
| bul-eng | tatoeba-test-v2021-08-07 | 0.72656 | 59.3 | 10000 | 71872 |
| hbs-eng | tatoeba-test-v2021-08-07 | 0.71783 | 57.3 | 10017 | 68934 |
| hrv-eng | tatoeba-test-v2021-08-07 | 0.74066 | 59.2 | 1480 | 10620 |
| mkd-eng | tatoeba-test-v2021-08-07 | 0.70043 | 57.4 | 10010 | 65667 |
| slv-eng | tatoeba-test-v2021-08-07 | 0.39534 | 23.5 | 2495 | 16940 |
| srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.67628 | 47.0 | 1580 | 10181 |
| srp_Latn-eng | tatoeba-test-v2021-08-07 | 0.71878 | 58.5 | 6656 | 46307 |
| bul-eng | flores101-devtest | 0.67375 | 42.0 | 1012 | 24721 |
| hrv-eng | flores101-devtest | 0.63914 | 37.1 | 1012 | 24721 |
| mkd-eng | flores101-devtest | 0.67444 | 43.2 | 1012 | 24721 |
| slv-eng | flores101-devtest | 0.62087 | 35.2 | 1012 | 24721 |
| srp_Cyrl-eng | flores101-devtest | 0.67810 | 36.8 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 20:12:26 EEST 2022
* port machine: LM0-400-22516.local
|
NTUYG/ComFormer | f9d442b8ba969018a873c406709d33caf64ed394 | 2022-05-09T10:55:14.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:DeepCom",
"arxiv:2107.03644",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | NTUYG | null | NTUYG/ComFormer | 38 | null | transformers | 6,606 | ---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- DeepCom
metrics:
- bleu
---
# How To Use
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("NTUYG/ComFormer")
tokenizer = BartTokenizer.from_pretrained("NTUYG/ComFormer")
code = '''
public static void copyFile( File in, File out )
throws IOException
{
FileChannel inChannel = new FileInputStream( in ).getChannel();
FileChannel outChannel = new FileOutputStream( out ).getChannel();
try
{
// inChannel.transferTo(0, inChannel.size(), outChannel); // original -- apparently has trouble copying large files on Windows
// magic number for Windows, 64Mb - 32Kb)
int maxCount = (64 * 1024 * 1024) - (32 * 1024);
long size = inChannel.size();
long position = 0;
while ( position < size )
{
position += inChannel.transferTo( position, maxCount, outChannel );
}
}
finally
{
if ( inChannel != null )
{
inChannel.close();
}
if ( outChannel != null )
{
outChannel.close();
}
}
}
'''
code_seq, sbt = utils.transformer(code)  # the `utils` module can be found in https://github.com/NTDXYG/ComFormer
input_text = code_seq + sbt
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=256, truncation=True)
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=30,
min_length=2,
num_beams=5,
)
comment = tokenizer.decode(summary_text_ids[0], skip_special_tokens=True)
print(comment)
```
# BibTeX entry and citation info
```
@misc{yang2021comformer,
title={ComFormer: Code Comment Generation via Transformer and Fusion Method-based Hybrid Code Representation},
author={Guang Yang and Xiang Chen and Jinxin Cao and Shuyuan Xu and Zhanqi Cui and Chi Yu and Ke Liu},
year={2021},
eprint={2107.03644},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
|
BitanBiswas/wav2vec2-base-timit-demo-google-colab | a5ebbb51fbcb749c3c1f183ed90e7fee8a99535c | 2022-05-14T07:46:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | BitanBiswas | null | BitanBiswas/wav2vec2-base-timit-demo-google-colab | 38 | null | transformers | 6,607 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4770
- Wer: 0.3360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6401 | 1.0 | 500 | 2.4138 | 1.0 |
| 0.9717 | 2.01 | 1000 | 0.6175 | 0.5531 |
| 0.4393 | 3.01 | 1500 | 0.4309 | 0.4414 |
| 0.2976 | 4.02 | 2000 | 0.4167 | 0.4162 |
| 0.2345 | 5.02 | 2500 | 0.4273 | 0.3927 |
| 0.1919 | 6.02 | 3000 | 0.3983 | 0.3886 |
| 0.1565 | 7.03 | 3500 | 0.5581 | 0.3928 |
| 0.1439 | 8.03 | 4000 | 0.4509 | 0.3821 |
| 0.1266 | 9.04 | 4500 | 0.4733 | 0.3774 |
| 0.1091 | 10.04 | 5000 | 0.4755 | 0.3808 |
| 0.1001 | 11.04 | 5500 | 0.4435 | 0.3689 |
| 0.0911 | 12.05 | 6000 | 0.4962 | 0.3897 |
| 0.0813 | 13.05 | 6500 | 0.5031 | 0.3622 |
| 0.0729 | 14.06 | 7000 | 0.4853 | 0.3597 |
| 0.0651 | 15.06 | 7500 | 0.5180 | 0.3577 |
| 0.0608 | 16.06 | 8000 | 0.5251 | 0.3630 |
| 0.0592 | 17.07 | 8500 | 0.4915 | 0.3591 |
| 0.0577 | 18.07 | 9000 | 0.4724 | 0.3656 |
| 0.0463 | 19.08 | 9500 | 0.4536 | 0.3546 |
| 0.0475 | 20.08 | 10000 | 0.5107 | 0.3546 |
| 0.0464 | 21.08 | 10500 | 0.4829 | 0.3464 |
| 0.0369 | 22.09 | 11000 | 0.4844 | 0.3448 |
| 0.0327 | 23.09 | 11500 | 0.4865 | 0.3437 |
| 0.0337 | 24.1 | 12000 | 0.4825 | 0.3488 |
| 0.0271 | 25.1 | 12500 | 0.4824 | 0.3445 |
| 0.0236 | 26.1 | 13000 | 0.4747 | 0.3397 |
| 0.0243 | 27.11 | 13500 | 0.4840 | 0.3397 |
| 0.0226 | 28.11 | 14000 | 0.4716 | 0.3354 |
| 0.0235 | 29.12 | 14500 | 0.4770 | 0.3360 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Anjoe/german-poetry-gpt2 | 62ceddc7716eabfd104604277bc18fd10f6ffc4f | 2022-06-08T14:59:08.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Anjoe | null | Anjoe/german-poetry-gpt2 | 38 | null | transformers | 6,608 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: german-poetry-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-poetry-gpt2
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.8196
- eval_runtime: 43.8543
- eval_samples_per_second: 86.993
- eval_steps_per_second: 5.45
- epoch: 9.0
- step: 11520
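The card does not include a usage example yet; a minimal text-generation sketch (the German prompt is an arbitrary illustration) could look like this:
```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned German poetry model
generator = pipeline("text-generation", model="Anjoe/german-poetry-gpt2")
print(generator("Der Mond steigt über", max_length=40, num_return_sequences=1))
```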
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 22
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
alibaba-pai/pai-bert-base-zh | 8960eef5606b0034b3b21afc5d0534d5ec491539 | 2022-06-10T02:35:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2205.00258",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | alibaba-pai | null | alibaba-pai/pai-bert-base-zh | 38 | 1 | transformers | 6,609 | ---
language: zh
pipeline_tag: fill-mask
tags:
- bert
license: apache-2.0
---
## Alibaba PAI BERT Base Chinese
This project provides Chinese pre-trained language models and various types of NLP tools. The models are pre-trained on the large-scale corpora hosted by the Alibaba PAI team. It is developed based on the EasyNLP framework (https://github.com/alibaba/EasyNLP).
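A minimal fill-mask sketch with the Hugging Face pipeline (the Chinese example sentence is an arbitrary illustration) could look like this:
```python
from transformers import pipeline

# Hypothetical usage sketch: predict the masked character in a Chinese sentence
fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-bert-base-zh")
print(fill_mask("中国的首都是北[MASK]。"))
```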
## Citation
If you find the resource is useful, please cite the following paper in your work:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
publisher = {arXiv},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
``` |
Tonjk/wangchanberta-base-att-spm-uncased | 993b181e83d9eb76ee45aae2c077fcbc4ef79c88 | 2022-07-09T18:36:41.000Z | [
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Tonjk | null | Tonjk/wangchanberta-base-att-spm-uncased | 38 | null | transformers | 6,610 | ---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1554 | 1.0 | 5564 | 0.0443 |
| 0.0403 | 2.0 | 11128 | 0.0412 |
| 0.0373 | 3.0 | 16692 | 0.0515 |
| 0.0419 | 4.0 | 22256 | 0.0515 |
| 0.0416 | 5.0 | 27820 | 0.0577 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Lemswasabi/xlsr-53-tuudle-14h-with-lm-4g | 9d5276ad811f3e912a991dacdb5d65405bac8064 | 2022-07-10T18:24:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Lemswasabi | null | Lemswasabi/xlsr-53-tuudle-14h-with-lm-4g | 38 | null | transformers | 6,611 | Entry not found |
matanbn/smsPhishing | 2a6a11a4983b899d12a57f320a973e210dc74ed4 | 2022-07-18T14:17:25.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | matanbn | null | matanbn/smsPhishing | 38 | null | transformers | 6,612 | Entry not found |
naver-clova-ix/donut-base-finetuned-zhtrainticket | 2d3fed6b7075870ec620a35d74efd2920636f352 | 2022-07-19T14:57:51.000Z | [
"pytorch",
"donut",
"transformers",
"license:mit"
] | null | false | naver-clova-ix | null | naver-clova-ix/donut-base-finetuned-zhtrainticket | 38 | null | transformers | 6,613 | ---
license: mit
---
|
TeaTM/DialoGPT-large-bushcat | 08d0fc23e3995cbafefb3b860316235ecf525b2a | 2022-07-21T17:40:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TeaTM | null | TeaTM/DialoGPT-large-bushcat | 38 | null | transformers | 6,614 | ---
tags:
- conversational
---
# Bushcat DialoGPT-Large Model |
51la5/XSUM-keyphrase-gen | 3759bc51223abaff7b5738690f18298883f74638 | 2022-07-22T10:26:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | 51la5 | null | 51la5/XSUM-keyphrase-gen | 38 | null | transformers | 6,615 | Entry not found |
IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | 374c3b6559b0ff8df0f86978d7c24063148049c9 | 2022-07-27T06:07:26.000Z | [
"pytorch",
"zh",
"transformers",
"ZEN",
"chinese",
"license:apache-2.0"
] | null | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | 38 | null | transformers | 6,616 | ---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN1-224M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Erlangshen-ZEN1-224M-Chinese is an open-source Chinese pre-trained language model from the ZEN team, released as part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). IDEA-CCNL builds on the [source code of ZEN1.0](https://github.com/sinovation/ZEN) and the [paper of ZEN1.0](https://aclanthology.org/2020.findings-emnlp.425/), and provides ZEN1.0 results and code samples for Chinese classification and extraction tasks. In the future, we will work with the ZEN team to explore optimization directions for the pre-trained model and continue to improve its performance on classification and extraction tasks.
## Usage
The ZEN1 architecture is not available in [Transformers](https://github.com/huggingface/transformers), so you need to clone [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) to obtain it:
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
## Load model
```python
from fengshen.models.zen1.ngram_utils import ZenNgramDict
from fengshen.models.zen1.tokenization import BertTokenizer
from fengshen.models.zen1.modeling import ZenForSequenceClassification, ZenForTokenClassification

pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese'

tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model = ZenForSequenceClassification.from_pretrained(pretrain_path)
# model = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
You can get classification and extraction examples below.
[classification example on fengshen]()
[extraction example on fengshen]()
## Evaluation
### Classification
| model | dataset | Acc |
| ---- | ---- | ---- |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | Tnews | 56.82% |
### Extraction
| model | dataset | F1 |
| ---- | ---- | ---- |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | OntoNote4.0 | 80.8% |
## Citation
If you find this resource useful, please cite the following paper in your work:
```
@inproceedings{diao-etal-2020-zen,
title = "ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations",
author = "Diao, Shizhe and Bai, Jiaxin and Song, Yan and Zhang, Tong and Wang, Yonggang",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
pages = "4729--4740",
}
``` |
Finnish-NLP/gpt2-medium-finnish | e34f06fc20e97d3f07125e176e8d5a965cb522ed | 2022-06-13T16:14:13.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"transformers",
"finnish",
"license:apache-2.0"
] | text-generation | false | Finnish-NLP | null | Finnish-NLP/gpt2-medium-finnish | 37 | 2 | transformers | 6,617 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- gpt2
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
widget:
- text: "Tekstiä tuottava tekoäly on"
---
# GPT-2 medium for Finnish
Pretrained GPT-2 medium model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
**Note**: this model is the 345M parameter variant, following Hugging Face's [GPT-2-medium config](https://huggingface.co/gpt2-medium), not the famous 1.5B parameter variant released by OpenAI. We also have a bigger 774M parameter variant, [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish), available, which performs better than this model.
## Model description
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-medium-finnish')
>>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5)
[{'generated_text': 'Tekstiä tuottava tekoäly on tullut ihmisten arkeen viime vuosina. Se auttaa hahmottamaan ja tulkitsemaan monimutkaisia kokonaisuuksia ja ilmiöitä, joita ihmiset tekevät esimerkiksi ruokakaupassa'},
{'generated_text': 'Tekstiä tuottava tekoäly on jo ottanut haltuunsa myös ihmisten käyttämiä sovelluksia ja esimerkiksi pankkipalveluita. Sen vuoksi tekoäly on tärkeä kumppani etenkin yritysten liiketoiminnan kehittämisessä.-'},
{'generated_text': 'Tekstiä tuottava tekoäly on tekoälylle luonnollinen valinta, sillä sen avulla voi kommunikoida ihmisten kanssa hyvin pitkälle samalla tavalla kuin tietokoneiden kanssa. Se on kehittynyt muun'},
{'generated_text': 'Tekstiä tuottava tekoäly on ihmisen kehittämä tekoäly, jota ei vielä ole pystytty rakentamaan. Tekoäly kykenee toimimaan esimerkiksi matemaattisissa, tilastollisissa ja sosiaalisissa'},
{'generated_text': 'Tekstiä tuottava tekoäly on jo niin iso juttu ettei sitä kannata rajoittaakaan. Ja jos se saadaan käyttöön, niin se voi jo pian syrjäyttää perinteisen'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-medium-finnish')
model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-medium-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-medium-finnish')
model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-medium-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Training data
This Finnish GPT-2 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.
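For illustration only, the described schedule (AdamW, learning rate warmup, then cosine decay over the 360k steps) roughly corresponds to the following PyTorch-style sketch; the actual pretraining used Flax/JAX on TPU, and the model below is just a placeholder stand-in:
```python
import torch
from transformers import GPT2LMHeadModel, get_cosine_schedule_with_warmup

model = GPT2LMHeadModel.from_pretrained("Finnish-NLP/gpt2-medium-finnish")  # placeholder stand-in
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=4_000, num_training_steps=360_000
)
```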
## Evaluation results
Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller [gpt2-finnish](https://huggingface.co/Finnish-NLP/gpt2-finnish) model variant but loses to our bigger [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish) model.
| | Perplexity |
|------------------------------------------|------------|
|Finnish-NLP/gpt2-medium-finnish |34.08 |
|Finnish-NLP/gpt2-finnish |44.19 |
|Finnish-NLP/gpt2-large-finnish |**30.74** |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Helsinki-NLP/opus-mt-en-kg | 13e60125596c4f07a3b02dcd38caff5f334bec4c | 2021-09-09T21:36:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"kg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-kg | 37 | null | transformers | 6,618 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kg
* source languages: en
* target languages: kg
* OPUS readme: [en-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kg | 39.6 | 0.613 |
|
Helsinki-NLP/opus-mt-en-nso | 72a24d292415856a91b71ec1fbf9bc37e4bb691d | 2021-09-09T21:38:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"nso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-nso | 37 | null | transformers | 6,619 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-nso
* source languages: en
* target languages: nso
* OPUS readme: [en-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.nso | 52.2 | 0.684 |
|
Helsinki-NLP/opus-mt-en-zlw | 1a4faff7e8d9673adc6517e1e54a7d2938d35e23 | 2021-01-18T08:19:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"pl",
"cs",
"zlw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-zlw | 37 | null | transformers | 6,620 | ---
language:
- en
- pl
- cs
- zlw
tags:
- translation
license: apache-2.0
---
### eng-zlw
* source group: English
* target group: West Slavic languages
* OPUS readme: [eng-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md)
* model: transformer
* source language(s): eng
* target language(s): ces csb_Latn dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.eval.txt)
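Because a sentence-initial target-language token is required, a minimal usage sketch (the Polish target and the English example sentence are arbitrary illustrations) could look like this:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zlw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target language (here Polish)
src_text = [">>pol<< How are you today?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```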
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engces.eng.ces | 20.6 | 0.488 |
| news-test2008-engces.eng.ces | 18.3 | 0.466 |
| newstest2009-engces.eng.ces | 19.8 | 0.483 |
| newstest2010-engces.eng.ces | 19.8 | 0.486 |
| newstest2011-engces.eng.ces | 20.6 | 0.489 |
| newstest2012-engces.eng.ces | 18.6 | 0.464 |
| newstest2013-engces.eng.ces | 22.3 | 0.495 |
| newstest2015-encs-engces.eng.ces | 21.7 | 0.502 |
| newstest2016-encs-engces.eng.ces | 24.5 | 0.521 |
| newstest2017-encs-engces.eng.ces | 20.1 | 0.480 |
| newstest2018-encs-engces.eng.ces | 19.9 | 0.483 |
| newstest2019-encs-engces.eng.ces | 21.2 | 0.490 |
| Tatoeba-test.eng-ces.eng.ces | 43.7 | 0.632 |
| Tatoeba-test.eng-csb.eng.csb | 1.2 | 0.188 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.5 | 0.167 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.199 |
| Tatoeba-test.eng.multi | 42.8 | 0.632 |
| Tatoeba-test.eng-pol.eng.pol | 43.2 | 0.641 |
### System Info:
- hf_name: eng-zlw
- source_languages: eng
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pl', 'cs', 'zlw']
- src_constituents: {'eng'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zlw
- short_pair: en-zlw
- chrF2_score: 0.632
- bleu: 42.8
- brevity_penalty: 0.973
- ref_len: 65397.0
- src_name: English
- tgt_name: West Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: eng-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-hil-en | de7ad57b834519e8ee9f3eb9a96b2a546709acf9 | 2021-09-09T22:10:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hil",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hil-en | 37 | null | transformers | 6,621 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hil-en
* source languages: hil
* target languages: en
* OPUS readme: [hil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.en | 49.2 | 0.638 |
|
Helsinki-NLP/opus-mt-no-fr | 00ad8c45b62b5a7e8815dbdb02d1c84edb0e551e | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"no",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-no-fr | 37 | null | transformers | 6,622 | ---
language:
- no
- fr
tags:
- translation
license: apache-2.0
---
### nor-fra
* source group: Norwegian
* target group: French
* OPUS readme: [nor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fra | 39.1 | 0.578 |
### System Info:
- hf_name: nor-fra
- source_languages: nor
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fr']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fra
- short_pair: no-fr
- chrF2_score: 0.578
- bleu: 39.1
- brevity_penalty: 0.987
- ref_len: 3205.0
- src_name: Norwegian
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fr
- prefer_old: False
- long_pair: nor-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sv-es | 3c50d646c512d4e553ac53ae0a584291eeaee659 | 2021-09-10T14:06:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-es | 37 | null | transformers | 6,623 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-es
* source languages: sv
* target languages: es
* OPUS readme: [sv-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-es/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-es/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-es/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.es | 52.1 | 0.683 |
|
Helsinki-NLP/opus-mt-sv-fi | 933c6ce27c572414033f42cf5c898334071c497a | 2021-09-10T14:06:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-fi | 37 | null | transformers | 6,624 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-fi
* source languages: sv
* target languages: fi
* OPUS readme: [sv-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-fi/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-04-07.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.zip)
* test set translations: [opus+bt-2020-04-07.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.test.txt)
* test set scores: [opus+bt-2020-04-07.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| fiskmo_testset.sv.fi | 26.9 | 0.623 |
| Tatoeba.sv.fi | 45.2 | 0.678 |
|
IDEA-CCNL/Zhouwenwang-Unified-110M | 87a08517e6f355808aee9d1faad6cd5399b75977 | 2022-04-12T01:59:26.000Z | [
"pytorch",
"megatron-bert",
"zh",
"transformers",
"license:apache-2.0"
] | null | false | IDEA-CCNL | null | IDEA-CCNL/Zhouwenwang-Unified-110M | 37 | 2 | transformers | 6,625 | ---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Zhouwenwang-Unified-110M model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Zhouwenwang-Unified-110M adopts a new unified structure and was jointly developed by IDEA-CCNL and Zhuiyi Technology. During pre-training, the model treats LM (Language Model) and MLM (Masked Language Model) tasks uniformly and adds rotary position encoding, so that it can both generate and understand text. Zhouwenwang-Unified-110M is the largest model for LM and MLM tasks in the Chinese field. It will continue to be optimized in the direction of model scale, knowledge integration, and supervision task assistance.
## Usage
The Zhouwenwang-Unified-110M architecture is not available in [Transformers](https://github.com/huggingface/transformers), so you need to clone [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) to obtain it:
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### Load Model
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
```
### Generate task
You can use Zhouwenwang-Unified-110M to continue writing from a prompt:
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = '清华大学位于'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
for i in range(max_length):
encode = torch.tensor(
[[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
logits = model(encode)[0]
logits = torch.nn.functional.linear(
logits, model.embeddings.word_embeddings.weight)
logits = torch.nn.functional.softmax(
logits, dim=-1).cpu().detach().numpy()[0]
sentence = sentence + \
tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
if sentence[-1] == '。':
break
print(sentence)
```
## Citation
If you find this resource useful, please cite the following in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Laeyoung/BTS-comments-generator | 5f7d12030bd3e1cfdd4c31eff1b4af79dfa5bda8 | 2021-06-08T07:59:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Laeyoung | null | Laeyoung/BTS-comments-generator | 37 | null | transformers | 6,626 | ### Model information
* Fine tuning dataset: https://www.kaggle.com/seungguini/bts-youtube-comments
* Base model: GPT2 Small
* Epoch: 5
* API page: [Ainize](https://ainize.ai/teachable-ainize/gpt2-train?branch=train/cv695m9g40av0cdabuqp)
* Demo page: [End-point](https://kubecon-tabtab-ainize-team.endpoint.ainize.ai/?modelUrl=https://train-cv695m9g40av0cdabuqp-gpt2-train-teachable-ainize.endpoint.ainize.ai/predictions/gpt-2-en-small-finetune)
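For local inference without the API, the sketch below (not part of the original card) assumes the checkpoint loads with the standard GPT-2 text-generation pipeline; the prompt is illustrative only.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Laeyoung/BTS-comments-generator")
# Sample a few BTS-style YouTube comments from a short prompt
for output in generator("This performance is", max_length=40, num_return_sequences=3, do_sample=True):
    print(output["generated_text"])
```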
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune one and get an API for the model, free of charge.
* Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
* Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
Michael711/feinschwarz | d7dcef9f70eaeffd4b09cc9737c9d1fc542b4220 | 2021-10-27T18:28:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"de",
"license:mit",
"model-index"
] | text-generation | false | Michael711 | null | Michael711/feinschwarz | 37 | null | transformers | 6,627 | ---
license: mit
tags:
- generated_from_trainer
- de
model-index:
- name: feinesblack
results: []
---
# feinschwarz
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2). The dataset was compiled from all texts of https://www.feinschwarz.net (as of October 2021). The homepage gathers essayistic texts on theological topics.
The model will be used to explore the challenges of text-generating AI for theology with a hands on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not?
The model is a very first attempt and in its current version certainly not yet a danger for academic theology 🤓
# Using the model
You can create text with the model using this code:
```python
from transformers import pipeline
pipe = pipeline('text-generation', model="Michael711/feinschwarz",
tokenizer="Michael711/feinschwarz")
text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"]
print(text)
```
Have fun theologizing! |
RabotaRu/HRBert-mini | 5a941ea031c513dec885ae38829963c0899066e5 | 2021-12-03T10:55:36.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ru",
"en",
"be",
"bg",
"uk",
"ro",
"kz",
"tg",
"tat",
"sv",
"sl",
"sr",
"uz",
"es",
"fi",
"transformers",
"russian",
"pretraining",
"embeddings",
"masked-lm",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | RabotaRu | null | RabotaRu/HRBert-mini | 37 | 3 | transformers | 6,628 | ---
language: ["ru", "en", "be", "bg", "uk", "ro", "kz", "tg", "tat", "sv", "sl", "sr", "uz", "es", "fi"]
tags:
- russian
- fill-mask
- pretraining
- embeddings
- masked-lm
license: mit
widget:
- text: "<mask> на склад"
---
!!!
At the moment the model is distilled; a version from one of the first checkpoints is available for download.
We plan to post the full model in the next few days.
!!!
This is a distilled HRBert model for the masked language modeling (MLM) task.
Masked-token predictions can be produced as follows:
```python
# pip install transformers
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model='RabotaRu/HRBert-mini',
tokenizer='RabotaRu/HRBert-mini'
)
fill_mask('<mask> на склад')
``` |
Roberta55/deberta-base-mnli-finetuned-cola | c3d249fc84118ce311e1f1c7d110b212caf13442 | 2021-10-21T09:07:56.000Z | [
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Roberta55 | null | Roberta55/deberta-base-mnli-finetuned-cola | 37 | null | transformers | 6,629 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: deberta-base-mnli-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6281691768918801
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-mnli-finetuned-cola
This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8205
- Matthews Correlation: 0.6282
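For reference, here is a minimal inference sketch (not part of the auto-generated card), assuming the checkpoint is used for CoLA-style acceptability classification with the standard `transformers` pipeline; the returned label names depend on the checkpoint's `id2label` config and should be checked.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Roberta55/deberta-base-mnli-finetuned-cola")
# CoLA is a binary acceptability task; the pipeline returns the predicted label and score
print(classifier("The book was written by three authors."))
print(classifier("The book was wrote by three author."))
```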
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4713 | 1.0 | 535 | 0.5110 | 0.5797 |
| 0.2678 | 2.0 | 1070 | 0.6648 | 0.5154 |
| 0.1811 | 3.0 | 1605 | 0.6681 | 0.6121 |
| 0.113 | 4.0 | 2140 | 0.8205 | 0.6282 |
| 0.0831 | 5.0 | 2675 | 1.0413 | 0.6057 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base | fa3b072f1e5cc0de5addaad4cdcc22d7eb175ab5 | 2021-05-28T05:45:41.000Z | [
"pytorch",
"roberta",
"arxiv:2104.08821",
"transformers"
] | null | false | VoVanPhuc | null | VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base | 37 | null | transformers | 6,630 |
#### Table of contents
1. [Introduction](#introduction)
2. [Pretrain model](#models)
3. [Using SimeCSE_Vietnamese with `sentences-transformers`](#sentences-transformers)
- [Installation](#install1)
- [Example usage](#usage1)
4. [Using SimeCSE_Vietnamese with `transformers`](#transformers)
- [Installation](#install2)
- [Example usage](#usage2)
# <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese
Pre-trained SimeCSE_Vietnamese models are the state of the art for Vietnamese sentence embeddings:
- The SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821), which optimizes the pre-training procedure for more robust performance.
- SimeCSE_Vietnamese encodes input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/).
- SimeCSE_Vietnamese works with both unlabeled and labeled data.
## Pre-trained models <a name="models"></a>
Model | #params | Arch.
---|---|---
[`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base
[`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base
## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `sentences-transformers`
### Installation <a name="install1"></a>
- Install `sentence-transformers`:
- `pip install -U sentence-transformers`
- Install `pyvi` to word segment:
- `pip install pyvi`
### Example usage <a name="usage1"></a>
```python
from sentence_transformers import SentenceTransformer
from pyvi.ViTokenizer import tokenize
model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base')
sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
'20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
'Thái Lan thua giao hữu trước vòng loại World Cup.',
'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
'Bắn chết người trong cuộc rượt đuổi trên sông.'
]
sentences = [tokenize(sentence) for sentence in sentences]
embeddings = model.encode(sentences)
```
## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `transformers`
### Installation <a name="install2"></a>
- Install `transformers`:
- `pip install -U transformers`
- Install `pyvi` to word segment:
- `pip install pyvi`
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
from pyvi.ViTokenizer import tokenize
PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
'20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
'Thái Lan thua giao hữu trước vòng loại World Cup.',
'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
'Bắn chết người trong cuộc rượt đuổi trên sông.'
]
sentences = [tokenize(sentence) for sentence in sentences]
inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```
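Once embeddings are computed, they can be compared with cosine similarity. The snippet below is an illustrative addition (not part of the original instructions) and assumes `embeddings` and `sentences` come from the `transformers` example above.

```python
import torch.nn.functional as F

# Cosine similarity between the first sentence and every other sentence in the batch
normalized = F.normalize(embeddings, p=2, dim=1)
scores = normalized[0] @ normalized[1:].T
for sentence, score in zip(sentences[1:], scores.tolist()):
    print(f"{score:.3f}\t{sentence}")
```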
## Quick Start
[Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing)
## Citation
@article{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
journal={arXiv preprint arXiv:2104.08821},
year={2021}
}
@inproceedings{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
year = {2020},
pages = {1037--1042}
}
|
arvalinno/distilbert-base-uncased-finetuned-indosquad-v2 | 421ce963a1c39e1be5f7706b429619cb7603a05e | 2021-11-21T04:15:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | arvalinno | null | arvalinno/distilbert-base-uncased-finetuned-indosquad-v2 | 37 | null | transformers | 6,631 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-indosquad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-indosquad-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6650
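A minimal usage sketch (not part of the auto-generated card), assuming this checkpoint is intended for extractive question answering on Indonesian SQuAD-style data; the question and context below are illustrative only.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="arvalinno/distilbert-base-uncased-finetuned-indosquad-v2")
result = qa(
    question="Di mana kantor pusat perusahaan itu berada?",
    context="Perusahaan itu didirikan pada tahun 2001 dan kantor pusatnya berada di Jakarta.",
)
print(result["answer"], result["score"])
```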
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9015 | 1.0 | 9676 | 1.5706 |
| 1.6438 | 2.0 | 19352 | 1.5926 |
| 1.4714 | 3.0 | 29028 | 1.5253 |
| 1.3486 | 4.0 | 38704 | 1.6650 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
castorini/duot5-3b-msmarco | 72c531606da1a625c9060d3e8ac1cf2157c7dcf3 | 2021-05-28T11:51:36.000Z | [
"pytorch",
"t5",
"feature-extraction",
"arxiv:2101.05667",
"transformers"
] | feature-extraction | false | castorini | null | castorini/duot5-3b-msmarco | 37 | null | transformers | 6,632 | This model is a T5-3B reranker, initialized with our pointwise ranker, [castorini/monot5-3b-msmarco](https://huggingface.co/castorini/monot5-3b-msmarco), and finetuned on the MS MARCO passage dataset for 50K steps (or 5 epochs) on the pairwise reranking task.
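As a rough sketch of what pairwise reranking looks like (not from the original card): duoT5 scores an ordered pair of candidate passages for a query and emits a `true`/`false` token. The prompt template below follows the Expando-Mono-Duo paper's description and is an assumption here, as is reusing the standard T5 tokenizer; for real use, rely on the `pygaggle` reranking classes instead.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-3b")  # assumes the reranker reuses the standard T5 vocabulary
model = T5ForConditionalGeneration.from_pretrained("castorini/duot5-3b-msmarco")

query = "how do planes fly"
doc_a = "Wings generate lift by deflecting air downward as the plane moves forward."
doc_b = "The first commercial jet airliner entered service in 1952."

# Pairwise prompt (template assumed from the paper); the model is trained to emit "true"
# when Document0 is more relevant to the query than Document1, and "false" otherwise.
prompt = f"Query: {query} Document0: {doc_a} Document1: {doc_b} Relevant:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```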
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)!
Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667) |
cimm-kzn/enrudr-bert | e1ecd42e660a377a2ae5f8a608afeb4c5fa75675 | 2020-12-11T21:35:46.000Z | [
"pytorch",
"ru",
"en",
"arxiv:2004.03659",
"transformers"
] | null | false | cimm-kzn | null | cimm-kzn/enrudr-bert | 37 | null | transformers | 6,633 | ---
language:
- ru
- en
---
## EnRuDR-BERT
EnRuDR-BERT is a multilingual, cased model pre-trained on the raw part of the RuDReC corpus (1.4M reviews) and on an English collection of consumer comments on drug administration from [2]. Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
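A minimal sketch for obtaining contextual embeddings from this checkpoint (not part of the original card), assuming it loads with the standard BERT classes in `transformers`; the review sentence is illustrative only.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cimm-kzn/enrudr-bert")
model = AutoModel.from_pretrained("cimm-kzn/enrudr-bert")

# Encode a short drug review and take the [CLS] vector as a simple sentence representation
text = "The drug helped with the headache but caused mild nausea."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    cls_vector = model(**inputs).last_hidden_state[:, 0]
print(cls_vector.shape)  # (1, hidden_size)
```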
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE Using semantic analysis of texts for the identification of drugs with similar therapeutic effects.//Russian Chemical Bulletin. – 2017. – Т. 66. – №. 11. – С. 2180-2189.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
```
|
echarlaix/bert-large-uncased-whole-word-masking-finetuned-sst-2 | cad77115b29db5fcb25ef6b6ff5da941dd614d31 | 2021-05-19T16:48:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | echarlaix | null | echarlaix/bert-large-uncased-whole-word-masking-finetuned-sst-2 | 37 | null | transformers | 6,634 | Entry not found |
enelpi/bert-question-answering-uncased-squadv2_tr | 058ac980155af602e15d7dafdfee7c275c0eb826 | 2021-05-19T16:28:28.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | enelpi | null | enelpi/bert-question-answering-uncased-squadv2_tr | 37 | null | transformers | 6,635 | Entry not found |
flax-community/clip-rsicd | 8357af47297adf43a37a88e424a7cfffc04ec95c | 2022-04-24T21:02:26.000Z | [
"pytorch",
"jax",
"clip",
"feature-extraction",
"transformers",
"vision"
] | feature-extraction | false | flax-community | null | flax-community/clip-rsicd | 37 | null | transformers | 6,636 | ---
tags:
- vision
---
# Model Card: clip-rsicd
## Model Details
This model is a finetuned [CLIP by OpenAI](https://huggingface.co/openai/clip-vit-base-patch32). It is designed with an aim to improve zero-shot image classification, text-to-image and image-to-image retrieval specifically on remote sensing images.
### Model Date
July 2021
### Model Type
The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
### Model Version
We release several checkpoints for `clip-rsicd` model. Refer to [our github repo](https://github.com/arampacha/CLIP-rsicd#evaluation-results) for performance metrics on zero-shot classification for each of those.
### Training
To reproduce the fine-tuning procedure one can use released [script](https://github.com/arampacha/CLIP-rsicd/blob/master/run_clip_flax_tv.py).
The model was trained using batch size 1024, adafactor optimizer with linear warmup and decay with peak learning rate 1e-4 on 1 TPU-v3-8.
Full log of the training run can be found on [WandB](https://wandb.ai/wandb/hf-flax-clip-rsicd/runs/1ts243k3).
### Demo
Check out the model text-to-image and image-to-image capabilities using [this demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo).
### Documents
- [Fine-tuning CLIP on RSICD with HuggingFace and flax/jax on colab using TPU](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/Finetuning_CLIP_with_HF_and_jax.ipynb)
### Use with Transformers
```py
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("flax-community/clip-rsicd")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd")
url = "https://raw.githubusercontent.com/arampacha/CLIP-rsicd/master/data/stadium_1.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["residential area", "playground", "stadium", "forrest", "airport"]
inputs = processor(text=[f"a photo of a {l}" for l in labels], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
for l, p in zip(labels, probs[0]):
print(f"{l:<16} {p:.4f}")
```
[Try it on colab](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/clip_rsicd_zero_shot.ipynb)
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification.
In addition, we can imagine applications in defense and law enforcement, climate change and global warming, and even some consumer applications. A partial list of applications can be found [here](https://github.com/arampacha/CLIP-rsicd#applications). In general we think such models can be useful as digital assistants for humans engaged in searching through large collections of images.
We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
## Data
The model was trained on publicly available remote sensing image captions datasets. Namely [RSICD](https://github.com/201528014227051/RSICD_optimal), [UCM](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and [Sydney](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ). More information on the datasets used can be found on [our project page](https://github.com/arampacha/CLIP-rsicd#dataset).
## Performance and Limitations
### Performance
| Model-name | k=1 | k=3 | k=5 | k=10 |
| -------------------------------- | ----- | ----- | ----- | ----- |
| original CLIP | 0.572 | 0.745 | 0.837 | 0.939 |
| clip-rsicd (this model) | 0.843 | 0.958 | 0.977 | 0.993 |
## Limitations
The model is finetuned on RSI data but can contain some biases and limitations of the original CLIP model. Refer to [CLIP model card](https://huggingface.co/openai/clip-vit-base-patch32#limitations) for details on those.
|
google/roberta2roberta_L-24_gigaword | eed8e81a8b45221556517f48b0e6e40e70006111 | 2020-12-11T21:43:15.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:gigaword",
"arxiv:1907.12461",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | google | null | google/roberta2roberta_L-24_gigaword | 37 | null | transformers | 6,637 | ---
language: en
license: apache-2.0
datasets:
- gigaword
tags:
- summarization
---
# Roberta2Roberta_L-24_gigaword EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_gigaword/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on headline generation using the Gigaword dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for extreme summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_gigaword")
article = """australian shares closed down #.# percent monday
following a weak lead from the united states and
lower commodity prices , dealers said ."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# australian shares close down #.# percent.
```
|
healx/biomedical-slot-filling-reader-base | b1ad1e5d67113f0e25a9d9a9b8a9732db8cec6eb | 2021-11-16T09:16:36.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2109.08564",
"transformers",
"autotrain_compatible"
] | question-answering | false | healx | null | healx/biomedical-slot-filling-reader-base | 37 | null | transformers | 6,638 | Reader model for Biomedical slot filling see https://arxiv.org/abs/2109.08564 for details. The model is initialized with [biobert-base](https://huggingface.co/dmis-lab/biobert-v1.1). |
it5/it5-small-news-summarization | b619de1c052990b8a503dc8013165a1267ba08a9 | 2022-03-09T07:52:53.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | summarization | false | it5 | null | it5/it5-small-news-summarization | 37 | 1 | transformers | 6,639 | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
model-index:
- name: it5-small-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.333
name: "Test Rouge1 IlPost"
- type: rouge2
value: 0.162
name: "Test Rouge2 IlPost"
- type: rougeL
value: 0.273
name: "Test RougeL IlPost"
- type: bertscore
value: 0.395
name: "Test BERTScore IlPost"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: rouge1
value: 0.328
name: "Test Rouge1 Fanpage"
- type: rouge2
value: 0.148
name: "Test Rouge2 Fanpage"
- type: rougeL
value: 0.242
name: "Test RougeL Fanpage"
- type: bertscore
value: 0.377
name: "Test BERTScore Fanpage"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "8g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Small for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-small-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
kazandaev/opus-mt-ru-en-finetuned | 726f655b41154c21283acf053dce87166789e608 | 2022-02-27T20:47:54.000Z | [
"pytorch",
"tensorboard",
"rust",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kazandaev | null | kazandaev/opus-mt-ru-en-finetuned | 37 | null | transformers | 6,640 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ru-en-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned
This model is a fine-tuned version of [kazandaev/opus-mt-ru-en-finetuned](https://huggingface.co/kazandaev/opus-mt-ru-en-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0399
- Bleu: 43.5078
- Gen Len: 26.1256
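A minimal usage sketch (not part of the auto-generated card), assuming the fine-tuned Marian checkpoint works with the standard `transformers` translation pipeline; the Russian sentence is illustrative only.

```python
from transformers import pipeline

translate = pipeline("translation", model="kazandaev/opus-mt-ru-en-finetuned")
print(translate("Погода сегодня отличная.", max_length=64)[0]["translation_text"])
```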
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.7462 | 1.0 | 35147 | 1.0422 | 43.3884 | 26.1742 |
| 0.7501 | 2.0 | 70294 | 1.0407 | 43.5296 | 26.1671 |
| 0.7471 | 3.0 | 105441 | 1.0402 | 43.5133 | 26.1118 |
| 0.7514 | 4.0 | 140588 | 1.0401 | 43.492 | 26.1529 |
| 0.7565 | 5.0 | 175735 | 1.0399 | 43.5078 | 26.1256 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kuppuluri/telugu_bertu | b47dda16f5ba373e547b3a41b4410a219e20f7ff | 2021-12-02T18:14:46.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"te",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kuppuluri | null | kuppuluri/telugu_bertu | 37 | 2 | transformers | 6,641 | ---
language: te
---
# telugu_bertu
## Model description
This model is a BERT MLM model trained on Telugu. Please use it from the terminal as the web interface has encoding issues.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoModelWithLMHead, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("kuppuluri/telugu_bertu",
clean_text=False,
handle_chinese_chars=False,
strip_accents=False,
wordpieces_prefix='##')
model = AutoModelWithLMHead.from_pretrained("kuppuluri/telugu_bertu")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("మక్దూంపల్లి పేరుతో చాలా [MASK] ఉన్నాయి.")
```
|
lordtt13/blenderbot_small-news | 1599591fbf90efc193aeebcd2a3a28955f56e745 | 2021-02-11T08:21:09.000Z | [
"pytorch",
"tf",
"blenderbot-small",
"text2text-generation",
"en",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lordtt13 | null | lordtt13/blenderbot_small-news | 37 | null | transformers | 6,642 | ---
language: en
---
## BlenderBotSmall-News: Small version of a state-of-the-art open source chatbot, trained on custom summaries
### Details of BlenderBotSmall
The **BlenderBotSmall** model was presented in [A state-of-the-art open source chatbot](https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/) by *Facebook AI* and here are it's details:
- Facebook AI has built and open-sourced BlenderBot, the largest-ever open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators.
- The culmination of years of research in conversational AI, this is the first chatbot to blend a diverse set of conversational skills — including empathy, knowledge, and personality — together in one system.
- We achieved this milestone through a new chatbot recipe that includes improved decoding techniques, novel blending of skills, and a model with 9.4 billion parameters, which is 3.6x more than the largest existing system.
### Details of the downstream task (Summarization) - Dataset 📚
A custom dataset was used, which was hand prepared by [SmokeTrees Digital](https://github.com/smoke-trees) AI engineers. This data contains long texts and summaries.
### Model training
The training script is present [here](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb).
### Pipelining the Model
```python
model = transformers.BlenderbotSmallForConditionalGeneration.from_pretrained('lordtt13/blenderbot_small-news')
tokenizer = transformers.BlenderbotSmallTokenizer.from_pretrained("lordtt13/blenderbot_small-news")
nlp_fill = transformers.pipeline('summarization', model = model, tokenizer = tokenizer)
nlp_fill('The CBI on Saturday booked four former officials of Syndicate Bank and six others for cheating, forgery, criminal conspiracy and causing ₹209 crore loss to the state-run bank. The accused had availed home loans and credit from Syndicate Bank on the basis of forged and fabricated documents. These funds were fraudulently transferred to the companies owned by the accused persons.', min_length=5, max_length=40)
# Output:
# [{'summary_text': 'marize: the cbi booked four former officials of syndicate bank and six others for cheating , forgery , criminal conspiracy and causing 209 crore loss to the staterun bank'}]
```
> Created by [Tanmay Thakur](https://github.com/lordtt13) | [LinkedIn](https://www.linkedin.com/in/tanmay-thakur-6bb5a9154/)
|
megagonlabs/t5-base-japanese-web-8k | 9b69d115ad51cd03338a0d26cccc29c5a3bb30d5 | 2021-09-06T10:31:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ja",
"dataset:mc4",
"dataset:wiki40b",
"arxiv:1910.10683",
"transformers",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | megagonlabs | null | megagonlabs/t5-base-japanese-web-8k | 37 | 1 | transformers | 6,643 | ---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: apache-2.0
datasets:
- mc4
- wiki40b
---
# t5-base-japanese-web (with Byte-fallback, 8K)
## Description
[megagonlabs/t5-base-japanese-web](https://huggingface.co/megagonlabs/t5-base-japanese-web) is a T5 (Text-to-Text Transfer Transformer) model pre-trained on Japanese web texts.
Training codes are [available on GitHub](https://github.com/megagonlabs/t5-japanese).
The vocabulary size of this model is 8K.
[32K version is also available](https://huggingface.co/megagonlabs/t5-base-japanese-web).
### Corpora
We used following corpora for pre-training.
- Japanese in [mC4/3.0.1](https://huggingface.co/datasets/mc4) (We used [Tensorflow native format](https://github.com/allenai/allennlp/discussions/5056))
- 87,425,304 pages
- 782 GB in TFRecord format
- [Japanese](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) in [wiki40b/1.3.0](https://www.tensorflow.org/datasets/catalog/wiki40b)
- 828,236 articles (2,073,584 examples)
- 2 GB in TFRecord format
### Tokenizer
We used Japanese Wikipedia to train [SentencePiece](https://github.com/google/sentencepiece).
- Vocabulary size: 8,000
- [Byte-fallback](https://github.com/google/sentencepiece/releases/tag/v0.1.9): Enabled
### Parameters
- T5 model: [models/t5.1.1.base.gin](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/gin/models/t5.1.1.base.gin)
- Training steps: 1,000,000
It took about 126 hours with TPU v3-8
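A minimal loading sketch (not part of the original card). This is a pre-trained, not fine-tuned, T5, so it is mainly a starting point for fine-tuning; the span-filling example below is illustrative only and assumes the uploaded tokenizer exposes the usual T5 sentinel tokens.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("megagonlabs/t5-base-japanese-web-8k")
model = AutoModelForSeq2SeqLM.from_pretrained("megagonlabs/t5-base-japanese-web-8k")

# T5 span-corruption style input: ask the model to fill in the sentinel token
text = "吾輩は<extra_id_0>である。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```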
## Related models
- [日本語T5事前学習済みモデル (sonoisa/t5-base-japanese)](https://huggingface.co/sonoisa/t5-base-japanese)
- [日本語T5事前学習済みモデル (sonoisa/t5-base-japanese-mC4-Wikipedia)](https://huggingface.co/sonoisa/t5-base-japanese-mC4-Wikipedia)
## License
Apache License 2.0
## Citations
- mC4
Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
```bibtex
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
- wiki40b
```bibtex
@inproceedings{49029,
title = {Wiki-40B: Multilingual Language Model Dataset},
author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou},
year = {2020},
booktitle = {LREC 2020}
}
```
|
mrm8488/spanbert-base-finetuned-squadv2 | c4abebca5cf02dc3812a7f34004146ed28a93c35 | 2021-05-20T00:51:05.000Z | [
"pytorch",
"jax",
"bert",
"en",
"arxiv:1907.10529",
"transformers"
] | null | false | mrm8488 | null | mrm8488/spanbert-base-finetuned-squadv2 | 37 | null | transformers | 6,644 | ---
language: en
thumbnail:
---
# SpanBERT base fine-tuned on SQuAD v2
[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)).
## Details of SpanBERT
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model fine-tuning 🏋️
You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT)
```bash
python code/run_squad.py \
--do_train \
--do_eval \
--model spanbert-base-cased \
--train_file train-v2.0.json \
--dev_file dev-v2.0.json \
--train_batch_size 32 \
--eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 4 \
--max_seq_length 512 \
--doc_stride 128 \
--eval_metric best_f1 \
--output_dir squad2_output \
--version_2_with_negative \
--fp16
```
## Results Comparison 📝
| | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED |
| ---------------------- | ------------- | --------- | ------- | ------ |
| | F1 | F1 | avg. F1 | F1 |
| BERT (base) | 88.5 | 76.5 | 73.1 | 67.7 |
| SpanBERT (base) | [92.4](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | **83.6** (this one) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) |
| BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 |
| SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) |
Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/spanbert-base-finetuned-squadv2",
tokenizer="SpanBERT/spanbert-base-cased"
)
qa_pipeline({
'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately",
'question': "How has been working Manuel Romero lately?"
})
# Output: {'answer': 'very hard', 'end': 40, 'score': 0.9052708846768347, 'start': 31}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/spanbert-large-finetuned-tacred | 0a7170618e7eccb43a61ff24d58b8c65c266f4fc | 2021-05-20T01:01:51.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"en",
"arxiv:1907.10529",
"transformers"
] | feature-extraction | false | mrm8488 | null | mrm8488/spanbert-large-finetuned-tacred | 37 | null | transformers | 6,645 | ---
language: en
thumbnail:
---
# SpanBERT large fine-tuned on TACRED
[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [TACRED](https://nlp.stanford.edu/projects/tacred/) dataset by [them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)
## Details of SpanBERT
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Dataset 📚
[TACRED](https://nlp.stanford.edu/projects/tacred/) A large-scale relation extraction dataset with 106k+ examples over 42 TAC KBP relation types.
## Model fine-tuning 🏋️
You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT)
```bash
python code/run_tacred.py \
--do_train \
--do_eval \
--data_dir <TACRED_DATA_DIR> \
--model spanbert-large-cased \
--train_batch_size 32 \
--eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--max_seq_length 128 \
--output_dir tacred_dir \
--fp16
```
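For quick experimentation the checkpoint can also be loaded directly from the Hub. The sketch below (not part of the original card) only extracts contextual features; reproducing the TACRED relation classifier additionally requires the entity-marker preprocessing and classification head from `run_tacred.py` in the SpanBERT repository, and the tokenizer name used here is an assumption.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Tokenizer assumed to be the original cased SpanBERT vocabulary
tokenizer = AutoTokenizer.from_pretrained("SpanBERT/spanbert-large-cased")
model = AutoModel.from_pretrained("mrm8488/spanbert-large-finetuned-tacred")

text = "Bill Gates founded Microsoft in 1975."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state
print(features.shape)  # (1, sequence_length, hidden_size)
```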
## Results Comparison 📝
| | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED |
| ---------------------- | ------------- | --------- | ------- | ------ |
| | F1 | F1 | avg. F1 | F1 |
| BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 |
| SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) |
| BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 |
| SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | **70.8** (this one) |
Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers.
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
persiannlp/mt5-base-parsinlu-arc-comqa-obqa-multiple-choice | cd15568d1ad33eaed099e67ae3b1eb2947e43008 | 2021-09-23T16:19:52.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"transformers",
"multiple-choice",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-arc-comqa-obqa-multiple-choice | 37 | null | transformers | 6,646 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pysentimiento/robertuito-base-cased | ad3aad808a26dd2208003e8068137cbd40c4ad1b | 2021-11-19T13:57:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2111.09453",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pysentimiento | null | pysentimiento/robertuito-base-cased | 37 | null | transformers | 6,647 | # robertuito-base-cased
**WORK IN PROGRESS**
# RoBERTuito
## A pre-trained language model for social media text in Spanish
[**READ THE FULL PAPER**](https://arxiv.org/abs/2111.09453)
[Github Repository](https://github.com/pysentimiento/robertuito)
*RoBERTuito* is a pre-trained language model for user-generated content in Spanish, trained following RoBERTa guidelines on 500 million tweets. *RoBERTuito* comes in 3 flavors: cased, uncased, and uncased+deaccented.
We tested *RoBERTuito* on a benchmark of tasks involving user-generated text in Spanish. It outperforms other pre-trained language models for this language such as *BETO*, *BERTin* and *RoBERTa-BNE*. The 4 tasks selected for evaluation were: Hate Speech Detection (using SemEval 2019 Task 5, HatEval dataset), Sentiment and Emotion Analysis (using TASS 2020 datasets), and Irony detection (using IrosVa 2019 dataset).
| model | hate speech | sentiment analysis | emotion analysis | irony detection | score |
|:-------------------|:----------------|:---------------------|:-------------------|:-----------------|---------:|
| robertuito-uncased | 0.801 ± 0.010 | 0.707 ± 0.004 | 0.551 ± 0.011 | 0.736 ± 0.008 | 0.6987 |
| robertuito-deacc | 0.798 ± 0.008 | 0.702 ± 0.004 | 0.543 ± 0.015 | 0.740 ± 0.006 | 0.6958 |
| robertuito-cased | 0.790 ± 0.012 | 0.701 ± 0.012 | 0.519 ± 0.032 | 0.719 ± 0.023 | 0.6822 |
| roberta-bne | 0.766 ± 0.015 | 0.669 ± 0.006 | 0.533 ± 0.011 | 0.723 ± 0.017 | 0.6726 |
| bertin | 0.767 ± 0.005 | 0.665 ± 0.003 | 0.518 ± 0.012 | 0.716 ± 0.008 | 0.6666 |
| beto-cased | 0.768 ± 0.012 | 0.665 ± 0.004 | 0.521 ± 0.012 | 0.706 ± 0.007 | 0.6651 |
| beto-uncased | 0.757 ± 0.012 | 0.649 ± 0.005 | 0.521 ± 0.006 | 0.702 ± 0.008 | 0.6571 |
We release the pre-trained models on huggingface model hub:
- [RoBERTuito uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased)
- [RoBERTuito cased](https://huggingface.co/pysentimiento/robertuito-base-cased)
- [RoBERTuito deacc](https://huggingface.co/pysentimiento/robertuito-base-deacc)
## Masked LM
To test the masked LM, take into account that space is encoded inside SentencePiece's tokens. So, if you want to test
```
Este es un día<mask>
```
don't put a space between `día` and `<mask>`
## Usage
**IMPORTANT -- READ THIS FIRST**
*RoBERTuito* is not yet fully-integrated into `huggingface/transformers`. To use it, first install `pysentimiento`
```bash
pip install pysentimiento
```
and preprocess text using `pysentimiento.preprocessing.preprocess_tweet` before feeding it into the tokenizer
```python
from transformers import AutoTokenizer
from pysentimiento.preprocessing import preprocess_tweet
tokenizer = AutoTokenizer.from_pretrained('pysentimiento/robertuito-base-cased')
text = "Esto es un tweet estoy usando #Robertuito @pysentimiento 🤣"
preprocessed_text = preprocess_tweet(text)
tokenizer.tokenize(preprocessed_text)
# ['<s>','▁Esto','▁es','▁un','▁tweet','▁estoy','▁usando','▁','▁hashtag','▁','▁ro','bert','uito','▁@usuario','▁','▁emoji','▁cara','▁revolviéndose','▁de','▁la','▁risa','▁emoji','</s>']
```
We are working on integrating this preprocessing step into a Tokenizer within `transformers` library
## Citation
If you use *RoBERTuito*, please cite our paper:
```bibtex
@misc{perez2021robertuito,
title={RoBERTuito: a pre-trained language model for social media text in Spanish},
author={Juan Manuel Pérez and Damián A. Furman and Laura Alonso Alemany and Franco Luque},
year={2021},
eprint={2111.09453},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sbrandeis/autonlp-emotion-clf | 2a2378d1f2fba68c5a87cd285df488a69ff72e71 | 2021-12-07T08:16:13.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sbrandeis/autonlp-data-emotion-classification-pre",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | sbrandeis | null | sbrandeis/autonlp-emotion-clf | 37 | null | transformers | 6,648 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- sbrandeis/autonlp-data-emotion-classification-pre
co2_eq_emissions: 23.4692320403666
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 3252433
- CO2 Emissions (in grams): 23.4692320403666
## Validation Metrics
- Loss: 0.15040820837020874
- Accuracy: 0.9438026849828286
- Macro F1: 0.9093924156122387
- Micro F1: 0.9438026849828286
- Weighted F1: 0.9423168992167734
- Macro Precision: 0.9482796335288181
- Micro Precision: 0.9438026849828286
- Weighted Precision: 0.9466095426853992
- Macro Recall: 0.8842649120281764
- Micro Recall: 0.9438026849828286
- Weighted Recall: 0.9438026849828286
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/sbrandeis/autonlp-emotion-classification-pre-3252433
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sbrandeis/autonlp-emotion-classification-pre-3252433", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sbrandeis/autonlp-emotion-classification-pre-3252433", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
textattack/albert-base-v2-RTE | 4a2a6a5abfc24d88d493e0df81de4c9192d88793 | 2020-07-06T16:31:05.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/albert-base-v2-RTE | 37 | null | transformers | 6,649 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.776173285198556, as measured by the
eval set accuracy, found after 4 epochs.
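Since the card does not include a usage snippet, here is a minimal sketch for scoring a premise–hypothesis pair. The sentence pair is invented, and the index-to-label mapping should be checked against the model config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-RTE")
model = AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-RTE")

# RTE is a two-sentence entailment task, so both sentences are passed together
inputs = tokenizer("A man is playing a guitar on stage.",
                   "A man is performing music.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(dim=-1)))  # class index; label names depend on the model config
```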
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
valhalla/cogview-gpt2-test | 8d82689e18f46530c080a35c69097e56c8e62557 | 2021-06-21T07:00:17.000Z | [
"pytorch",
"cog_view",
"text-generation",
"transformers"
] | text-generation | false | valhalla | null | valhalla/cogview-gpt2-test | 37 | null | transformers | 6,650 | Entry not found |
w11wo/wav2vec2-xls-r-300m-korean | 72ac2f064315c4ab807c8a59ce1ce17536c1d520 | 2022-03-23T18:26:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ko",
"dataset:kresnik/zeroth_korean",
"arxiv:2111.09296",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | w11wo | null | w11wo/wav2vec2-xls-r-300m-korean | 37 | null | transformers | 6,651 | ---
language: ko
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- kresnik/zeroth_korean
model-index:
- name: Wav2Vec2 XLS-R 300M Korean
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Zeroth Korean
type: kresnik/zeroth_korean
args: clean
metrics:
- name: Test WER
type: wer
value: 29.54
- name: Test CER
type: cer
value: 9.53
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ko
metrics:
- name: Test WER
type: wer
value: 76.26
- name: Test CER
type: cer
value: 38.67
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ko
metrics:
- name: Test WER
type: wer
value: 73.18
---
# Wav2Vec2 XLS-R 300M Korean
Wav2Vec2 XLS-R 300M Korean is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Zeroth Korean](https://huggingface.co/datasets/kresnik/zeroth_korean) dataset.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All necessary scripts used for training could be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------- | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-korean` | 300M | XLS-R | `Zeroth Korean` Dataset |
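The checkpoint can be loaded through the standard `transformers` ASR pipeline. The snippet below is a minimal sketch; `korean_sample.wav` is a placeholder for a 16 kHz mono recording.
```python
from transformers import pipeline

# "korean_sample.wav" is a hypothetical local audio file sampled at 16 kHz
asr = pipeline("automatic-speech-recognition", model="w11wo/wav2vec2-xls-r-300m-korean")
result = asr("korean_sample.wav")
print(result["text"])
```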
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | WER | CER |
| -------------------------------- | ------ | ------ | ------ |
| `Zeroth Korean` | 0.2089 | 29.54% | 9.53% |
| `Robust Speech Event - Dev Data` | N/A | 76.26% | 38.67% |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 7.5e-05
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 50.0
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 19.7138 | 0.72 | 500 | 19.6427 | 1.0 | 1.0 |
| 4.8039 | 1.44 | 1000 | 4.7842 | 1.0 | 1.0 |
| 4.5619 | 2.16 | 1500 | 4.5608 | 0.9992 | 0.9598 |
| 4.254 | 2.88 | 2000 | 4.2729 | 0.9955 | 0.9063 |
| 4.1905 | 3.6 | 2500 | 4.2257 | 0.9903 | 0.8758 |
| 4.0683 | 4.32 | 3000 | 3.9294 | 0.9937 | 0.7911 |
| 3.486 | 5.04 | 3500 | 2.7045 | 1.0012 | 0.5934 |
| 2.946 | 5.75 | 4000 | 1.9691 | 0.9425 | 0.4634 |
| 2.634 | 6.47 | 4500 | 1.5212 | 0.8807 | 0.3850 |
| 2.4066 | 7.19 | 5000 | 1.2551 | 0.8177 | 0.3601 |
| 2.2651 | 7.91 | 5500 | 1.0423 | 0.7650 | 0.3039 |
| 2.1828 | 8.63 | 6000 | 0.9599 | 0.7273 | 0.3106 |
| 2.1023 | 9.35 | 6500 | 0.9482 | 0.7161 | 0.3063 |
| 2.0536 | 10.07 | 7000 | 0.8242 | 0.6767 | 0.2860 |
| 1.9803 | 10.79 | 7500 | 0.7643 | 0.6563 | 0.2637 |
| 1.9468 | 11.51 | 8000 | 0.7319 | 0.6441 | 0.2505 |
| 1.9178 | 12.23 | 8500 | 0.6937 | 0.6320 | 0.2489 |
| 1.8515 | 12.95 | 9000 | 0.6443 | 0.6053 | 0.2196 |
| 1.8083 | 13.67 | 9500 | 0.6286 | 0.6122 | 0.2148 |
| 1.819 | 14.39 | 10000 | 0.6015 | 0.5986 | 0.2074 |
| 1.7684 | 15.11 | 10500 | 0.5682 | 0.5741 | 0.1982 |
| 1.7195 | 15.83 | 11000 | 0.5385 | 0.5592 | 0.2007 |
| 1.7044 | 16.55 | 11500 | 0.5362 | 0.5524 | 0.2097 |
| 1.6879 | 17.27 | 12000 | 0.5119 | 0.5489 | 0.2083 |
| 1.656 | 17.98 | 12500 | 0.4990 | 0.5362 | 0.1968 |
| 1.6122 | 18.7 | 13000 | 0.4561 | 0.5092 | 0.1900 |
| 1.5919 | 19.42 | 13500 | 0.4778 | 0.5225 | 0.1975 |
| 1.5896 | 20.14 | 14000 | 0.4563 | 0.5098 | 0.1859 |
| 1.5589 | 20.86 | 14500 | 0.4362 | 0.4940 | 0.1725 |
| 1.5353 | 21.58 | 15000 | 0.4140 | 0.4826 | 0.1580 |
| 1.5441 | 22.3 | 15500 | 0.4031 | 0.4742 | 0.1550 |
| 1.5116 | 23.02 | 16000 | 0.3916 | 0.4748 | 0.1545 |
| 1.4731 | 23.74 | 16500 | 0.3841 | 0.4810 | 0.1542 |
| 1.4647 | 24.46 | 17000 | 0.3752 | 0.4524 | 0.1475 |
| 1.4328 | 25.18 | 17500 | 0.3587 | 0.4476 | 0.1461 |
| 1.4129 | 25.9 | 18000 | 0.3429 | 0.4242 | 0.1366 |
| 1.4062 | 26.62 | 18500 | 0.3450 | 0.4251 | 0.1355 |
| 1.3928 | 27.34 | 19000 | 0.3297 | 0.4145 | 0.1322 |
| 1.3906 | 28.06 | 19500 | 0.3210 | 0.4185 | 0.1336 |
| 1.358 | 28.78 | 20000 | 0.3131 | 0.3970 | 0.1275 |
| 1.3445 | 29.5 | 20500 | 0.3069 | 0.3920 | 0.1276 |
| 1.3159 | 30.22 | 21000 | 0.3035 | 0.3961 | 0.1255 |
| 1.3044 | 30.93 | 21500 | 0.2952 | 0.3854 | 0.1242 |
| 1.3034 | 31.65 | 22000 | 0.2966 | 0.3772 | 0.1227 |
| 1.2963 | 32.37 | 22500 | 0.2844 | 0.3706 | 0.1208 |
| 1.2765 | 33.09 | 23000 | 0.2841 | 0.3567 | 0.1173 |
| 1.2438 | 33.81 | 23500 | 0.2734 | 0.3552 | 0.1137 |
| 1.2487 | 34.53 | 24000 | 0.2703 | 0.3502 | 0.1118 |
| 1.2249 | 35.25 | 24500 | 0.2650 | 0.3484 | 0.1142 |
| 1.2229 | 35.97 | 25000 | 0.2584 | 0.3374 | 0.1097 |
| 1.2374 | 36.69 | 25500 | 0.2568 | 0.3337 | 0.1095 |
| 1.2153 | 37.41 | 26000 | 0.2494 | 0.3327 | 0.1071 |
| 1.1925 | 38.13 | 26500 | 0.2518 | 0.3366 | 0.1077 |
| 1.1908 | 38.85 | 27000 | 0.2437 | 0.3272 | 0.1057 |
| 1.1858 | 39.57 | 27500 | 0.2396 | 0.3265 | 0.1044 |
| 1.1808 | 40.29 | 28000 | 0.2373 | 0.3156 | 0.1028 |
| 1.1842 | 41.01 | 28500 | 0.2356 | 0.3152 | 0.1026 |
| 1.1668 | 41.73 | 29000 | 0.2319 | 0.3188 | 0.1025 |
| 1.1448 | 42.45 | 29500 | 0.2293 | 0.3099 | 0.0995 |
| 1.1327 | 43.17 | 30000 | 0.2265 | 0.3047 | 0.0979 |
| 1.1307 | 43.88 | 30500 | 0.2222 | 0.3078 | 0.0989 |
| 1.1419 | 44.6 | 31000 | 0.2215 | 0.3038 | 0.0981 |
| 1.1231 | 45.32 | 31500 | 0.2193 | 0.3013 | 0.0972 |
| 1.139 | 46.04 | 32000 | 0.2162 | 0.3007 | 0.0968 |
| 1.1114 | 46.76 | 32500 | 0.2122 | 0.2982 | 0.0960 |
| 1.111 | 47.48 | 33000 | 0.2125 | 0.2946 | 0.0948 |
| 1.0982 | 48.2 | 33500 | 0.2099 | 0.2957 | 0.0953 |
| 1.109 | 48.92 | 34000 | 0.2092 | 0.2955 | 0.0955 |
| 1.0905 | 49.64 | 34500 | 0.2088 | 0.2954 | 0.0953 |
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R 300M Korean was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
|
yobi/klue-roberta-base-ynat | 1f8ec1e7ee4ed746829a663b9879fae1b4602231 | 2021-06-26T15:57:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | yobi | null | yobi/klue-roberta-base-ynat | 37 | null | transformers | 6,652 | |
yseop/roberta-base-finance-hypernym-identification | 18381a24f6aa39417a7baa4fb1ff2560faa2ced9 | 2021-07-16T22:50:30.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | yseop | null | yseop/roberta-base-finance-hypernym-identification | 37 | 5 | sentence-transformers | 6,653 | ---
inference: false
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
<div style="clear: both;">
<div style="float: left; margin-right: 1em;">
<h1><strong>FinISH (Finance-Identifying Sroberta for Hypernyms)</strong></h1>
</div>
<div>
<h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2>
</div>
</div>
We present FinISH, a [SRoBERTa](https://huggingface.co/sentence-transformers/nli-roberta-base-v2) base model fine-tuned on the [FIBO ontology](https://spec.edmcouncil.org/fibo/) dataset for domain-specific representation learning on the [**Semantic Search**](https://www.sbert.net/examples/applications/semantic-search/README.html) downstream task.
## SRoBERTa Model Architecture
Sentence-RoBERTa (SRoBERTa) is a modification of the pretrained RoBERTa network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with RoBERTa to about 5 seconds with SRoBERTa, while maintaining the accuracy from RoBERTa. SRoBERTa has been evaluated on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.
Paper: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/pdf/1908.10084.pdf).
Authors: *Nils Reimers and Iryna Gurevych*.
## Details on the downstream task (Semantic Search for Text Classification)
The objective of this task is to correctly classify a given term in the financial domain according to its prototypical hypernym in a list of available hypernyms:
* Bonds
* Forward
* Funds
* Future
* MMIs (Money Market Instruments)
* Option
* Stocks
* Swap
* Equity Index
* Credit Index
* Securities restrictions
* Parametric schedules
* Debt pricing and yields
* Credit Events
* Stock Corporation
* Central Securities Depository
* Regulatory Agency
This kind-based approach relies on identifying the closest hypernym semantically to the given term (even if they possess common properties with other hypernyms).
#### Data Description
The data is a scraped list of term definitions from the FIBO ontology website where each definition has been mapped to its closest hypernym from the proposed labels.
For multi-sentence definitions, we applied sentence-splitting by punctuation delimiters. We also applied lowercase transformation on all input data.
#### Data Instances
The dataset contains a label representing the hypernym of the given definition.
```json
{
'label': 'bonds',
'definition': 'callable convertible bond is a kind of callable bond, convertible bond.'
}
```
#### Data Fields
**label**: Can be one of the 17 predefined hypernyms.
**definition**: Financial term definition relating to a concept or object in the financial domain.
#### Data Splits
The data contains training data with **317101** entries.
#### Test set metrics
The representational learning model is evaluated on a representative test set with 20% of the entries. The test set is scored based on the following metrics:
* Average Accuracy
* Mean Rank (position of the correct label in a set of 5 model predictions)
We evaluate FinISH according to these metrics, where it outperforms other state-of-the-art sentence embeddings methods in this task.
* Average Accuracy: **0.73**
* Mean Rank: **1.61**
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
import torch
model = SentenceTransformer('yseop/roberta-base-finance-hypernym-identification')
# Our corpus containing the list of hypernym labels
hypernyms = ['Bonds',
    'Forward',
    'Funds',
    'Future',
    'MMIs',
    'Option',
    'Stocks',
    'Swap',
    'Equity Index',
    'Credit Index',
    'Securities restrictions',
    'Parametric schedules',
    'Debt pricing and yields',
    'Credit Events',
    'Stock Corporation',
    'Central Securities Depository',
    'Regulatory Agency']
hypernym_embeddings = model.encode(hypernyms, convert_to_tensor=True)
# Query sentences are financial terms to match to the predefined labels
queries = ['Convertible bond', 'weighted average coupon', 'Restriction 144-A']
# Find the closest 5 hypernyms of the corpus for each query sentence based on cosine similarity
top_k = min(5, len(hypernyms))
for query in queries:
query_embedding = model.encode(query, convert_to_tensor=True)
# We use cosine-similarity and torch.topk to find the highest 5 scores
cos_scores = util.pytorch_cos_sim(query_embedding, hypernym_embeddings)[0]
top_results = torch.topk(cos_scores, k=top_k)
print("\
\
======================\
\
")
print("Query:", query)
print("\
Top 5 most similar hypernyms:")
for score, idx in zip(top_results[0], top_results[1]):
print(hypernyms[idx], "(Score: {:.4f})".format(score))
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Query sentences are financial terms to match to the predefined labels
queries = ['Convertible bond', 'weighted average coupon', 'Restriction 144-A']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('yseop/roberta-base-finance-hypernym-identification')
model = AutoModel.from_pretrained('yseop/roberta-base-finance-hypernym-identification')
# Tokenize sentences
encoded_input = tokenizer(queries, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
query_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Query embeddings:")
print(query_embeddings)
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation. |
zhuqing/bert-base-uncased-reddit-business-v2 | 5af3dacba3726ec11b6b43bf23d4cfc418abfedb | 2021-08-03T06:15:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-reddit-business-v2 | 37 | null | transformers | 6,654 | Entry not found |
microsoft/tapex-large-finetuned-tabfact | 690c413ce530ea49370b5f4fe452ce2628460e1e | 2022-07-14T10:10:10.000Z | [
"pytorch",
"bart",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:mit"
] | text-classification | false | microsoft | null | microsoft/tapex-large-finetuned-tabfact | 37 | null | transformers | 6,655 | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- tab_fact
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-large` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
zhufy/squad-ms-bert-base | 61d4fd76fed22f66185a866c500b40ac77b3465b | 2022-04-23T05:09:03.000Z | [
"pytorch",
"bert",
"question-answering",
"Malay",
"dataset:Malay SQuAD",
"transformers",
"bert-base",
"autotrain_compatible"
] | question-answering | false | zhufy | null | zhufy/squad-ms-bert-base | 37 | null | transformers | 6,656 | ---
language: Malay
task: extractive question answering
datasets: Malay SQuAD
tags:
- bert-base
---
# Model Description
This model is for Malay extractive question answering. It is based on the [malay-huggingface/bert-base-bahasa-cased](https://huggingface.co/malay-huggingface/bert-base-bahasa-cased/tree/main) model, and it is case-sensitive: it makes a difference between english and English.
# Training data
[Malay SQuAD v2.0](https://github.com/huseinzol05/malay-dataset/tree/master/question-answer/squad)
# How to use
You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline:
``` python
>>> from transformers.pipelines import pipeline
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-ms-bert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-ms-bert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> context = "Pada manusia, tindak balas ini diaktifkan dengan pelengkap
pengikatan kepada antibodi yang telah melekat pada mikrob ini
atau pengikatan protein pelengkap kepada karbohidrat pada permukaan
mikrob. Isyarat pengiktirafan ini mencetuskan tindak balas pembunuhan
yang pantas. Kelajuan tindak balas adalah hasil penguatan isyarat
yang berlaku berikutan pengaktifan proteolytik berturutan molekul
pelengkap, yang juga protease. Selepas protein pelengkap pada mulanya
mengikat kepada mikrob, mereka mengaktifkan aktiviti protease mereka,
yang seterusnya mengaktifkan protease pelengkap lain, dan sebagainya.
Ini menghasilkan cascade bermangkin yang menguatkan isyarat awal dengan
maklum balas positif terkawal. Kastil menghasilkan penghasilan peptida
yang menarik sel imun, meningkatkan kebolehtelapan vaskular, dan opsonize
(kot) permukaan patogen, menandakannya untuk kemusnahan. Pemendapan
pelengkap ini juga boleh membunuh sel secara terus dengan mengganggu
membran plasma mereka."
>>> question = "Protein pelengkap mengikat molekul apa yang berada di
permukaan mikrob untuk mendapatkan tindak balas imunWhat
are two basic primary resources used to guage complexity?"
>>> inputs = {"question": question,
"context":context }
>>> nlp(inputs)
{'score': 0.9848766922950745,
'start': 162,
'end': 173,
'answer': 'karbohidrat'}
``` |
mrm8488/electricidad-small-finetuned-amazon-review-classification | cbaeb0d04ba55182b54925e2520318ef320757a5 | 2022-03-14T15:10:38.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/electricidad-small-finetuned-amazon-review-classification | 37 | null | transformers | 6,657 | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
widget:
- text: "me parece muy mal , se salía el producto por la caja y venían vacios , lo devolvere"
- text: "Correa de buena calidad, con un interior oscuro. Cumple perfectamente su función y se intercambia fácilmente. Una buena opción para cambiar el aspecto del reloj"
- text: "cumple su cometido sin nada que merezca la pena destacar"
metrics:
- accuracy
model-index:
- name: electricidad-small-finetuned-amazon-review-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-amazon-review-classification
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9506
- Accuracy: 0.5832
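A minimal usage sketch (not part of the original training setup) for classifying a Spanish review with the `transformers` pipeline; the returned label names come from the model config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/electricidad-small-finetuned-amazon-review-classification",
)
# Example review taken from the widget examples above
print(classifier("Correa de buena calidad, cumple perfectamente su función."))
```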
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0258 | 1.0 | 6250 | 1.0209 | 0.5502 |
| 0.9668 | 2.0 | 12500 | 0.9960 | 0.565 |
| 0.953 | 3.0 | 18750 | 0.9802 | 0.5704 |
| 0.9201 | 4.0 | 25000 | 0.9831 | 0.567 |
| 0.902 | 5.0 | 31250 | 0.9814 | 0.5672 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6 |
canwenxu/laprador | 0ba3b6b7b7327bf3956b1bb6d8a20ac35cfaf44c | 2022-04-25T08:13:10.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:2203.06169",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | canwenxu | null | canwenxu/laprador | 37 | 1 | transformers | 6,658 | ---
license: apache-2.0
---
# 🦮 LaPraDoR
Pretrained checkpoint for Findings of ACL 2022 paper [LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval](https://arxiv.org/abs/2203.06169).
To use this model, please refer to our [GitHub repo](https://github.com/JetRunner/LaPraDoR).
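For readers who only need sentence embeddings, the following rough sketch loads the checkpoint directly with `transformers`; the mean-pooling step is an assumption here, so defer to the GitHub repo for the exact encoding procedure used in the paper.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("canwenxu/laprador")
model = AutoModel.from_pretrained("canwenxu/laprador")

sentences = ["what is dense retrieval?",
             "dense retrieval maps queries and documents to vectors"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assumed mean pooling over non-padding tokens
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(torch.nn.functional.cosine_similarity(embeddings[0:1], embeddings[1:2]))
```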
|
abdelrahmanzied/bert-fake-news-classifier | 37cbbc55de57a13d08f94d4e57829848f8106ca1 | 2022-04-01T16:40:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | abdelrahmanzied | null | abdelrahmanzied/bert-fake-news-classifier | 37 | null | transformers | 6,659 | ---
license: mit
---
|
nickil/real-fake-news | 9c8e1012fb30fb2ecfe01cd56c58c81d2ab56976 | 2022-04-07T05:50:48.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | nickil | null | nickil/real-fake-news | 37 | null | transformers | 6,660 | ---
license: mit
---
Data: [https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) |
palakagl/bert_MultiClass_TextClassification | be52ae7f69b8bda1a0bc94b96fe5200c358296c1 | 2022-04-07T17:06:55.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:palakagl/autotrain-data-PersonalAssitant",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | palakagl | null | palakagl/bert_MultiClass_TextClassification | 37 | null | transformers | 6,661 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- palakagl/autotrain-data-PersonalAssitant
co2_eq_emissions: 5.080390550458655
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 717221775
- CO2 Emissions (in grams): 5.080390550458655
## Validation Metrics
- Loss: 0.35279911756515503
- Accuracy: 0.9269102990033222
- Macro F1: 0.9261839948926327
- Micro F1: 0.9269102990033222
- Weighted F1: 0.9263981751760975
- Macro Precision: 0.9273912049203341
- Micro Precision: 0.9269102990033222
- Weighted Precision: 0.9280084437800646
- Macro Recall: 0.927250645380574
- Micro Recall: 0.9269102990033222
- Weighted Recall: 0.9269102990033222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/palakagl/autotrain-PersonalAssitant-717221775
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("palakagl/autotrain-PersonalAssitant-717221775", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("palakagl/autotrain-PersonalAssitant-717221775", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
patrickvonplaten/wav2vec2-base-960h-4-gram | edb5c3d28f5851632687c9de5826744fecfc9176 | 2022-05-24T11:09:47.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-960h-4-gram | 37 | null | transformers | 6,662 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: patrickvonplaten/wav2vec2-base-960h-4-gram
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 2.59
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 6.46
---
# Wav2Vec2-Base-960h + 4-gram
This model is identical to [Facebook's Wav2Vec2-Base-960h](https://huggingface.co/facebook/wav2vec2-base-960h), but is
augmented with an English 4-gram language model. The `4-gram.arpa.gz` file from [Librispeech's official ngrams](https://www.openslr.org/11) is used.
## Evaluation
This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-base-960h-4-gram** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torch
from jiwer import wer
model_id = "patrickvonplaten/wav2vec2-base-960h-4-gram"
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = AutoModelForCTC.from_pretrained(model_id).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
inputs = {k: v.to("cuda") for k,v in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy()).text[0]
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print(wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.59 | 6.46 | |
Intel/bart-large-mrpc | 79930c8973deb7fb3a3c72fd040b336e2c3d267e | 2022-04-21T08:11:16.000Z | [
"pytorch",
"bart",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Intel | null | Intel/bart-large-mrpc | 37 | null | transformers | 6,663 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bart-large-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8774509803921569
- name: F1
type: f1
value: 0.9119718309859154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-mrpc
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- Accuracy: 0.8775
- F1: 0.9120
- Combined Score: 0.8947
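As the card does not show an inference example, here is a minimal sketch for paraphrase detection on an invented sentence pair; the predicted label name is taken from the model config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/bart-large-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/bart-large-mrpc")

# Hypothetical sentence pair for the MRPC paraphrase task
inputs = tokenizer("The company said quarterly profit rose 15 percent.",
                   "Quarterly profit at the company increased by 15 percent.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```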
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
clips/republic | ef602ad12874e6db4edb465d63db1d1b5c5967c3 | 2022-06-10T09:06:29.000Z | [
"pytorch",
"bert",
"text-classification",
"nl",
"transformers",
"text classification",
"sentiment analysis",
"domain adaptation"
] | text-classification | false | clips | null | clips/republic | 37 | null | transformers | 6,664 | ---
pipeline_tag: text-classification
language:
- nl
tags:
- text classification
- sentiment analysis
- domain adaptation
widget:
- text: "De NMBS heeft recent de airconditioning in alle treinen vernieuwd."
example_title: "POS-NMBS"
- text: "De wegenwerken langs de E34 blijven al maanden aanhouden."
example_title: "NEG-AWV"
- text: "Natuur en Bos is erin geslaagd 100 hectaren bosgebied te beschermen."
example_title: "POS-ANB"
- text: "Het FWO financiert te weinig excellent onderzoek."
example_title: "NEG-FWO"
- text: "De Lijn is op zoek naar nieuwe buschauffeurs."
example_title: "NEU-De Lijn"
---
# RePublic
### Model description
RePublic (<u>re</u>putation analyzer for <u>public</u> service organizations) is a Dutch BERT model based on BERTje (De Vries, 2019). The model was designed to predict the sentiment in Dutch-language news article text about public agencies. RePublic was developed by CLiPS in collaboration with [Jan Boon](https://www.uantwerpen.be/en/staff/jan-boon/).
### How to use
The model can be loaded and used to make predictions as follows:
```
from transformers import pipeline
model_path = 'clips/republic'
pipe = pipeline(task="text-classification",
model=model_path, tokenizer=model_path)
text = … # load your text here
output = pipe(text)
prediction = output[0]['label'] # 0=”neutral”; 1=”positive”; 2=”negative”
```
### Training data and procedure
RePublic was domain-adapted on 91 661 Flemish news articles from three popular Flemish news providers between 2000 and 2020 (“Het Laatste Nieuws”, “Het Nieuwsblad” and “De Morgen”). These articles mention at least one out of a pre-defined list of 24 public service organizations, which contains, a.o., De Lijn (public transport organization), VDAB (Flemish job placement service), and Agentschap Zorg en Gezondheid (healthcare service). The domain adaptation was achieved by performing BERT’s language modeling tasks (masked language modeling & next sentence prediction).
The model was then fine-tuned on a sentiment classification task (“positive”, “negative”, “neutral”). The supervised data consisted of 4404 annotated sentences mentioning Flemish public agencies of which 1257 sentences were positive, 1485 sentences were negative and 1662 sentences were neutral. Fine-tuning was performed for 4 epochs using a batch size of 8 and a learning rate of 5e-5. In order to evaluate the model, a 10-fold cross validation experiment was conducted. The results of this experiment can be found below.
| **Class** | **Precision (%)** | **Recall (%)** | **F1-score (%)** |
|:---:|:---:|:---:|:---:|
| _Positive_ | 87.3 | 88.6 | 88.0 |
| _Negative_ | 86.4 | 86.5 | 86.5 |
| _Neutral_ | 85.3 | 84.2 | 84.7 |
| _Macro-averaged_ | 86.3 | 86.4 | 86.4 | |
JeffreyLau/SikuGPT2-poem | c9030badb2883ff86f6e3eda4956d19b81e7587f | 2022-07-10T01:29:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"zh",
"transformers"
] | text-generation | false | JeffreyLau | null | JeffreyLau/SikuGPT2-poem | 37 | 2 | transformers | 6,665 | ---
language: zh
widget:
- text: "[CLS] 明 月 幾 時 有 ,"
- text: "[CLS] 大 漠 孤 烟 直 ,"
- text: "[CLS] 李 白 乘 舟 將 慾 行 ,"
max_length: 50
---
# SikuGPT2-Poem Model
## Model description
The model is used to generate Chinese ancient poems. You can download the model via HuggingFace from the link [SikuGPT2-poem](https://huggingface.co/JeffreyLau/SikuGPT2-poem).
Since the parameter `skip_special_tokens` is used in pipelines.py, special tokens such as [SEP] and [UNK] will be deleted, so the output of the Hosted inference API (right) may not be displayed properly.
## How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("JeffreyLau/SikuGPT2-poem")
>>> model = GPT2LMHeadModel.from_pretrained("JeffreyLau/SikuGPT2-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]明 月 幾 時 有 ,", max_length=50, do_sample=True)
[{'generated_text': '[CLS] 明 月 幾 時 有 , 斜 陽 正 照 人 。 落 花 雖 一 夜 , 何 處 好 春 春 不 管 。 西 風 摇 落 不 禁 愁 , 一 夜 寒 聲 入 客 喉 。 十 月 寒 威 侵 客 鬢 , 四 更 清 怨 入 心 肝 。 春 風 吹 作 萬 紅 銷 , 玉 頰 金 腮 醉 欲 眠 。 柳 色 相 和 風 雨 惡 , 不 堪 芳 節 又 斜 暉 。 何 日 君 王 許 入 朝 , 五 雲 驄 馬 走 黃 埃 。 白 麻 賜 出 朝 回 日 , 一 片 春 光 滿 上 都 。 萬 里 飛 雲 上 翠 微 , 日 華 摇 曳 照 樓 臺 。 自 從 此 際 無 人 賞 , 還 傍 城 邊 一 穗 歸 。 三 徑 深 幽 古 未 逢 , 野 人 行 已 自 多 求 。 高 亭 對 水 空 無 策 , 冷 雨 疎 櫺 獨 自 垂 。 好 句 滿 山 皆 已 有 , 清 詩 三 兩 未 全 無 。 一 徑 危 亭 接 武 湖 , 長 沙 自 有 世 情 知 。 無 人 到 處 題 名 處 , 不 爲 春 風 一 點 開 。 一 春 佳 處 到 清 明 , 日 日 詩 如 錦 繡 囊 。 却 是 梅 花 有 餘 韵 , 便 隨 風 雨 寄 林 坰 。 秋 宵 獨 坐 最 多 情 , 客 裏 無 人 獨 坐 明 。 月 暗 竹 窗 深 又 白 , 霜 濃 樹 葉 下 還 清 。 誰 同 坐 待 東 園 桂 , 獨 對 寒 窗 獨 自 明 。 平 生 最 羨 太 常 孫 , 十 二 行 人 日 暮 歸 。 夜 半 天 壇 雲 雨 合 , 玉 鸞 啼 罷 九 成 宮 。 萬 古 蒼 梧 葉 , 南 天 白 象 尊 。 千 年 無 鶴 舞 , 一 夜 有 龍 吟 。'}]
```
When the parameter skip_special_tokens is False:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("JeffreyLau/SikuGPT2-poem")
>>> model = GPT2LMHeadModel.from_pretrained("JeffreyLau/SikuGPT2-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS] 明 月 幾 時 有 ,", max_length=100, do_sample=True)
[{'generated_text': '[CLS] 明 月 幾 時 有 , 斜 陽 正 照 人 。 落 花 雖 一 夜 , 何 處 好 春 春 不 管 。 西 風 摇 落 不 禁 愁 , 一 夜 寒 聲 入 客 喉 。 十 月 寒 威 侵 客 鬢 , 四 更 清 怨 入 心 肝 。 春 風 吹 作 萬 紅 銷 , 玉 頰 金 腮 醉 欲 眠 。 柳 色 相 和 風 雨 惡 , 不 堪 芳 節 又 斜 暉 。 何 日 君 王 許 入 朝 , 五 雲 驄 馬 走 黃 埃 。 白 麻 賜 出 朝 回 日 , 一 片 春 光 滿 上 都 。 萬 里 飛 雲 上 翠 微 , 日 華 摇 曳 照 樓 臺 。 自 從 此 際 無 人 賞 , 還 傍 城 邊 一 穗 歸 。 三 徑 深 幽 古 未 逢 , 野 人 行 已 自 多 求 。 高 亭 對 水 空 無 策 , 冷 雨 疎 櫺 獨 自 垂 。 好 句 滿 山 皆 已 有 , 清 詩 三 兩 未 全 無 。 一 徑 危 亭 接 武 湖 , 長 沙 自 有 世 情 知 。 無 人 到 處 題 名 處 , 不 爲 春 風 一 點 開 。 一 春 佳 處 到 清 明 , 日 日 詩 如 錦 繡 囊 。 却 是 梅 花 有 餘 韵 , 便 隨 風 雨 寄 林 坰 。 秋 宵 獨 坐 最 多 情 , 客 裏 無 人 獨 坐 明 。 月 暗 竹 窗 深 又 白 , 霜 濃 樹 葉 下 還 清 。 誰 同 坐 待 東 園 桂 , 獨 對 寒 窗 獨 自 明 。 平 生 最 羨 太 常 孫 , 十 二 行 人 日 暮 歸 。 夜 半 天 壇 雲 雨 合 , 玉 鸞 啼 罷 九 成 宮 。 萬 古 蒼 梧 葉 , 南 天 白 象 尊 。 千 年 無 鶴 舞 , 一 夜 有 龍 吟 。'}]
```
## Training data
“Siku Quanshu” full-text corpus was used as Training Data which is same as the project of [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert) to train SikuGPT2.
[Chinese-poetry](https://github.com/chinese-poetry/chinese-poetry) was used as Training Data to train SikuGPT2-poem based on SikuGPT2.
## Training procedure
The model is Pre-trained by [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py). We pre-train the model with a sequence length of 512. We use extended vocabulary to handle out-of-vocabulary words.
## Citation
The paper has not been published. You can just cite this url instead. |
BaxterAI/SentimentClassifier | 1c27a5dfd4aee05b8ac858a107b9d87128eff6ea | 2022-05-25T04:28:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:amazon_polarity",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | BaxterAI | null | BaxterAI/SentimentClassifier | 37 | null | transformers | 6,666 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
- f1
model-index:
- name: SentimentClassifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.91
- name: F1
type: f1
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentClassifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4425
- Accuracy: 0.91
- F1: 0.91
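A minimal inference sketch with the `transformers` pipeline; the example review is invented and the label names returned depend on the model config.
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="BaxterAI/SentimentClassifier")
print(sentiment("This product exceeded my expectations, would buy again."))
```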
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-thai-upos | fa4a2482cc1692deb3c4bc5e7402a42bdc07755e | 2022-05-29T10:45:44.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"th",
"dataset:universal_dependencies",
"transformers",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-thai-upos | 37 | null | transformers | 6,667 | ---
language:
- "th"
tags:
- "thai"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "หลายหัวดีกว่าหัวเดียว"
---
# deberta-base-thai-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [deberta-base-thai](https://huggingface.co/KoichiYasuoka/deberta-base-thai). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-thai-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-thai-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-base-thai-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
ITESM/st_demo_6 | 7ba6e49bac6dc1f197919b007d9b98bd995e6a8f | 2022-06-05T05:06:07.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"es",
"dataset:hackathon-pln-es/nli-es",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | ITESM | null | ITESM/st_demo_6 | 37 | null | sentence-transformers | 6,668 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language:
- es
datasets:
- hackathon-pln-es/nli-es
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---
# bertin-roberta-base-finetuning-esnli
This is a [sentence-transformers](https://www.SBERT.net) model trained on a
collection of NLI tasks for Spanish. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Based around the siamese networks approach from [this paper](https://arxiv.org/pdf/1908.10084.pdf).
<!--- Describe your model here -->
You can see a demo for this model [here](https://huggingface.co/spaces/hackathon-pln-es/Sentence-Embedding-Bertin).
You can find our other model, **paraphrase-spanish-distilroberta** [here](https://huggingface.co/hackathon-pln-es/paraphrase-spanish-distilroberta) and its demo [here](https://huggingface.co/spaces/hackathon-pln-es/Paraphrase-Bertin).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]
model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
model = AutoModel.from_pretrained('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Our model was evaluated on the task of Semantic Textual Similarity using the [SemEval-2015 Task](https://alt.qcri.org/semeval2015/task2/) for [Spanish](http://alt.qcri.org/semeval2015/task2/data/uploads/sts2015-es-test.zip). We measure the following similarity metrics:
| | [BETO STS](https://huggingface.co/espejelomar/sentece-embeddings-BETO) | BERTIN STS (this model) | Relative improvement |
|-------------------:|---------:|-----------:|---------------------:|
| cosine_pearson | 0.609803 | 0.683188 | +12.03 |
| cosine_spearman | 0.528776 | 0.615916 | +16.48 |
| euclidean_pearson | 0.590613 | 0.672601 | +13.88 |
| euclidean_spearman | 0.526529 | 0.611539 | +16.15 |
| manhattan_pearson | 0.589108 | 0.672040 | +14.08 |
| manhattan_spearman | 0.525910 | 0.610517 | +16.09 |
| dot_pearson | 0.544078 | 0.600517 | +10.37 |
| dot_spearman | 0.460427 | 0.521260 | +13.21 |
## Training
The model was trained with the parameters:
**Dataset**
We used a collection of datasets of Natural Language Inference as training data:
- [ESXNLI](https://raw.githubusercontent.com/artetxem/esxnli/master/esxnli.tsv), only the part in spanish
- [SNLI](https://nlp.stanford.edu/projects/snli/), automatically translated
- [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/), automatically translated
The whole dataset used is available [here](https://huggingface.co/datasets/hackathon-pln-es/nli-es).
Here is the trick we used to increase the amount of training data:
```
for row in reader:
if row['language'] == 'es':
sent1 = row['sentence1'].strip()
sent2 = row['sentence2'].strip()
add_to_samples(sent1, sent2, row['gold_label'])
add_to_samples(sent2, sent1, row['gold_label']) #Also add the opposite
```
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader`
of length 1818 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 909,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Authors
[Anibal Pérez](https://huggingface.co/Anarpego),
[Emilio Tomás Ariza](https://huggingface.co/medardodt),
[Lautaro Gesuelli](https://huggingface.co/Lgesuelli) y
[Mauricio Mazuecos](https://huggingface.co/mmazuecos). |
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned | 37fbf10d51fa4f6ff0a6bdf1d82c9e48fd99527d | 2022-06-14T16:25:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ajtamayoh | null | ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned | 37 | null | transformers | 6,669 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the LivingNER shared task 2022 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0546
- Precision: 0.8574
- Recall: 0.7366
- F1: 0.7924
- Accuracy: 0.9893
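A minimal inference sketch for tagging entities in a Spanish clinical sentence; the example sentence is invented, and the entity types returned follow the LivingNER annotation scheme as defined in the model config.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned",
    aggregation_strategy="simple",  # groups word pieces into full entity spans
)
print(ner("Paciente varón de 34 años con infección por VIH."))
```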
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0505 | 1.0 | 2568 | 0.0434 | 0.9399 | 0.6781 | 0.7878 | 0.9886 |
| 0.0393 | 2.0 | 5136 | 0.0450 | 0.9384 | 0.6947 | 0.7984 | 0.9892 |
| 0.0306 | 3.0 | 7704 | 0.0451 | 0.9497 | 0.6951 | 0.8027 | 0.9897 |
| 0.0266 | 4.0 | 10272 | 0.0422 | 0.9646 | 0.6904 | 0.8048 | 0.9900 |
| 0.0208 | 5.0 | 12840 | 0.0494 | 0.9576 | 0.6969 | 0.8067 | 0.9902 |
| 0.0141 | 6.0 | 15408 | 0.0506 | 0.8407 | 0.7352 | 0.7844 | 0.9890 |
| 0.0093 | 7.0 | 17976 | 0.0546 | 0.8574 | 0.7366 | 0.7924 | 0.9893 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
waboucay/camembert-large-finetuned-rua_wl_3_classes | 83afb9e9ef5a40f261a69fe441dbc18c120e74e5 | 2022-06-19T14:35:04.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-rua_wl_3_classes | 37 | null | transformers | 6,670 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 75.3 | 74.9 |
| test | 75.8 | 75.3 | |
Supreeth/Toxic-XLM_RoBERTa | dad6a5f6ec12bbae449053e2d175172a69e1145f | 2022-06-20T13:21:10.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | Supreeth | null | Supreeth/Toxic-XLM_RoBERTa | 37 | null | transformers | 6,671 | ---
license: afl-3.0
---
|
climabench/miniLM-cdp-all | c6805859ce8567ef523dfc3aa6804e4fcef63bbf | 2022-06-25T09:58:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | climabench | null | climabench/miniLM-cdp-all | 37 | null | transformers | 6,672 | Entry not found |
f00d/Multilingual-MiniLM-L12-H384-CLM-finetuned-wikipedia_bn | bee036fe2f6c4a30cdd21b1cc4099fc2a96039e0 | 2022-07-07T11:10:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | f00d | null | f00d/Multilingual-MiniLM-L12-H384-CLM-finetuned-wikipedia_bn | 37 | null | transformers | 6,673 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Multilingual-MiniLM-L12-H384-CLM-finetuned-wikipedia_bn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multilingual-MiniLM-L12-H384-CLM-finetuned-wikipedia_bn
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pnichite/YTFineTuneBert | bbe87bc09965f437992b70efb70fed4f03e92614 | 2022-07-09T17:46:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pnichite | null | pnichite/YTFineTuneBert | 37 | null | transformers | 6,674 | Entry not found |
zhernosek12/classif_sasha | 7a29f8eef716e4c14b358f8e9d8fd1773406535c | 2022-07-13T14:48:37.000Z | [
"pytorch",
"layoutlmv2",
"text-classification",
"transformers"
] | text-classification | false | zhernosek12 | null | zhernosek12/classif_sasha | 37 | null | transformers | 6,675 | Entry not found |
sam34738/xlm-roberta-hindi-nisha | 69b0cc16c8290ba791c7aae0adb726261be4ca9a | 2022-07-14T09:40:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | sam34738 | null | sam34738/xlm-roberta-hindi-nisha | 37 | null | transformers | 6,676 | ---
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-hindi-nisha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-hindi-nisha
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-emotion](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5305
## Model description
More information needed
## Intended uses & limitations
More information needed
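Since the rest of this card is unfilled, here is only a minimal, hedged inference sketch; the label set comes from the undocumented fine-tuning data, so the output labels (possibly generic `LABEL_0`, `LABEL_1`, ...) are placeholders.
```python
from transformers import pipeline
# Minimal sketch: load the fine-tuned checkpoint in a text-classification pipeline.
# The concrete label names depend on the (undocumented) fine-tuning data.
classifier = pipeline("text-classification", model="sam34738/xlm-roberta-hindi-nisha")
print(classifier("यह फिल्म बहुत अच्छी थी!"))
```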
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1429 | 1.0 | 460 | 0.7002 |
| 0.5404 | 2.0 | 920 | 0.5305 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
IDEA-CCNL/Erlangshen-Deberta-97M-Chinese | ad248fffb7a0a2536f5bb8a9aaee3faf39ee212b | 2022-07-19T08:57:57.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"zh",
"transformers",
"bert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-Deberta-97M-Chinese | 37 | 1 | transformers | 6,677 | ---
language:
- zh
license: apache-2.0
tags:
- bert
inference: true
widget:
- text: "生活的真谛是[MASK]。"
---
# Erlangshen-Deberta-97M-Chinese,one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
This is a 97-million-parameter DeBERTa-v2 base model with an encoder-only transformer structure, pre-trained on 180 GB of Chinese data for 7 days on 24 A100 (40 GB) GPUs, consuming about 1B samples in total.
## Task Description
Erlangshen-Deberta-97M-Chinese is pre-trained with a BERT-style masked language modelling task, following DeBERTa ([paper](https://readpaper.com/paper/3033187248)).
## Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
import torch
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Deberta-97M-Chinese', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-Deberta-97M-Chinese')
text = '生活的真谛是[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer, device=7)  # set device to your GPU index, or -1 for CPU
print(fillmask_pipe(text, top_k=10))
```
## Finetune
We present the dev results on some tasks.
| Model | OCNLI | CMNLI |
| ---------------------------------- | ----- | ------ |
| RoBERTa-base | 0.743 | 0.7973 |
| **Erlangshen-Deberta-97M-Chinese** | 0.752 | 0.807 |
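The table above reports scores only. As a minimal sketch of how such a fine-tune could be set up with standard 🤗 Transformers (the 3-label NLI head and the example sentence pair are assumptions, not taken from this card):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Hypothetical NLI setup: a 3-way classification head (entailment / neutral / contradiction)
# is an assumption for OCNLI/CMNLI-style data, not something specified on this card.
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-Deberta-97M-Chinese", use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(
    "IDEA-CCNL/Erlangshen-Deberta-97M-Chinese", num_labels=3
)
# Encode a premise/hypothesis pair and run a forward pass.
inputs = tokenizer("一个人在弹吉他。", "有人在演奏乐器。", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```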
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
YYAH/Bert-RU | d9a6d3cbcbfcd190895aaed0c12c1c79c3167c0d | 2022-07-25T15:23:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | YYAH | null | YYAH/Bert-RU | 37 | null | transformers | 6,678 | Entry not found |
Frikallo/vgdunkey-vgdunkeybot | 0f3e95368e227f8a905ba9f171154c25fec9ebc7 | 2022-07-29T08:41:49.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Frikallo | null | Frikallo/vgdunkey-vgdunkeybot | 37 | null | transformers | 6,679 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: vgdunkey-vgdunkeybot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vgdunkey-vgdunkeybot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 2843356107
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BritishLibraryLabs/bl-books-genre | b1e0772fb3473db65b0fa6cca03af6817f722b6d | 2022-01-20T14:00:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"multilingual",
"transformers",
"genre",
"books",
"library",
"historic",
"glam",
"license:mit"
] | text-classification | false | BritishLibraryLabs | null | BritishLibraryLabs/bl-books-genre | 36 | 1 | transformers | 6,680 | ---
language: multilingual
tags:
- genre
- books
- library
- historic
- glam
license: mit
metrics:
- f1
widget:
- text: "Poems on various subjects. Whereto is prefixed a short essay on the structure of English verse"
- text: "Two Centuries of Soho: its institutions, firms, and amusements. By the Clergy of St. Anne's, Soho, J. H. Cardwell ... H. B. Freeman ... G. C. Wilton ... assisted by other contributors, etc"
- text: "The Adventures of Oliver Twist. [With plates.]"
---
# British Library Books Genre Detector
**Note** this model card is a work in progress.
## Model description
This fine-tuned [`distilbert-base-cased`](https://huggingface.co/distilbert-base-cased) model is trained to predict whether a book from the [British Library's](https://www.bl.uk/) [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection is `fiction` or `non-fiction` based on the title of the book.
## Intended uses & limitations
This model was trained on data created from the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. The datasets in this collection are derived from 49,455 digitised books (65,227 volumes), largely from the 19th century. The data is dominated by English-language books but also includes books in a number of other languages in much smaller numbers. Whilst a subset of this data has metadata relating to genre, the majority of the dataset does not currently contain this information.
This model was originally developed for use as part of the [Living with Machines](https://livingwithmachines.ac.uk/) project in order to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was `fiction` or `non-fiction`.
Particular areas where the model might be limited are:
### Title format
The model's training data (discussed more below) primarily consists of 19th-century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing vary across institutions, running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.
To give an example of the types of titles included in the training data, here are some randomly selected examples:
- The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]
- A new musical Interlude, called the Election [By M. P. Andrews.]
- An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]
- The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt]. F.P
- A Little Book of Verse, etc
- The Autumn Leaf Poems
- The Battle of Waterloo, a poem
- Maximilian, and other poems, etc
- Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect
- The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben
### Date
The model was trained on data that spans the collection period of the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. This dataset covers a broad period (from 1500 to 1900) but is skewed towards later years. The subset of training data (i.e. data with genre annotations) used to train this model has the following distribution of dates:
| | Date |
|-------|------------|
| mean | 1864.83 |
| std | 43.0199 |
| min | 1540 |
| 25% | 1847 |
| 50% | 1877 |
| 75% | 1893 |
### Language
Whilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts are as follows:
| Language | Count |
|---------------------|-------|
| English | 22987 |
| Russian | 461 |
| French | 424 |
| Spanish | 366 |
| German | 347 |
| Dutch | 310 |
| Italian | 212 |
| Swedish | 186 |
| Danish | 164 |
| Hungarian | 132 |
| Polish | 112 |
| Latin | 83 |
| Greek,Modern(1453-) | 42 |
| Czech | 25 |
| Portuguese | 24 |
| Finnish | 14 |
| Serbian | 10 |
| Bulgarian | 7 |
| Icelandic | 4 |
| Irish | 4 |
| Hebrew | 2 |
| NorwegianNynorsk | 2 |
| Lithuanian | 2 |
| Slovenian | 2 |
| Cornish | 1 |
| Romanian | 1 |
| Slovak | 1 |
| Scots | 1 |
| Sanskrit | 1 |
#### How to use
There are a few different ways to use the model. To run the model locally the easiest option is to use the 🤗 Transformers [`pipelines`](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("davanstrien/bl-books-genre")
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/bl-books-genre")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Oliver Twist")
```
This returns a list containing a dictionary with the predicted label and score
```
[{'label': 'Fiction', 'score': 0.9980145692825317}]
```
If you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions. As the model was trained on a specific corpus of book titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differ from those in the training corpus.
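As a rough sketch of such a validation check (the hand-labelled titles below are illustrative assumptions, not drawn from the training data):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="davanstrien/bl-books-genre")
# Hypothetical hand-labelled sample: replace with titles and labels from your own catalogue.
sample = [
    ("The History of the Decline and Fall of the Roman Empire", "Non-Fiction"),
    ("The Adventures of Oliver Twist. [With plates.]", "Fiction"),
]
correct = 0
for title, expected in sample:
    predicted = classifier(title)[0]["label"]
    correct += int(predicted == expected)
    print(f"{title!r}: predicted={predicted}, expected={expected}")
print(f"Agreement on this sample: {correct}/{len(sample)}")
```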
## Training data
The training data for this model will soon be available from the British Library Research Repository. This section will be updated once this dataset is made public.
The training data was created using the [Zooniverse platform](https://www.zooniverse.org/) and the annotations were done by cataloguers from the [British Library](https://www.bl.uk/). [Snorkel](https://github.com/snorkel-team/snorkel) was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations will soon be available as part of a series of tutorials documenting this piece of work.
## Training procedure
The model was trained using the [`blurr`](https://github.com/ohmeow/blurr) library. A notebook showing the training process will be made available soon.
## Eval results
The results of the model on a held-out training set are:
```
precision recall f1-score support
Fiction 0.88 0.97 0.92 296
Non-Fiction 0.98 0.93 0.95 554
accuracy 0.94 850
macro avg 0.93 0.95 0.94 850
weighted avg 0.95 0.94 0.94 850
```
As discussed briefly in the bias and limitations sections of this model card, these results should be treated with caution. |
Helsinki-NLP/opus-mt-en-az | 5df6d05df97055aea33ee4120019feff558974b8 | 2021-01-18T08:05:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"az",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-az | 36 | null | transformers | 6,681 | ---
language:
- en
- az
tags:
- translation
license: apache-2.0
---
### eng-aze
* source group: English
* target group: Azerbaijani
* OPUS readme: [eng-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): aze_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.aze | 18.6 | 0.477 |
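This card does not include a usage snippet; a minimal sketch using the standard MarianMT interface in 🤗 Transformers (the example sentence is arbitrary) might look like this:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-en-az"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Translate an arbitrary English sentence into Azerbaijani.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```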
### System Info:
- hf_name: eng-aze
- source_languages: eng
- target_languages: aze
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'az']
- src_constituents: {'eng'}
- tgt_constituents: {'aze_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: aze
- short_pair: en-az
- chrF2_score: 0.47700000000000004
- bleu: 18.6
- brevity_penalty: 1.0
- ref_len: 13012.0
- src_name: English
- tgt_name: Azerbaijani
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: az
- prefer_old: False
- long_pair: eng-aze
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-mk | d5e09817f85f0b89f81a82d5ae217209d15ce05d | 2021-09-09T21:37:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"mk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-mk | 36 | null | transformers | 6,682 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-mk
* source languages: en
* target languages: mk
* OPUS readme: [en-mk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.mk | 52.1 | 0.683 |
|
Helsinki-NLP/opus-mt-en-sem | 7f58a24935a49971fb80ad54ed3fcda545f0035f | 2021-01-18T08:15:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"mt",
"ar",
"he",
"ti",
"am",
"sem",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sem | 36 | null | transformers | 6,683 | ---
language:
- en
- mt
- ar
- he
- ti
- am
- sem
tags:
- translation
license: apache-2.0
---
### eng-sem
* source group: English
* target group: Semitic languages
* OPUS readme: [eng-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sem/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz heb mlt tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.2 | 0.480 |
| Tatoeba-test.eng-ara.eng.ara | 12.7 | 0.417 |
| Tatoeba-test.eng-heb.eng.heb | 33.8 | 0.564 |
| Tatoeba-test.eng-mlt.eng.mlt | 18.7 | 0.554 |
| Tatoeba-test.eng.multi | 23.5 | 0.486 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.248 |
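Because this model covers several target languages, the `>>id<<` token noted above must be prepended to the source text; a minimal sketch (Hebrew is chosen arbitrarily as the target) might look like this:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-en-sem"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# The leading >>heb<< token selects Hebrew as the target language.
src_text = [">>heb<< The weather is nice today."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```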
### System Info:
- hf_name: eng-sem
- source_languages: eng
- target_languages: sem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'mt', 'ar', 'he', 'ti', 'am', 'sem']
- src_constituents: {'eng'}
- tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sem
- short_pair: en-sem
- chrF2_score: 0.486
- bleu: 23.5
- brevity_penalty: 1.0
- ref_len: 59258.0
- src_name: English
- tgt_name: Semitic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sem
- prefer_old: False
- long_pair: eng-sem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eo-de | dc99fd61904de5b6fcf842167c6b5cdee70457a8 | 2021-09-09T21:40:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-de | 36 | null | transformers | 6,684 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-eo-de
* source languages: eo
* target languages: de
* OPUS readme: [eo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.de | 45.5 | 0.644 |
|
Nomi97/Chatbot_QA | 5fd705db355abde8649290e2b080c436baff3628 | 2020-07-06T13:38:50.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Nomi97 | null | Nomi97/Chatbot_QA | 36 | null | transformers | 6,685 | Entry not found |
TransQuest/monotransquest-da-ro_en-wiki | 684be6b6c2523ab2f0763ed06acd74b139f7e36a | 2021-06-03T19:08:40.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ro-en",
"transformers",
"Quality Estimation",
"monotransquest",
"DA",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-da-ro_en-wiki | 36 | null | transformers | 6,686 | ---
language: ro-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-ro_en-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
aware-ai/xlmroberta-QA | 04ddaff578c11772b3ac9ec3a97c8aa9a5235e82 | 2020-07-07T10:05:15.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aware-ai | null | aware-ai/xlmroberta-QA | 36 | 1 | transformers | 6,687 | Entry not found |
alexcleu/wav2vec2-large-xlsr-polish | 3d530ea46d94be16a03b16fca4708a86e6cf7218 | 2021-07-05T19:07:31.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | alexcleu | null | alexcleu/wav2vec2-large-xlsr-polish | 36 | null | transformers | 6,688 | ---
language: pl
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2vec2 Large 53 Polish by Alex Leu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pl
type: common_voice
args: pl
metrics:
- name: Test WER
type: wer
value: 24.846030
---
# wav2vec2-large-xlsr-polish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Polish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.846030
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
anirudh21/albert-large-v2-finetuned-mnli | 866c382134ff198ffcda8cd2a8ccdaa4b3b061ba | 2022-02-01T19:12:55.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
] | text-classification | false | anirudh21 | null | anirudh21/albert-large-v2-finetuned-mnli | 36 | null | transformers | 6,689 | Entry not found |
btk/gpt100k | c7d9614154b7a54126e0a8e2759e20e78a2d50f4 | 2021-05-21T14:26:30.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | btk | null | btk/gpt100k | 36 | null | transformers | 6,690 | Entry not found |
butchland/bert-finetuned-ner | 639de575a6990aead9946a027116e8bc166101d2 | 2021-12-17T15:53:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | butchland | null | butchland/bert-finetuned-ner | 36 | null | transformers | 6,691 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9389679126695336
- name: Recall
type: recall
value: 0.9554022214742511
- name: F1
type: f1
value: 0.9471137804471137
- name: Accuracy
type: accuracy
value: 0.9873138282215812
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0586
- Precision: 0.9390
- Recall: 0.9554
- F1: 0.9471
- Accuracy: 0.9873
## Model description
More information needed
## Intended uses & limitations
More information needed
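The sections above were left unfilled by the training script; as a minimal inference sketch (the aggregation strategy and example sentence are arbitrary choices; the entity labels follow the CoNLL-2003 scheme of the training data):
```python
from transformers import pipeline
# Token-classification pipeline; "simple" aggregation merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="butchland/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Clara and I live in Berkeley, California."))
```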
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0877 | 1.0 | 1756 | 0.0662 | 0.9081 | 0.9344 | 0.9210 | 0.9827 |
| 0.0376 | 2.0 | 3512 | 0.0599 | 0.9362 | 0.9502 | 0.9431 | 0.9862 |
| 0.0209 | 3.0 | 5268 | 0.0586 | 0.9390 | 0.9554 | 0.9471 | 0.9873 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cardiffnlp/twitter-roberta-base-sep2020 | 24a573219ff3cd246f022b39dfbc2e29b01e5f4f | 2022-02-09T11:14:34.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-sep2020 | 36 | null | transformers | 6,692 | # Twitter September 2020 (RoBERTa-base, 103M)
This is a RoBERTa-base model trained on 102.86M tweets until the end of September 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-sep2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.55215 not
2) 0.16466 getting
3) 0.08991 fully
4) 0.05542 being
5) 0.01733 still
------------------------------
I keep forgetting to bring a <mask>.
1) 0.18145 mask
2) 0.04476 book
3) 0.03751 knife
4) 0.03713 laptop
5) 0.02873 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.53243 the
2) 0.24435 The
3) 0.04717 End
4) 0.02421 this
5) 0.00958 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-sep2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99045 The movie was great
2) 0.96650 Just finished reading 'Embeddings in NLP'
3) 0.95947 I just ordered fried chicken 🐣
4) 0.95707 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-sep2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
cardiffnlp/twitter-roberta-base-sep2021 | 01e6dc6e35e03bb1d7ea1ff00ecdc6459ce7aec3 | 2022-02-09T11:16:24.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-sep2021 | 36 | null | transformers | 6,693 | # Twitter September 2021 (RoBERTa-base, 120M)
This is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.39329 fully
2) 0.26694 getting
3) 0.17438 not
4) 0.03422 still
5) 0.01845 all
------------------------------
I keep forgetting to bring a <mask>.
1) 0.06773 mask
2) 0.04548 book
3) 0.03826 charger
4) 0.03506 backpack
5) 0.02997 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.63009 the
2) 0.16154 The
3) 0.02110 this
4) 0.01903 End
5) 0.00810 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99022 The movie was great
2) 0.96274 Just finished reading 'Embeddings in NLP'
3) 0.96006 I just ordered fried chicken 🐣
4) 0.95725 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
classla/bcms-bertic-generator | e4e0c2901fac6a67e5710ec893fc9451630fa19b | 2021-05-21T13:29:30.000Z | [
"pytorch",
"electra",
"pretraining",
"hr",
"bs",
"sr",
"cnr",
"hbs",
"transformers",
"masked-lm",
"license:apache-2.0"
] | null | false | classla | null | classla/bcms-bertic-generator | 36 | 1 | transformers | 6,694 | ---
language:
- hr
- bs
- sr
- cnr
- hbs
tags:
- masked-lm
widget:
- text: "Zovem se Marko i radim u [MASK]."
license: apache-2.0
---
# BERTić* [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian
* The name reflects the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).
This is the smaller generator of the main [discriminator model](https://huggingface.co/classla/bcms-bertic), useful if you want to continue pre-training the discriminator model.
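No usage snippet is provided here; since the generator is a masked language model, a minimal fill-mask sketch (reusing the sentence from the widget above) could look like this:
```python
from transformers import pipeline
# The ELECTRA generator head can be used directly for masked-token prediction.
fill_mask = pipeline("fill-mask", model="classla/bcms-bertic-generator")
for prediction in fill_mask("Zovem se Marko i radim u [MASK]."):
    print(f"{prediction['score']:.4f}  {prediction['sequence']}")
```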
If you use the model, please cite the following paper:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
|
facebook/s2t-wav2vec2-large-en-tr | ae0ccd057a5c698ddb7fd439c9238ae49b8865d8 | 2021-11-14T20:39:59.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"en",
"tr",
"dataset:covost2",
"dataset:librispeech_asr",
"arxiv:2104.06678",
"transformers",
"audio",
"speech-translation",
"speech2text2",
"license:mit"
] | automatic-speech-recognition | false | facebook | null | facebook/s2t-wav2vec2-large-en-tr | 36 | 2 | transformers | 6,695 | ---
language:
- en
- tr
datasets:
- covost2
- librispeech_asr
tags:
- audio
- speech-translation
- automatic-speech-recognition
- speech2text2
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Common Voice 1
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_99989.mp3
- example_title: Common Voice 2
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_99986.mp3
- example_title: Common Voice 3
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_99987.mp3
---
# S2T2-Wav2Vec2-CoVoST2-EN-TR-ST
`s2t-wav2vec2-large-en-tr` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Turkish text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-tr", feature_extractor="facebook/s2t-wav2vec2-large-en-tr")
translation = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-tr")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-tr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-tr (BLEU score): **17.5**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
giacomomiolo/scibert_reupload | e3e95d2b36223eaa73a25de6de157fda8a1a697b | 2021-05-19T17:19:25.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | giacomomiolo | null | giacomomiolo/scibert_reupload | 36 | null | transformers | 6,696 | Entry not found |
google/pegasus-big_patent | a127b8185d15d2ca5eb56198eb31394a2b057abc | 2020-10-22T16:33:21.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/pegasus-big_patent | 36 | 1 | transformers | 6,697 | Entry not found |
huggingtweets/studiocanaluk | fc2ef2798237475275c18e932e98b72ee2e32a99 | 2021-12-10T22:08:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/studiocanaluk | 36 | null | transformers | 6,698 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1302895184070483968/nK3jFcnc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">StudiocanalUK</div>
<div style="text-align: center; font-size: 14px;">@studiocanaluk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from StudiocanalUK.
| Data | StudiocanalUK |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 529 |
| Short tweets | 226 |
| Tweets kept | 2479 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3j3agdl5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @studiocanaluk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28qyfq4n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28qyfq4n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/studiocanaluk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jinmang2/textcnn-ko-dialect-classifier | 4972105e9b86a2a0ea1809fe493e46fe8a62d0f6 | 2022-01-01T08:11:25.000Z | [
"pytorch",
"text-classification",
"transformers"
] | text-classification | false | jinmang2 | null | jinmang2/textcnn-ko-dialect-classifier | 36 | null | transformers | 6,699 | Entry not found |